How To Become A Hacker – CyberSecurity Careers

With Cybersecurity Career Talks

Do you want to know how to become a hacker? Let us learn from world-renowned hackers, cybersecurity experts, and social engineering experts. Adriel Desautels, Jayson E. Street, and Philippe Caturegli share the mindset, training, experience, and education (if any) required for a cybersecurity career.

Who is a hacker? A person who finds innovative ways of solving problems. We cover the attributes required for breaking into cybersecurity: mindset, soft skills, and other requirements. How important are networking and social media in a job search? Tips for a career changer. We will discuss what works for these experts, what they have seen work for friends, and how you can plan a successful cybersecurity career.

Hacking Casinos with Zero Day Exploits


Most popular email programs like Microsoft Outlook, Apple Mail, Thunderbird, etc. have a convenient feature that enables them to remember the email addresses of people who have been emailed. Without this feature, people would need to recall email addresses from memory or copy and paste them from an address book. This same feature enables hackers to secretly breach networks using a technique that we created back in 2006 and named Email Seeding.

This article explains how we used Email Seeding to breach the network of a well-known and otherwise well protected casino.  As is always the case, this article has been augmented to protect the identity of our customer.

Let's begin…
Our initial objective was to gather intelligence about the casino’s employees.  To accomplish this, we developed a proprietary LinkedIn tool that uses the name or domain of a company and extracts employee information.  The information is compiled into a dossier of sorts that contains the name, title, employment history and contact information for each targeted individual.  Email address structure is automatically determined by our tool.
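As an illustration of how email address structure can be inferred from harvested names, the sketch below expands one employee name into common corporate address formats. The pattern list and `example.com` domain are assumptions for illustration, not the tool's actual logic:

```python
# Hypothetical sketch of deriving likely corporate email addresses from
# harvested employee names. Patterns and domain are illustrative assumptions.

COMMON_PATTERNS = [
    "{first}.{last}",  # jack.smith@example.com
    "{f}{last}",       # jsmith@example.com
    "{first}{l}",      # jacks@example.com
    "{first}",         # jack@example.com
]

def candidate_addresses(full_name: str, domain: str) -> list[str]:
    """Expand one employee name into likely address candidates."""
    parts = full_name.lower().split()
    first, last = parts[0], parts[-1]
    fields = {"first": first, "last": last, "f": first[0], "l": last[0]}
    return [p.format(**fields) + "@" + domain for p in COMMON_PATTERNS]
```

Confirming which pattern a company actually uses still requires validating the candidates against a live mail system.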

It is to our advantage if our customers use Google Apps, as was the case with the casino. This is because Google suffers from a username-enumeration vulnerability that allows hackers to extract valid email addresses. For example, if we enter [email protected] and the address does not exist, then we get an error. If we enter the same address and it does exist, we don't get an error. Our LinkedIn tool has native functionality that leverages this vulnerability, which allows us to compile a targeted list of email addresses for Spear Phishing and/or Social Engineering.
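The filtering step this enables can be sketched generically. In the sketch below, `address_exists` is a stand-in oracle with made-up data; the provider-specific enumeration behavior itself is deliberately not reproduced:

```python
# Sketch: reduce a list of guessed addresses to the ones a provider confirms.
# `address_exists` is a placeholder for the real check described above
# (error for nonexistent addresses, no error for real ones); the data set
# here is fabricated for illustration.

KNOWN_ADDRESSES = {"jack.smith@example.com", "brian@example.com"}

def address_exists(address: str) -> bool:
    """Stand-in for the provider-specific existence check."""
    return address in KNOWN_ADDRESSES

def filter_valid(candidates: list[str]) -> list[str]:
    """Keep only addresses the oracle confirms, preserving order."""
    return [a for a in candidates if address_exists(a)]
```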

We used this tool to compile a target list for the casino. Then we assembled an offensive micro-infrastructure to support a chameleon domain and its associated services. The first step in this process is to register a chameleon domain, which is a domain designed to impersonate a legitimate domain (SSL certificates and all). Historically this was accomplished using a now-obsolete IDN homoglyph attack. Today we rely on psychological trickery and exploit the tendency of the human brain to autocorrect incorrectly spelled names while perceiving them as correct.

For example, let's pretend that our casino's name is Acme Corporation and that their domain is acme-corporation.com. A good chameleon domain would be acme-corporatoin.com or acme-croporation.com, which are both different from acme-corporation.com (read them carefully). This technique works well for longer and more obscure domains but is less ideal for shorter domains like acme.com, for example. We have tactics for domains like that but won't discuss them here.
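The transposition trick can be sketched as a simple generator. Here `acme-corporation.com` is a hypothetical stand-in for the article's fictitious Acme Corporation, and the generator only swaps adjacent letters, which is one of several possible misspelling strategies:

```python
# Sketch: generate "chameleon" candidates by transposing adjacent letters,
# exploiting the brain's tendency to autocorrect misspelled names.

def transposition_variants(domain: str) -> list[str]:
    """Return domains that differ by one adjacent-letter swap."""
    name, _, tld = domain.rpartition(".")
    variants = []
    for i in range(len(name) - 1):
        a, b = name[i], name[i + 1]
        # Skip identical letters (no-op swap) and non-letters like '-'.
        if a != b and a.isalpha() and b.isalpha():
            variants.append(name[:i] + b + a + name[i + 2:] + "." + tld)
    return variants
```

In practice a human would then pick the variant that reads most convincingly at a glance before registering it.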
There are a multitude of advantages to using a chameleon domain over traditional email spoofing techniques.  One example is that chameleon domains are highly interactive.  Not only can we send emails from chameleon domains but we can also receive emails.   This high-interaction capability helps to facilitate high-threat Social Engineering attacks.  Additionally, because chameleon domains are actually real domains they can be configured with SPF records, DKIM, etc.  In fact, in many cases we will even purchase SSL certificates for our chameleon domains.  All of these things help to create a credible infrastructure.  Finally, we always configure our chameleon domains with a catchall email address.  This ensures that any emails sent to our domain will be received.
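To illustrate the kind of records involved, a minimal zone fragment for a hypothetical chameleon domain might look like the following (all names and values are placeholders, not real infrastructure):

```
; Hypothetical records for a chameleon domain (placeholder values)
acme-corporatoin.com.        IN MX   10 mail.acme-corporatoin.com.
acme-corporatoin.com.        IN TXT  "v=spf1 mx -all"           ; SPF
selector1._domainkey         IN TXT  "v=DKIM1; k=rsa; p=..."    ; DKIM public key
```

The catch-all itself lives in the mail server configuration, e.g. a wildcard alias that routes any address at the domain to a single monitored mailbox.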

Netragard maintains active contracts with various Virtual Private Server (VPS) providers.  These providers enable us to spin up and spin down chameleon infrastructures in short time.  They also enable us to spin up and spin down distributed platforms for more advanced things like distributed attacking, IDS/IPS saturation, etc. When we use our email seeding methodology we spin up a micro-infrastructure that offers DNS, Email, Web, and a Command & Control server for RADON.
For the casino, we deployed an augmented version of BIND combined with something similar to honeytokens so that we could geographically locate our human targets. Geolocation is important for impersonation as it helps to avoid accidental face-to-face meetings. For example, if we're impersonating John to attack Sally and they bump into each other at the office, then there's a high risk of operational exposure.

With the micro-infrastructure configured we began geolocating employees.  This was accomplished in part with social media platforms like Twitter, Facebook, etc. The employees that could not be located with social media were located using a secondary email campaign.  The campaign used a unique embedded tracker URL and tracker image.  Any time the host associated with the URL was resolved our DNS server would tell us what IP address the resolution was done from.  If the image was loaded (most were) then we’d receive the IP address as well as additional details about the browser, operating system in use by our target, etc.  We used the IP addressing information to plot rough geographic locations.
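The tracker mechanics described above can be sketched as follows. The zone name, data structures, and logging format are illustrative assumptions rather than the actual tooling:

```python
# Sketch of per-target DNS trackers: each recipient gets a unique hostname
# under a domain we control, so any DNS resolution of that name reveals the
# resolver IP used by the target (or their network).

import secrets
from typing import Optional

TRACKERS: dict[str, str] = {}  # tracker hostname -> target email

def make_tracker(target_email: str, zone: str = "t.example.com") -> str:
    """Mint a unique hostname to embed in the email sent to one target."""
    host = f"{secrets.token_hex(8)}.{zone}"
    TRACKERS[host] = target_email
    return host

def on_dns_query(hostname: str, resolver_ip: str) -> Optional[str]:
    """Hook called by the authoritative DNS server for each incoming query."""
    target = TRACKERS.get(hostname)
    if target is not None:
        # In the real setup this IP feeds a rough geolocation lookup.
        return f"{target} triggered a lookup from {resolver_ip}"
    return None
```

The tracker image works the same way but additionally leaks browser and operating system details through the HTTP request headers.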

When we evaluated the data that we collected we found that the casino's employees (and contractors) worked from a variety of different locations. One employee, Jack Smith, was particularly appealing because of his title, "Security Manager", and because his LinkedIn profile talked about incident response and other related things. It also appeared that Jack worked in a location geographically distant from many potential targets. Jack became our primary choice for impersonation.
With Jack selected, we emailed 15 employees from [email protected]. That email address is a chameleon address; note the "ec" is inverted to "ce". Jack's real email address would be [email protected]. While we can't disclose the content of the email that we used, it was something along the lines of:
“Hi <name>, did you get my last email?” 
Almost immediately after sending the email we received 3 out-of-office auto-responses. By the end of the next day we had received 12 human responses, indicating a 100% response rate. The 12 human responses were exciting because chances were high that we had successfully seeded our targets with Jack's fake chameleon address.

After 4 days we received an email from an employee named Brian with the title “Director of IT Security”. Brian emailed us rather than emailing the real Jack because his email client auto-completed Jack’s email with our seeded address rather than Jack’s real one. Attached to the email was a Microsoft Word document.  When we opened the document we realized that we were looking at an incident report that Jack had originally emailed to Brian for comment.

While the report provided a treasure trove of information that would have been useful for carrying out a multitude of different attacks, the document and the trust relationship between Jack and Brian were far more interesting. For most customers we'd simply embed malware (RADON) into a document and use macros or some other low-tech method of execution. For this customer, given that they were a high-profile casino with high-value targets, we decided to use a zeroday exploit for Microsoft Word rather than something noisy like a macro.

While the exploit was functional it was not flawless. Despite this we were confident that exploitation would be successful. The payload for the exploit was RADON, our home-grown zeroday malware, and it was configured to connect back to our command and control server using one of three different techniques. Each of the three techniques uses common network protocols and communicates using methods that appear normal so as to evade detection. The exact details of these techniques aren't something that we share because we use them regularly.

We delivered our now weaponized Microsoft Word document back to Brian with an email that suggested more updates were made.  Within 10 minutes of delivery RADON called home and we took covert control of Brian’s corporate desktop.
The next step was to move laterally and infect a few more targets to ensure that we maintained access to the casino's LAN. The normal process for doing this would be to scan/probe the network and identify new targets. We wanted to proceed with caution because we didn't know if the casino had any solutions in place to detect lateral movement. So, to maintain stealth, rather than scanning the internal network we sniffed and monitored all network connections.

In addition to sniffing, our team also searched Brian’s computer for intelligence that would help to facilitate lateral movement.  Searching was carried out with extreme care as to avoid accessing potential bait files.  Bait files, when accessed, will trigger an alarm that alerts administrators and we could not afford to get caught in such early stages.  Aside from collecting network and filesystem information we also took screenshots every minute, activated Brian’s microphone, took frequent web-cam photographs and recorded his keystrokes using RADON.
After a few hours of automated reconnaissance, we began to analyze our findings.  One of the first things that caught our attention was a screen shot of Brian using TeamViewer.  This prompted us to search our keylogger recordings for Brian’s TeamViewer credentials and when we did we found them in short time.  We used his captured credentials to login to TeamViewer and were presented with a long list of servers belonging to the casino.  What was even more convenient was that credentials for those servers were stored in each server profile so all we had to do was click and pwn.  It was like Christmas for Hackers!

Our method from that point forward was simple. We'd connect to a server, deploy RADON, and use RADON to gather files, credentials, screenshots, etc. Within 30 minutes we went from having a single point of access to having more control over the casino's network than their own IT department. This was in large part because our control was completely centralized thanks to RADON and we weren't limited by corporate policies, rules, etc. (We are the hackers after all.)

This was the first casino that we encountered with such a wide-scale deployment of TeamViewer.  When we asked our customer why they were using TeamViewer in this manner their answer was surprising.  The casino’s third party IT support company recommended that they use TeamViewer in place of RDP suggesting that it was more secure.  We of course demonstrated that this was not the case.  With our direction the casino removed TeamViewer and now requires all remote access to be handled over VPN with 2 factor authentication and RDP.

For the sake of clarity, much more work was done for the casino than what was discussed here. We don't simply hack our clients, say thank you, and leave them hanging. We provide our customers with detailed custom reports and, if required, assistance with hardening. With that explained, this article was written with a specific focus on email seeding. We felt that, given the current threat landscape, this was a good thing to be aware of because it makes for an easy breach.

Hackers - Vulnerability Disclosures

Protect Yourself – Chronicle’s 4-Part Video Series

This first clip focuses on confidence tricks (Social Engineering) which is something that we also do when we deliver Realistic Threat Penetration Tests to our customers. Our objective when using social engineering isn’t to con our customers out of money but instead to trick them into doing things that enable us access to their corporate network. This can include stealing passwords, deploying malware, or simply convincing someone to grant us access.

This second clip focuses on COVID-19 and the new work-from-home requirement. Our CEO (Adriel Desautels) was interviewed for this segment and asked what it was that we saw on the front lines in terms of bad actors taking advantage of this crisis. During this clip Adriel drops a hint about a new blog entry where we talk about having used the pandemic and the PPE shortages to compromise a healthcare customer via a virtual meeting. We also discuss again how security software does not provide the level of protection that it promises.

This clip talks about robocalls and how people are often taken advantage of by these calls. There's a great app that you can install on your iPhone called RoboKiller. RoboKiller answers robocalls for you and engages the caller in a fake conversation, which consumes their time and resources. We suggest that people use an app like RoboKiller or, if they are using an iPhone, simply block calls from unknown numbers.

This last segment contains a question and answer section where everyday people pose questions to various security experts including our CEO.

Bug Bounties

The dark side of bug bounties

Bug Bounty companies (often called crowd sourced penetration tests) are all the hype.  The primary argument for using their services is that they provide access to a large crowd of testers, which purportedly means that customers will always have a fresh set of eyes looking for bugs.  They also argue that traditional penetration testing teams are finite and, as a result, tend to go stale in terms of creativity, depth, and coverage.  While these arguments seem to make sense at face value, are they accurate?

Penetration Testing Company

The first thing to understand is that the quality of any penetration test isn't determined by the volume of potential testers, but instead by their experience, talent, and overall capabilities. A large group of testers with average talent will never outperform a small group of highly talented testers in terms of depth and quality. A great parallel example of this is when the world's largest orchestra played the ninth symphonies of Dvořák and Beethoven. While that orchestra was made up of 7,500 members, the quality of its performance was nothing compared to that produced by the Boston Symphony Orchestra (which is made up of 91 musicians).

Interestingly, it appears that bug hunters are incentivized to spend as little time as possible per bounty. This is because bug hunters need to maintain a profitable hourly rate while working or their work won't be worth their time. For example, a bug hunter might spend 15 minutes to find a bug and collect a $4,000.00 bounty, which is an effective rate of $16,000.00 per hour! In other cases, a bug hunter might spend 40 hours to find a bug and collect a $500.00 bounty, which is a measly $12.50 per hour in comparison. Even worse, they might spend copious time finding a complex bug only to learn that it is a duplicate and collect no bounty (wasted time).
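The incentive math above is straightforward to make concrete:

```python
# Effective hourly rate for a bug hunter: bounty value divided by time spent.
# Figures are the two examples discussed above.

def effective_hourly_rate(bounty_usd: float, hours_spent: float) -> float:
    """Dollars earned per hour of hunting for a single bounty."""
    return bounty_usd / hours_spent

fast = effective_hourly_rate(4000, 0.25)  # 15-minute find: 16000.0 $/hr
slow = effective_hourly_rate(500, 40)     # 40-hour find: 12.5 $/hr
```

A duplicate finding has a bounty of zero, so any time invested drives the rate to zero, which is exactly the incentive against deep, time-consuming research.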

This argument is further supported when we appraise the quality of bugs disclosed by most bug bounty programs.  We find that most of the bugs are rudimentary in terms of ease of discovery, general complexity, and exploitability.  The bugs regularly include cross-site scripting vulnerabilities, SQL injection vulnerabilities, easily spotted configuration mistakes, and other common problems.  On average they appear to be somewhat more complex than what might be discovered using industry standard automated vulnerability scanners and less complex than what we’ve seen exploited in historical breaches.  To be clear, this doesn’t suggest that all bug hunters are low talent individuals, but, instead, that they are not incentivized to go deep.

In contrast to bug bounty programs, genuine penetration testing firms are incentivized to bolster their brand by delivering depth, quality, and maximal coverage to their customers.  Most operate under a fixed cost agreement and are not rewarded based on volume of findings, but instead by the repeat business that is earned through the delivery of high-quality services.  They also provide substantially more technical and legal safety to their customers than bug bounty programs do.

For example, we evaluated the terms and conditions of several bug bounty companies and what we learned was surprising. Unlike traditional penetration testing companies, bug bounty companies do not accept any responsibility for the damages or losses that might result from the use of their services. They explicitly state that the bug hunters are independent third parties and that any remedy a customer seeks with respect to loss or damages is limited to a claim against that bug hunter. What's more, the vetting process for bug hunters is lax at best. In nearly all cases background checks are not run, and even when they are run the bug hunter could provide a false identity. Validation of who a bug hunter really is is also lacking: to sign up for most programs you simply need to validate your email address. In simple terms, organizations that use bug bounty programs accept all risk and have no realistic legal recourse, even if a bug hunter acts in a malicious manner.

To put this into context, bug bounty programs effectively provide anyone on the internet with a legitimate excuse to attack your infrastructure.  Since these attacks are expected as a part of the bug bounty program, it may impact your ability to differentiate between an actual attack and an attack from a legitimate bug hunter.  This creates an ideal opportunity for bona fide malicious actors to hide behind bug bounty programs while working to steal your data. When you combine this, with the fact that it takes an average of ~200 days for most organizations to detect a breach, the risk becomes even more apparent.

There’s also the issue of GDPR. GDPR increases the value of personal data on the black market and to organizations alike.  Under GDPR, if personal data of a European citizen is breached, the organization that suffered the breach can face heavy fines, penalties, and more. In article 4 of the GDPR, a personal data breach is defined as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to personal data transmitted, stored or otherwise processed”. While bug bounty programs target configurations, systems, and implementations, they do not incentivize bug hunters to go after personal data.  However, because of GDPR, a malicious bug hunter who exploits a vulnerability that discloses personal data (accidental or not), may be incentivized to ransom their finding for a higher dollar value. Likewise, organizations might be incentivized to pay the ransom and report it as a bounty to avoid having to notify the Data Protection Authorities (“DPA”) as is required by GDPR.

On a positive note, many of our customers use bug bounty programs in tandem with our Realistic Threat Penetration Testing services. Customers who use bug bounty programs have far fewer vulnerabilities in terms of low-hanging fruit than ones who don't. In fact, we are confident that bug bounty programs are pointedly more effective at finding bugs than automated vulnerability scanning could ever be. It's also true that these programs are more effective than penetration testing vendors who deliver services based on the product of automated vulnerability scans. When compared to a research-driven penetration test, however, bug bounty programs pale in comparison.

False Sense of Security Comic

Industry standard penetration testing and the false sense of security.

Our clients often hire us as a part of their process for acquiring other businesses. We've played a quiet role in the background of some of the largest acquisitions to hit the news and some of the smallest that you've never heard of. In general, we're tasked with determining how well secured the networks of the organization to be acquired are prior to the acquisition. This is important because the acquisitions are often focused on sensitive intellectual property like patents, drug formulas, technology, etc. It's also important because in many cases networks are merged after an acquisition, and merging into a vulnerable situation isn't exactly ideal.

Recently we performed one of these tests for a client but post rather than pre-acquisition.  While we can’t (and never would) disclose information that could be used to identify one of our clients, we do share the stories in a redacted and revised format.  In this case our client acquired an organization (we’ll call it ACME) because they needed a physical presence to help grow business in that region of the world.   ACME alleged that their network had been designed with security best practices in mind and provided our client with several penetration testing reports from three well known vendors to substantiate their claims.

After the acquisition of ACME our client was faced with the daunting task of merging components of ACME's network into their own. This is when they decided to bring our team in to deliver a Realistic Threat Penetration Test™ against ACME's network. Just for perspective, Realistic Threat Penetration Testing™ uses a methodology called Real Time Dynamic Testing™ which is derived from our now infamous zeroday vulnerability research and exploit development practices. In simple terms it allows our team to take a deep, research-based approach to penetration testing and provides greater depth than traditional penetration testing methodologies.

When we deliver a Realistic Threat Penetration Test we operate just like the bad guys but in a slightly elevated threat context. Unlike standard penetration testing methodologies, Real Time Dynamic Testing™ can operate entirely devoid of automated vulnerability scanning. This is beneficial from a quality perspective because automated vulnerability scanners produce generally low-quality results. Additionally, automated vulnerability scanners are noisy, increase the overall risk of outages and damage, and generally can't be used in a covert way. When testing in a realistic capacity, being discovered is most certainly disadvantageous. As master Sun Tzu said, "All warfare is based on deception".

When preparing to breach an organization, accurate and actionable intelligence is paramount. Good intelligence can often be collected without sending any packets to the target network (passive reconnaissance). Hosts and IP addresses can be discovered using third-party reconnaissance services or via Google dorks. Services, operating systems, and software versions can be discovered using tools like Shodan and others. Internal information can often be extracted by searching forums and historical breaches, or by pulling metadata out of materials available on the Internet. An example of how effective passive reconnaissance can be is visible in the work we did for Gizmodo related to their story about Crosscheck.

Using passive reconnaissance against ACME we discovered a total of three externally connectable services. One of those services was a VPN endpoint, another was a web service listening on port 80, and the third was the same service listening on port 443. According to passive recon, the services on 80 and 443 were provided by a web-based database management software package. This was obviously an interesting target and something that shouldn't be internet exposed. We used a common web browser to connect to the service and were presented with a basic username and password login form. When we tried the default login credentials for this application (admin/admin), they worked.

At this point you might be asking yourself why we were able to identify this vulnerability when the three previous penetration testing reports made no mention of it.  As it turns out, this vulnerability would have been undetectable using traditional methodologies that depend on automated vulnerability scanning.  This is because the firewall used by ACME was configured to detect and block the IP addresses (for 24 hours) associated with any sort of automated scan.  It was not configured to block normal connection attempts.  Since we did passive reconnaissance, the first packet we sent to the target was the one that established the connection with the database software.   The next series of packets were related to successful authentication.

After using the default credentials to authenticate to the management application, we began exploring the product.  We realized that we had full control over a variety of databases that varied from non-sensitive to highly sensitive.  These included customer databases, password management, internal chat information, an email archive, and much more.  We couldn’t find any known vulnerabilities for the management software, but it didn’t seem particularly well written from a security perspective.   In short time we found a vulnerability in an upload function and used that to upload a back door to the system.  When we connected to the backdoor, we found that it was running with SYSTEM privileges.  What’s even more shocking is that we quickly realized we were on a Domain Controller.  Just to be clear, the Internet connectable database management software that was accessible using default credentials was running on a domain controller.

The next step was for us to determine what the impact of our breach was.  Before we did that though we exfiltrated the password database from the domain controller for cracking.  Then we created a domain admin account called “Netragard” in an attempt to get caught.  While we were waiting to get caught by the networking team we proceeded with internal reconnaissance.   We quickly realized that we were dealing with a flat network and that not everything on the network was domain connected.  So, while our compromise of the domain controller was serious it would not provide us with total control.  To accomplish that we needed to compromise other assets.

Unfortunately for ACME this proved to be far too easy of a task.  While exploring file shares we found a folder aptly named “Network Passwords”.   Sure enough, contained within that folder was an excel spreadsheet containing the credentials for all the other important assets on the network.  Using these credentials we were able to rapidly escalate our access and take full control of ACME’s infrastructure including but not limited to its firewall, switches, financial systems, and more.

Here are a few important takeaways from this engagement:

  • The penetration testing methodology matters. Methodologies that depend on automated scanners, even when those scanners are whitelisted, will miss vulnerabilities that attackers using a hands-on, research-based approach will find.
  • Default configurations should always be changed as a matter of policy to avoid easy compromise.
  • Use two-factor authentication for all internet connectable services.
  • Do not expose sensitive administrative applications to the internet. Instead, configure a VPN with two-factor authentication and use that to access sensitive applications.
  • Domain controllers should be treated like Domain controllers and not like web application servers.
  • Domain controllers should not be Internet connectable or offer internet connectable services.
  • Do not store passwords in documents even if they are encrypted (we can crack them).
  • Always doubt your security posture and never allow yourself to feel safe. The moment you feel safe is the moment that you’ve adopted a false sense of security.

The reality behind hospital and medical device security.

We recently presented at the DeviceTalks conference in Boston, MA about the vulnerabilities that affect hospitals and medical devices (insulin pumps, pacemakers, etc.). The goal of our presentation wasn't to instill fear, but sometimes fear is a reasonable byproduct of the truth. The truth is that of all the networks that we test, hospital networks are by far the easiest to breach. Even more frightening is that the medical devices contained within hospital networks are as vulnerable as, if not more vulnerable than, the networks they are connected to. It seems that the healthcare industry has spent so much time focusing on safety that it has all but lost sight of security.

The culprit behind this insecurity is mostly convenience.  Hospitals are generally run by healthcare experts with a limited understanding of Information Technology and an even more limited understanding of IT Security. It would be unreasonable to expect healthcare experts to also be IT security experts given the vast differences between both fields. When healthcare experts hire IT experts and IT Security experts they do it to support the needs of the hospital.  Those needs are defined by the doctors, nurses, and other medical professionals tasked with running the hospital.  Anything that introduces new complexity or significant changes will be slow to adopt or perhaps not adopted at all.  Unfortunately, good security is the antithesis of convenience and so good security often falls by the wayside despite best efforts by IT and security personnel.

Unfortunately, in many respects the IT security industry is making the situation worse with false advertising. If antivirus solutions worked as well as they are advertised, then malware would be a thing of the past. If Intrusion Prevention Solutions worked as well as advertised, then intrusions would be a thing of the past. This misrepresentation of the capabilities provided by security solutions produces a false sense of security. We aren't suggesting that these solutions are useless, but we are encouraging organizations to carefully test the performance and effectiveness of these solutions rather than simply trusting the word of the vendor.

After we breach a network there exists a 30-minute window of susceptibility to ejection from the network. Most malicious hackers have a similar or larger window of susceptibility.  If a breach is responded to within that window, then we will likely lose access to the network and be back to square one (successful damage prevention by the client).   If we are not detected before that window expires, then the chance of successful ejection from the network is close to zero.  Astonishingly, the average length of time it takes for most organizations to identify a breach is 191 days.  Rather than focusing on breach prevention (which is an impossibility) organizations should be focusing on breach detection and effective incident response (which is entirely attainable).  An effective incident response will prevent damage.

Within about 40 minutes of breaching a hospital network our team takes inventory.  This process involves identifying systems that are network connected and placing them into one of two categories.  Those are the medical device category and the IT systems category.  Contained within the IT systems category are things like domain controllers, switches, routers, firewalls and desktops.  Contained within the medical device category are things like imaging systems, computers used to program pacemakers, insulin pumps etc.  On average the systems in the medical device category run antiquated software and are easier to take control of than the IT devices.  This is where security and safety intersect and become synonymous.

These medical device vulnerabilities afford attackers the ability to alter the operation of life-critical systems.  More candidly, computer attackers can kill patients that depend on medical devices.  The reality of medical device vulnerability is nothing new and it doesn’t seem to be getting any better. This is clearly evidenced by the ever-increasing number of medical device recalls triggered by discovered cybersecurity vulnerabilities. These vulnerabilities exist because the security of the software being deployed on medical devices is not sufficiently robust to safeguard the lives of the patients that rely on them.

More frightening is that attackers don’t need to breach hospital networks to attack medical devices.  They can attack devices such as implants from afar using a laptop and a wireless antenna.  This was first demonstrated in 2011 by security researcher Barnaby Jack.  He proved the ability to wirelessly attack an insulin pump from a distance of 90 meters, causing it to repeatedly deliver its maximum dose of 25 units until its full reservoir of 300 units was depleted.  In simple terms, Barnaby demonstrated how easily an attacker could kill someone with a keyboard and make it look like a malfunction.  He later did the same with a pacemaker, causing it to deliver a lethal 830-volt shock to its wearer.  Similar attacks are still viable today and affect a wide variety of life-supporting devices.

To solve this problem two things need to happen.  The first is that medical device manufacturers need to begin taking responsibility for the security of their devices.  They need to recognize that security is in many cases a fundamental requirement of safety, and they need to take a proactive approach to security rather than a reactive one.  In our experience medical device manufacturers are unfriendly when interfacing with vulnerability researchers.  They might want to reconsider that stance; offering bug bounties would be a step in the right direction.

Hospitals need to make significant changes too.  They need to put security above convenience wherever it has the potential to impact patient safety.  This might mean installing good password managers and enforcing strong passwords with two-factor authentication, increasing security budgets, or paying for good security training programs.  Most hospitals are patient-safety focused but fail to recognize that IT security and patient safety are now synonymous.
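As a concrete illustration of the "enforcing strong passwords" point, here is a minimal password-policy check of the kind a hospital IT department might deploy. The specific rules (12-character minimum, four character classes) are assumptions for the sketch, not a recommendation from any standard.

```python
import re

def is_strong(password: str, min_length: int = 12) -> bool:
    """True if the password meets a basic length + character-class policy."""
    return (
        len(password) >= min_length
        and re.search(r"[a-z]", password) is not None      # lowercase letter
        and re.search(r"[A-Z]", password) is not None      # uppercase letter
        and re.search(r"[0-9]", password) is not None      # digit
        and re.search(r"[^a-zA-Z0-9]", password) is not None  # symbol
    )
```

A check like this is only one layer; pairing it with a password manager and two-factor authentication is what actually moves the needle.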

Gizmodo interview with Netragard – "Snake Oil Salesmen Plague the Security Industry, But Not Everyone Is Staying Quiet"
Adriel Desautels was suddenly in a serious mess, and it was entirely his fault.
Sitting in his college dorm room back in the mid-1990s, Desautels let his curiosity run rampant. He had a hunch that his school’s network was woefully insecure, so he took it upon himself to test it and find out.
“My thoughts at the time were, ‘Hey, it’s university. I’m here to learn. How much harm can there really be in doing it?’” Desautels says in a recent phone call, the hint of a tremor in his voice.
It wasn’t long before he found himself in a dull faculty conference room, university officials hammering him with questions as a pair of ominous-looking men—Desautels says he still doesn’t know who they were, but it’s hard not to assume they had badges in their pockets—stood quietly listening on the sidelines.
Penetrating the school’s network proved simple, he says, and thanks to Desautels’ affable arrogance, talking his way out of trouble was easier still. Forensically speaking, he argued to the school officials, there was no way to prove he did it. It could’ve just as easily been another student, at another computer, in a dorm room that wasn’t his. And he was right; they couldn’t prove shit, Desautels recalls. One of the mystery men smiled knowingly.
Read the full article here

Retro: FACEBOOK – Anti-Social Networking (2008).

This is a retro post about a penetration test that we delivered to a client back in 2008.  During the test we leveraged personal data found on Facebook to construct and execute a surgical attack against an energy company (critical infrastructure).  The attack was a big success and enabled our team to take full control of the client’s network, domain and their critical systems.


Given the recent press about Facebook and its respective privacy issues we thought it would be good to also shed light on the risks that its users create for the companies and/or agencies that they work for.  It is important to stress that the problem isn’t Facebook but instead is the way that people use and trust the platform.  People have what could be described as an unreasonable expectation of privacy when it relates to social media and that expectation directly increases risk.  We hope that this article will help to raise awareness about the very real business risks surrounding this issue.
Full Writeup (Text extract from PDF): June 2008
FACEBOOK Anti-Social Networking:
“It is good to strike the serpent’s head with your enemy’s hand.”


The Facebook Coworker search tool can be abused by skilled attackers in sophisticated attempts to compromise personal information and authentication credentials from your company's employees. Josh Valentine and Kevin Finisterre of penetration testing company Netragard, Inc., operating under the aliases Peter Hunter and Chris Duncan, were tasked with conducting a penetration test against a large utility company. Having exhausted most conventional exploitation methods, they decided to take a non-conventional approach to cracking the company's networks. In this case they decided that a targeted attack against the company's Facebook population would be the most fruitful investment of time.

Since Facebook usage requires that you actually sign up, Josh and Kevin had to research believable backgrounds for their alter egos, Peter and Chris. The target company had a fairly large presence in the US, with four offices located in various places. Due to the size of the company it was easy to cherry-pick bits and pieces of information from the hundreds of available profiles, and because many profiles can be browsed without any prior approval, gathering basic information was easy.

Armed with new identities based on the details and demographics of the company's Facebook population, it was time to make some new friends. After searching through the entries in the Coworker search tool they began selectively attempting to befriend people. In some cases the attempts were completely random and in others they tried to look for 'friendly' people. The logic was that once Peter and Chris had a few friends on their lists they could just send out mass requests for more new friends. With at least four or five friends under their belt, the chances of having overlapping friends would increase.

“by the way… thanks for the hookup on the job. I really appreciate it man.”

Appearing as if they were 'friends of friends' made convincing people to accept the requests much easier. Facebook features such as the 'Discover People You May Know' sidebar added the benefit of making people think they knew Peter and Chris. Blending in with legitimate accounts meant that the two fake accounts needed to seem like real people as much as possible. Josh and Kevin first came up with basic identities that were just enough to get a few friends. If they wanted to continue snaring new friends without raising suspicion among existing friends, they would need to be fairly active with the accounts. Things got elaborate at this point, so Josh and Kevin combed the internet for random images to use as inspiration for character backgrounds. Having previously decided on their desired image and demographic, they settled on a set of pictures to represent themselves. They came up with a few photos from the surrounding area and even made up a fake sister for Chris. All of this helped solidify, in the eyes of any prospective friends, that they were real people. Eventually enough people had accepted the requests that Facebook began suggesting Chris and Peter as friends to many of the other employees of the target company.
Batch requests are the way to go

Cherry-picking individual friends was the way to get a good profile started, but Josh and Kevin were really after as many of the employees as possible, so a more bulk approach was needed. Once they were comfortable that their profiles looked real enough, the mass targeting of company employees began. Simply searching the company's Facebook network yielded 492 possible employee profiles. As more people became their friends, the internal company structure became more familiar, which allowed the pair to make more educated queries for company employees. Due to the specific nature of the company's industry it was easy to search for specific job titles. Anyone could make a query in a particular city for a title like "Landman" or "Geologist" and target employees with a reasonable level of accuracy.
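The title-based targeting step can be sketched as a simple filter over harvested profile data. Everything below is fabricated for illustration; no real profile data or Facebook API is involved.

```python
# Sketch of targeting harvested profiles by city and job title.
# The profile records and names here are placeholders.
def target_profiles(profiles, city, titles):
    """Return profiles matching the given city and any of the job titles."""
    wanted = {t.lower() for t in titles}
    return [
        p for p in profiles
        if p["city"].lower() == city.lower() and p["title"].lower() in wanted
    ]

profiles = [
    {"name": "A. Example", "city": "Houston", "title": "Landman"},
    {"name": "B. Example", "city": "Houston", "title": "Accountant"},
    {"name": "C. Example", "city": "Tulsa", "title": "Geologist"},
]
hits = target_profiles(profiles, "Houston", ["Landman", "Geologist"])
```

The point is not the code but the targeting logic: industry-specific job titles narrow hundreds of search results down to a short, high-value list.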
By the time the Chris Duncan account was closed there were 208 confirmed company employees as friends. Of the total number of accounts collected, only 2 or 3 were non-employees or former employees. The company culture allowed for a swift embrace of the two fictitious individuals; they just seemed to fit in. Given enough time it is reasonable to expect that many more accounts could have been collected at the same level of accuracy.
Facebook did put some measures in place to stop people from harvesting information. For the first 50 or so friend requests sent, Facebook required a response to a CAPTCHA. Eventually Facebook was satisfied that the team was not a pair of bots and allowed requests to occur unfettered. The team did run into what appeared to be both a per-hour and a per-day limit on the number of requests that could be sent, but there was a sweet spot, and the team was able to maintain a steady flow of requests.
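The "sweet spot" pacing implied above amounts to staying under both caps at once. A minimal sketch follows; the limits (15 per hour, 100 per day) are guesses for illustration, not Facebook's actual thresholds then or now.

```python
# Sketch: pace outgoing requests under assumed per-hour and per-day caps.
class RequestPacer:
    def __init__(self, per_hour=15, per_day=100):
        self.per_hour = per_hour
        self.per_day = per_day
        self.sent = []  # timestamps of sent requests, in seconds

    def can_send(self, now: float) -> bool:
        """True if sending now would stay under both rolling-window caps."""
        in_last_hour = sum(1 for t in self.sent if now - t < 3600)
        in_last_day = sum(1 for t in self.sent if now - t < 86400)
        return in_last_hour < self.per_hour and in_last_day < self.per_day

    def record(self, now: float) -> None:
        self.sent.append(now)
```

From the defender's side, the same logic read in reverse is a detection opportunity: a brand-new account sending requests at a metronomic rate just under the cap is itself a signal.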

“Hi Chris, are you collecting REDACTED People? :)”

The diverse geography of the company and its embrace of internet technologies made the ruse seem natural. In several cases employees approached the team suspecting suspicious behavior, but they were quickly appeased with a few kind words and emoticons. The hometown appeal of the duo's profiles seemed to help people drop their guard and usual inhibitions. With the personal details of several company employees at their fingertips, it was time to sit back and reap the benefits. Once the pair had a significant employee base, intra-company relationships were outlined and common company culture was revealed. As an example, several employees noted to Chris and Peter that they could not find either individual in the "REDACTED employee directory". Small tidbits of information like this helped Kevin and Josh carefully craft other information that was later fed to the people they were interacting with. With a constant flow of batch requests going out, there was an equally constant flow of new friends to case for information.
Over a seven day period of data collection there were as few as 8 newly accepted friends or as many as 63.
Days with more than 20 or so requests were not at all unusual for us.
Even after our testing was concluded the profiles continued to get new friend requests from REDACTED.

May 26 – 11
May 25 – 9
May 24 – 8
May 23 – 15
May 22 – 26
May 21 – 63
May 20 – 40
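The daily counts above can be summed as a quick sanity check (treating the second "May 25" entry as May 24, since the list runs one day at a time):

```python
# Daily accepted friend requests, May 26 down to May 20 (from the list above).
daily_accepts = [11, 9, 8, 15, 26, 63, 40]

total = sum(daily_accepts)                          # new friends over the week
low, high = min(daily_accepts), max(daily_accepts)  # the "as few as" / "as many as" figures
```

The minimum and maximum match the 8 and 63 quoted in the text, with 172 acceptances in total over the seven days shown.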

Every bit of information gleaned was considered when choosing the ultimate attack strategy. The general reactions from people also helped the team gauge what sort of approach to take when crafting the technique for the coup de grâce. Josh and Kevin had to go with something that was both believable and lethal at the same time. Having cased several individuals and machines on the company network it was time to actually attack those lucky new friends.

“ALL WARFARE IS BASED ON DECEPTION Hence, when able to attack, we must seem unable; when using our forces we must seem inactive; when we are near, we must make the enemy believe we are far away…”

Having spent several days examining all possible means of conventional exploitation, Kevin and Josh were ready to move on and begin taking advantage of everything they had learned about the energy company's network.

“Forage on the enemy, use the conquered foe to augment one’s own strength”

During their initial probes into the company's networks the duo came across a poorly configured server that provided a web-based interface to one of the company's services. Having reverse engineered the operation of the server and compromised the back-end database behind the page, they were able to manipulate the content of the website in a manner that would later allow for theft of company credentials. During information gathering it was common for employees to imply that they had access to some sort of company portal through which they could obtain information and perhaps access to various parts of the company.

“Supreme excellence consists in breaking the enemy’s resistance without fighting”

The final stages of the penetration test happened to fall on a holiday weekend. The entire staff was given the Friday before the holiday off as well as the following Monday. Luckily for the team, this provided an ideal window of opportunity during which the help desk would be left undermanned. A well-orchestrated attack that appeared to come from the help desk would be difficult to ward off and realistically unstoppable if delivered during this timeframe.

“In all fighting the direct method may be used for joining battles, but indirect methods will be needed in order to secure victory”

Several hundred phishing emails were sent out to the unsuspecting Facebook friends; the mailer was perfectly modeled on an internal company site. The mailer implied that the user's password may have been compromised and that they should attempt to log in and verify their settings. In addition to the mailer, the statuses of the two profiles were changed to include an enticing link to the phishing site. Initially 12 employees were fooled by the phishing mailer. Then, due to a SNAFU at the anti-spam company Postini, another fifty-odd employees were compromised: an engineer at Postini felt that the mailer looked important and decided to remove the messages from the blocked queue. Access to the resulting passwords allowed for a full compromise of the client's infrastructure, including the mainframe, various financial applications, in-house databases and critical control systems.
Clever timing and a crafty phishing email were just as effective, if not more effective, than the initial hacking methods that were applied. Social engineering threats are real: educate your users and help make them aware of efforts to harvest your company's information. Ensure that a company policy is established to help curb employee usage of social networking sites. Management should also consider searching popular sites for employees who are giving out information about themselves and their employer too freely. Be vigilant; don't be another phishing statistic.

Netragard Protects Voters

We protect voters from people like us

Dear Kris Kobach,
We recently read an article published by Gizmodo about the security of the network that will be hosting Cross Check.  In that article we noticed that you said "They didn't succeed in hacking it," referring to the Arkansas state network.  First, to address your point: no, we did not succeed in hacking the network, because we didn't try.  We didn't try because hacking the network without contractual permission would be illegal, and we really don't want to do anything illegal.  Our goal here at Netragard is to protect people, their data, and their privacy through the delivery of Realistic Threat Penetration Testing services.
We would like to offer you our Realistic Threat Penetration Testing services one time free of charge as a way to help protect the privacy of the American people. In exchange for this we would like a public statement about your collaboration with Netragard to help improve your security.
Netragard, Inc.

Hackers - Vulnerability Disclosures

What hackers know about vulnerability disclosures and what this means to you

Before we begin, let us preface this by saying that this is not an opinion piece.  This article is the product of our own experience combined with breach-related data from various sources collected over the past decade.  While we too like the idea of detailed vulnerability disclosure from a “feel good” perspective, the reality of it is anything but good.  Evidence suggests that the only form of responsible disclosure is one that results in the silent fixing of critical vulnerabilities.  Anything else arms the enemy.

Want to know the damage a single exposed vulnerability can cause? Just look at what’s come out of MS17-010. This is a vulnerability in Microsoft Windows that is the basis for many of the current cyberattacks that have hit the news like WannaCry, Petya, and NotPetya.
However, it didn’t become a problem until the vulnerability was exposed to the public. Our intelligence agencies knew about the vulnerability, kept it secret, and covertly exploited it with a tool called EternalBlue. Only after that tool was leaked and the vulnerability it exploited was revealed to the public did it become a problem. In fact, the first attacks happened 59 days after March 14th, which was when Microsoft published the patch fixing the MS17-010 vulnerability. 100% of the WannaCry, Petya and NotPetya infections occurred nearly two months or more after a patch was available.
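The 59-day figure is easy to verify: WannaCry's outbreak began on May 12, 2017, and the MS17-010 patch shipped on March 14, 2017.

```python
from datetime import date

patch_released = date(2017, 3, 14)     # Microsoft publishes the MS17-010 fix
wannacry_outbreak = date(2017, 5, 12)  # first large-scale WannaCry attacks

# Days the public had a patch available before the first attacks hit.
window = (wannacry_outbreak - patch_released).days
```

Every one of those 59 days was an opportunity to patch before any exploit existed in the wild.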
Why? The key word in the opening paragraph is not vulnerability. It’s exposed. Many security experts and members of the public believe that exposing vulnerabilities to the public is the best way to fix a problem. However, it is not. It’s actually one of the best ways to put the public at risk.
Here’s an analogy that can help the reader understand the dangers of exposing security vulnerabilities. Let’s say everyone on earth has decided to wear some kind of body armor sold by a particular vendor. The armor is touted as an impenetrable barrier against all weapons. People feel safe while wearing the armor.
Let’s say a very smart person has discovered a vulnerability that allows the impenetrable defense to be subverted completely, rendering the armor useless. Our very smart individual has a choice to make. What do they do?

Choice One: Sell it to intelligence agencies or law enforcement

Intelligence agencies and law enforcement are normally extremely judicious about using any sort of zero-day exploit.  Because zero-day exploits target unknown vulnerabilities using unknown methods they are covert by nature. If an intelligence agency stupidly started exploiting computers left and right with their zero-day knowledge, they’d lose their covert advantage and their mission would be compromised. It is for this reason that the argument of using zero-day exploits for mass compromise at the hands of intelligence or law enforcement agencies is nonsensical. This argument is often perpetuated by people who have no understanding of or experience in the zero-day industry.
For many hackers this is the best and most ethical option. Selling to the “good guys” also pays very well. The use cases for sold exploits includes things like combating child pornography and terrorism. Despite this, public perception of the zero-day exploit market is quite negative. The truth is that if agencies are targeting you with zero-day exploits then they think that you’ve done something sufficiently bad to be worth the spend.

Choice Two: Sit on it

Our very smart individual could just forget they found the problem. This is security through obscurity. It’s quite hard for others to find vulnerabilities when they have no knowledge of them. This is the principle that intelligence agencies use to protect their own hacking methods. They simply don’t acknowledge that they exist. The fewer people that know about it, the lower the risk to the public. Additionally it is highly unlikely that low-skilled hackers (which make up the majority) would be able to build their own zero-day exploit anyway. Few hackers are truly fluent in vulnerability research and quality exploit development.
Some think that this is an irresponsible act. They think that vulnerabilities must be exposed because then they can be fixed and to fail to do so puts everyone at increased risk. This thinking is unfortunately flawed and the opposite is true. Today’s reports show that over 99% of all breaches are attributable to the exploitation of known vulnerabilities for which patches already exist. This percentage has been consistent for nearly a decade.

Choice Three: Vendor notification and silent patching

Responsible disclosure means that you tell the vendor what you found and, if possible, help them find a way to fix it. It also means that you don’t publicize what you found which helps to prevent arming the bad guys with your knowledge. The vendor can then take that information, create and push a silent patch. No one is the wiser other than the vendor and our very smart individual.
Unfortunately, there have been cases where vendors have pursued legal action against security researchers who come to them with vulnerabilities. Organizations like the Electronic Frontier Foundation have published guides to help researchers disclose responsibly, but there are still legal issues that could arise.
This fear of legal action can also prompt security researchers to disclose vulnerabilities publicly under the theory that if they receive retaliation it will be bad PR for the company. While this helps protect the researcher it also leads to the same problems we discussed before.

Choice Four: Vendor notification and publishing after patch release

Some researchers try to strike a compromise with vendors by saying they won’t publicly release the information they discovered until a patch is available. But given the slow speed of patching (or complete lack of patching) all vulnerable systems, this is still highly irresponsible. Not every system can or will be patched as soon as a patch is released (as was the case with MS17-010). Patches can cause downtime, bring down critical systems, or cause other pieces of software to stop functioning.
Critical infrastructure and large companies cannot afford an interruption. This is one reason why major companies can take so long to apply patches for vulnerabilities that were published long ago.

Choice Five: Exploit the vulnerability on their own for fun and profit.

The media would have you believe that every discoverer of a zero-day vulnerability is a malicious hacker bent on infecting the world. And true, it is theoretically possible that a malicious hacker can find and exploit a zero-day vulnerability. However, most malicious hackers are not subtle about their use of any exploit. They are financially motivated and generally focused on wide-scale, high-volume infection or compromise. They know that once they exploit a vulnerability in the wild it will get discovered and a patch will be released. Thus, they go for short-term gain and hope they don’t get caught.

Choice Six: Expose it to the public

This is a common practice and it is the most damaging from a public risk perspective. The thinking goes that if the public is notified then vendors will be pressured to act fast and fix the problem. The assumption is also that the public will act quickly to patch before a hacker can exploit their systems. While this thinking seems rational it is and has always been entirely wrong.
In 2015 the Verizon Data Breach Investigation Report showed that half of the vulnerabilities that were disclosed in 2014 were being actively exploited within one month of disclosure. The trend of rapid exploitation of published vulnerabilities hasn’t changed. In 2017 the number of breaches is up 29 percent from 2016 according to the Identity Theft Resource Center. A large portion of the breaches in 2017 are attributable to public disclosure and a failure to patch.
So what is the motivator behind public disclosure? There are three primary motivators.  The first is that the revealer believes that disclosure of vulnerability data is an effective method for combating risk and exposure. The second is that the revealer feels the need to defend or protect themselves from the vulnerable vendor.  The third is that the revealer wants their ego stroked. Unfortunately, there is no way to tell the public without also telling every bad guy out there how to subvert the armor. It is much easier to build a new weapon from a vulnerability and use it than it is to create a solution and enforce its implementation.
Exposing vulnerability details to the public when the public is still vulnerable is the height of irresponsible disclosure.  It may feel good and be done with good intention but the end result is always increased public risk (the numbers don’t lie).
It is almost certainly the case that if EternalBlue had never been leaked by the ShadowBrokers then WannaCry, Petya and NotPetya would never have come into existence, and this is just one of many such examples. Malicious hackers know that businesses don’t patch their vulnerabilities properly or in a timely manner.  They know that they don’t need zero-day exploits to breach networks and steal your data.  The only thing they need is public vulnerability disclosure and a viable target to exploit.  The defense is logically simple but can be challenging to implement for some: patch your systems.