The dark side of bug bounties

Bug bounty companies (whose offerings are often described as crowd-sourced penetration tests) are all the rage.  The primary argument for using their services is that they provide access to a large crowd of testers, which purportedly means that customers will always have a fresh set of eyes looking for bugs.  They also argue that traditional penetration testing teams are finite and, as a result, tend to go stale in terms of creativity, depth, and coverage.  While these arguments seem to make sense at face value, are they accurate?


The first thing to understand is that the quality of any penetration test isn’t determined by the volume of potential testers, but instead by their experience, talent, and overall capabilities.  A large group of testers with average talent will never outperform a small group of highly talented testers in terms of depth and quality.  A great parallel is when the world’s largest orchestra played the Ninth Symphonies of Dvořák and Beethoven.  While that orchestra was made up of 7,500 members, the quality of its performance was nothing compared to that produced by the Boston Symphony Orchestra (which is made up of 91 musicians).

Interestingly, it appears that bug hunters are incentivized to spend as little time as possible per bounty.  This is because bug hunters need to maintain a profitable hourly rate while working, or their work won’t be worth their time.  For example, a bug hunter might spend 15 minutes to find a bug and collect a $4,000.00 bounty, which is an effective rate of $16,000.00 per hour!  In other cases, a bug hunter might spend 40 hours to find a bug and collect a $500.00 bounty, which is a measly $12.50 per hour in comparison.  Even worse, they might spend copious time finding a complex bug only to learn that it is a duplicate and collect no bounty (wasted time).
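To make the incentive math concrete, here is a minimal sketch (the figures are the hypothetical examples from the paragraph above) that computes the effective hourly rate a bug hunter earns on a single finding:

```python
# Effective hourly rate for a single bounty submission.
# Duplicate submissions typically pay nothing, so the time spent is lost.

def effective_hourly_rate(bounty_usd: float, hours_spent: float, duplicate: bool = False) -> float:
    if duplicate or hours_spent <= 0:
        return 0.0
    return bounty_usd / hours_spent

print(effective_hourly_rate(4000, 0.25))                 # 16000.0 -> $16,000/hour
print(effective_hourly_rate(500, 40))                    # 12.5    -> $12.50/hour
print(effective_hourly_rate(2500, 60, duplicate=True))   # 0.0     -> wasted time
```

Seen this way, the rational strategy for a bug hunter is to chase shallow, quick-to-find bugs and move on, which is exactly the pattern described below.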

This argument is further supported when we appraise the quality of bugs disclosed by most bug bounty programs.  We find that most of the bugs are rudimentary in terms of ease of discovery, general complexity, and exploitability.  The bugs regularly include cross-site scripting vulnerabilities, SQL injection vulnerabilities, easily spotted configuration mistakes, and other common problems.  On average they appear to be somewhat more complex than what might be discovered using industry standard automated vulnerability scanners and less complex than what we’ve seen exploited in historical breaches.  To be clear, this doesn’t suggest that all bug hunters are low-talent individuals, but, instead, that they are not incentivized to go deep.

In contrast to bug bounty programs, genuine penetration testing firms are incentivized to bolster their brand by delivering depth, quality, and maximal coverage to their customers.  Most operate under a fixed cost agreement and are not rewarded based on volume of findings, but instead by the repeat business that is earned through the delivery of high-quality services.  They also provide substantially more technical and legal safety to their customers than bug bounty programs do.

For example, we evaluated the terms and conditions for several bug bounty companies, and what we learned was surprising.  Unlike traditional penetration testing companies, bug bounty companies do not accept any responsibility for the damages or losses that might result from the use of their services.  They explicitly state that the bug hunters are independent third parties and that any remedy with respect to loss or damages that a customer seeks to obtain is limited to a claim against that bug hunter.  What’s more, the vetting process for bug hunters is lax at best.  In nearly all cases, background checks are not run, and even when they are, a bug hunter could provide a false identity.  Identity validation is equally weak: to sign up for most programs, you simply need to validate an email address.  In simple terms, organizations that use bug bounty programs accept all risk and have no realistic legal recourse, even if a bug hunter acts in a malicious manner.

To put this into context, bug bounty programs effectively provide anyone on the internet with a legitimate excuse to attack your infrastructure.  Since these attacks are expected as part of the bug bounty program, they may impair your ability to differentiate between an actual attack and activity from a legitimate bug hunter.  This creates an ideal opportunity for bona fide malicious actors to hide behind bug bounty programs while working to steal your data.  When you combine this with the fact that it takes most organizations an average of roughly 200 days to detect a breach, the risk becomes even more apparent.

There’s also the issue of GDPR. GDPR increases the value of personal data on the black market and to organizations alike.  Under GDPR, if the personal data of a European citizen is breached, the organization that suffered the breach can face heavy fines, penalties, and more. Article 4 of the GDPR defines a personal data breach as “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to personal data transmitted, stored or otherwise processed”. While bug bounty programs target configurations, systems, and implementations, they do not incentivize bug hunters to go after personal data.  However, because of GDPR, a malicious bug hunter who exploits a vulnerability that discloses personal data (accidentally or not) may be incentivized to ransom their finding for a higher dollar value. Likewise, organizations might be incentivized to pay the ransom and report it as a bounty to avoid having to notify the Data Protection Authorities (“DPA”) as is required by GDPR.

On a positive note, many of our customers use bug bounty programs in tandem with our Realistic Threat Penetration Testing services.  Customers who use bug bounty programs have far fewer vulnerabilities in terms of low-hanging fruit than those who don’t.  In fact, we are confident that bug bounty programs are markedly more effective at finding bugs than automated vulnerability scanning could ever be. It’s also true that these programs are more effective than penetration testing vendors who deliver services based on the product of automated vulnerability scans.  When compared to a research-driven penetration test, however, bug bounty programs pale in comparison.

Industry standard penetration testing and the false sense of security.

Our clients often hire us as a part of their process for acquiring other businesses.  We’ve played a quiet role in the background of some of the largest acquisitions to hit the news and some of the smallest that you’ve never heard of.  In general, we’re tasked with determining how well secured the to-be-acquired organization’s network is prior to the acquisition.  This is important because the acquisitions are often focused on sensitive intellectual property like patents, drug formulas, technology, etc.  It’s also important because in many cases networks are merged after an acquisition, and merging into a vulnerable environment isn’t exactly ideal.

Recently we performed one of these tests for a client, but post- rather than pre-acquisition.  While we can’t (and never would) disclose information that could be used to identify one of our clients, we do share the stories in a redacted and revised format.  In this case our client acquired an organization (we’ll call it ACME) because they needed a physical presence to help grow business in that region of the world.  ACME alleged that its network had been designed with security best practices in mind and provided our client with several penetration testing reports from three well-known vendors to substantiate that claim.

After the acquisition of ACME, our client was faced with the daunting task of merging components of ACME’s network into their own.  This is when they decided to bring our team in to deliver a Realistic Threat Penetration Test™ against ACME’s network.  Just for perspective, Realistic Threat Penetration Testing™ uses a methodology called Real Time Dynamic Testing™, which is derived from our now-infamous zero-day vulnerability research and exploit development practices. In simple terms, it allows our team to take a deep, research-based approach to penetration testing and provides greater depth than traditional penetration testing methodologies.

When we deliver a Realistic Threat Penetration Test, we operate just like the bad guys, but in a slightly elevated threat context. Unlike standard penetration testing methodologies, Real Time Dynamic Testing™ can operate entirely devoid of automated vulnerability scanning.  This is beneficial from a quality perspective because automated vulnerability scanners produce generally low-quality results. Additionally, automated vulnerability scanners are noisy, increase the overall risk of outages and damage, and generally can’t be used in a covert way.  When testing in a realistic capacity, being discovered is most certainly disadvantageous.  As Master Sun Tzu said, “All warfare is based on deception”.

When preparing to breach an organization, accurate and actionable intelligence is paramount.  Good intelligence can often be collected without sending any packets to the target network (passive reconnaissance).  Hosts and IP addresses can be discovered using services like those provided by domaintools.com or via Google dorks.  Services, operating systems, and software versions can be discovered using other tools like censys.io, Shodan, and others.  Internal information can often be extracted by searching forums, historical breaches, or pulling metadata out of materials available on the Internet.  An example of how effective passive reconnaissance can be is visible in the work we did for Gizmodo related to their story about Crosscheck.
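As an illustration of how much can be gathered without ever touching the target, the sketch below pulls hostnames for a domain out of public certificate-transparency logs via crt.sh. The example domain is a placeholder, and this is one passive technique among many, not a description of the exact tooling we used.

```python
# Passive reconnaissance sketch: enumerate hostnames from certificate-
# transparency logs via crt.sh. No packets are sent to the target itself.
import requests

def ct_hostnames(domain: str) -> set[str]:
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    names = set()
    for entry in resp.json():
        # name_value may contain several newline-separated hostnames
        for name in entry.get("name_value", "").splitlines():
            names.add(name.strip().lower())
    return names

if __name__ == "__main__":
    for host in sorted(ct_hostnames("example.com")):   # hypothetical target
        print(host)
```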

Using passive reconnaissance against ACME we discovered a total of three externally connectable services.  One of those services was a VPN endpoint, another was a web service listening on port 80, and the third was the same service listening on port 443.  According to passive recon, the services on 80 and 443 were provided by a web-based database management software package.  This was obviously an interesting target and something that shouldn’t be internet exposed.  We used a common web browser to connect to the service and were presented with a basic username and password login form.  When we tried the default login credentials for this application (admin/admin), they worked.

At this point you might be asking yourself why we were able to identify this vulnerability when the three previous penetration testing reports made no mention of it.  As it turns out, this vulnerability would have been undetectable using traditional methodologies that depend on automated vulnerability scanning.  This is because the firewall used by ACME was configured to detect and block (for 24 hours) the IP addresses associated with any sort of automated scan.  It was not configured to block normal connection attempts.  Since we performed passive reconnaissance, the first packet we sent to the target was the one that established the connection with the database software.  The next series of packets were related to a successful authentication.
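For contrast, here is a minimal sketch of the kind of single, targeted request a hands-on tester might use to verify a suspected default-credential login. The URL, form-field names, and success marker are hypothetical placeholders (the real form would need to be inspected first); the point is that one ordinary-looking POST does not resemble an automated scan and therefore sails past scan-detection rules.

```python
# A single targeted login attempt, indistinguishable from normal user traffic.
# The endpoint, field names, and success marker below are hypothetical.
import requests

TARGET = "https://db-admin.example.com/login"   # hypothetical endpoint
DEFAULT_CREDS = ("admin", "admin")

def try_default_login(url: str, username: str, password: str) -> bool:
    resp = requests.post(
        url,
        data={"username": username, "password": password},
        timeout=15,
    )
    # A real test would look for an application-specific success indicator;
    # checking for a "Logout" link is only a placeholder heuristic.
    return resp.ok and "Logout" in resp.text

if __name__ == "__main__":
    user, pwd = DEFAULT_CREDS
    print("default credentials accepted:", try_default_login(TARGET, user, pwd))
```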

After using the default credentials to authenticate to the management application, we began exploring the product.  We realized that we had full control over a variety of databases that ranged from non-sensitive to highly sensitive.  These included customer databases, password management, internal chat information, an email archive, and much more.  We couldn’t find any known vulnerabilities for the management software, but it didn’t seem particularly well written from a security perspective.  In short order we found a vulnerability in an upload function and used it to upload a backdoor to the system.  When we connected to the backdoor, we found that it was running with SYSTEM privileges.  What’s even more shocking is that we quickly realized we were on a Domain Controller.  Just to be clear, the internet-connectable database management software that was accessible using default credentials was running on a domain controller.

The next step was for us to determine the impact of our breach.  Before we did that, though, we exfiltrated the password database from the domain controller for cracking.  Then we created a domain admin account called “Netragard” in an attempt to get caught.  While we were waiting to get caught by the networking team, we proceeded with internal reconnaissance.  We quickly realized that we were dealing with a flat network and that not everything on the network was domain connected.  So, while our compromise of the domain controller was serious, it would not provide us with total control.  To accomplish that we needed to compromise other assets.

Unfortunately for ACME, this proved to be far too easy a task.  While exploring file shares we found a folder aptly named “Network Passwords”.  Sure enough, contained within that folder was an Excel spreadsheet containing the credentials for all of the other important assets on the network.  Using these credentials, we were able to rapidly escalate our access and take full control of ACME’s infrastructure, including but not limited to its firewall, switches, financial systems, and more.

Here are a few important takeaways from this engagement:

  • The penetration testing methodology matters. Methodologies that depend on automated scanners, even when the scanners are whitelisted, will miss vulnerabilities that attackers using a hands-on, research-based approach will find.
  • Default configurations should always be changed as a matter of policy to avoid easy compromise.
  • Use two-factor authentication for all internet connectable services.
  • Do not expose sensitive administrative applications to the internet. Instead, configure a VPN with two-factor authentication and use that to access sensitive applications.
  • Domain controllers should be treated like domain controllers and not like web application servers.
  • Domain controllers should not be Internet connectable or offer internet connectable services.
  • Do not store passwords in documents even if they are encrypted (we can crack them).
  • Always doubt your security posture and never allow yourself to feel safe. The moment you feel safe is the moment that you’ve adopted a false sense of security.

 

The reality behind hospital and medical device security.

We recently presented at the DeviceTalks conference in Boston, MA about the vulnerabilities that affect hospitals and medical devices (insulin pumps, pacemakers, etc.).  The goal of our presentation wasn’t to instill fear, but sometimes fear is a reasonable byproduct of the truth.  The truth is that of all the networks that we test, hospital networks are by far the easiest to breach.  Even more frightening is that the medical devices contained within hospital networks are equally if not more vulnerable than the networks that they are connected to.  It seems that the healthcare industry has spent so much time focusing on safety that they’ve all but lost sight of security.

The culprit behind this insecurity is mostly convenience.  Hospitals are generally run by healthcare experts with a limited understanding of Information Technology and an even more limited understanding of IT security. It would be unreasonable to expect healthcare experts to also be IT security experts given the vast differences between the fields. When healthcare experts hire IT experts and IT security experts, they do it to support the needs of the hospital.  Those needs are defined by the doctors, nurses, and other medical professionals tasked with running the hospital.  Anything that introduces new complexity or significant change will be adopted slowly, or perhaps not at all.  Unfortunately, good security is the antithesis of convenience, and so good security often falls by the wayside despite the best efforts of IT and security personnel.

Unfortunately, in many respects the IT security industry is making the situation worse with false advertising.  If antivirus solutions worked as well as they are advertised, then malware would be a thing of the past.  If Intrusion Prevention Solutions worked as well as advertised, then intrusions would be a thing of the past.  This misrepresentation of the capabilities provided by security solutions produces a false sense of security.  We aren’t suggesting that these solutions are useless, but we are encouraging organizations to carefully test the performance and effectiveness of these solutions rather than simply trusting the word of the vendor.

After we breach a network, there is roughly a 30-minute window during which we are susceptible to ejection from the network. Most malicious hackers have a similar or larger window of susceptibility.  If a breach is responded to within that window, then we will likely lose access to the network and be back to square one (successful damage prevention by the client).  If we are not detected before that window expires, then the chance of successfully ejecting us from the network is close to zero.  Astonishingly, the average length of time it takes for most organizations to identify a breach is 191 days.  Rather than focusing on breach prevention (which is an impossibility), organizations should be focusing on breach detection and effective incident response (which is entirely attainable).  An effective incident response will prevent damage.

Within about 40 minutes of breaching a hospital network, our team takes inventory.  This process involves identifying systems that are network connected and placing them into one of two categories: the medical device category and the IT systems category.  Contained within the IT systems category are things like domain controllers, switches, routers, firewalls, and desktops.  Contained within the medical device category are things like imaging systems, computers used to program pacemakers, insulin pumps, etc.  On average, the systems in the medical device category run antiquated software and are easier to take control of than the IT devices.  This is where security and safety intersect and become synonymous.

These medical device vulnerabilities afford attackers the ability to alter the operation of life-critical systems.  More candidly, computer attackers can kill patients that depend on medical devices.  The reality of medical device vulnerability is nothing new and it doesn’t seem to be getting any better. This is clearly evidenced by the ever-increasing number of medical device recalls triggered by discovered cybersecurity vulnerabilities. These vulnerabilities exist because the security of the software being deployed on medical devices is not sufficiently robust to safeguard the lives of the patients that rely on them.

More frightening is that attackers don’t need to breach hospital networks to attack medical devices.  They can attack medical devices such as implants from afar using a laptop and a wireless antenna.  This was first demonstrated in 2011 by security researcher Barnaby Jack.  He proved the ability to wirelessly attack an insulin pump from a distance of 90 meters, causing it to repeatedly deliver its maximum dose of 25 units until its full reservoir of 300 units was depleted.  In simple terms, Barnaby demonstrated how easily an attacker could kill someone with a keyboard and make it look like a malfunction.  He also did the same thing with a pacemaker, causing it to deliver a lethal 840-volt shock to its user.  Similar attacks are still viable today and affect a wide variety of life-supporting devices.

To solve this problem, two things need to happen.  The first is that medical device manufacturers need to begin taking responsibility for the security of their devices.  They need to recognize that security is in many cases a fundamental requirement of safety.  They also need to begin taking a proactive approach to security rather than a reactive one.  In our experience, medical device manufacturers are unfriendly when interfacing with vulnerability researchers.  They might want to reconsider, and even offering bug bounties would be a step in the right direction.

Hospitals need to make some significant changes too.  They need to begin putting security above convenience when convenience has the potential to impact patient safety.  This might mean installing good password managers and enforcing strong passwords with two-factor authentication, increasing security budgets, or even paying for good security training programs.  Most hospitals are patient-safety focused but fail to recognize that IT security and patient safety are now synonymous.

Gizmodo interview with Netragard – "Snake Oil Salesmen Plague the Security Industry, But Not Everyone Is Staying Quiet"

https://gizmodo.com/snake-oil-salesmen-plague-the-security-industry-but-no-1822590687
Adriel Desautels was suddenly in a serious mess, and it was entirely his fault.
Sitting in his college dorm room back in the mid-1990s, Desautels let his curiosity run rampant. He had a hunch that his school’s network was woefully insecure, so he took it upon himself to test it and find out.
“My thoughts at the time were, ‘Hey, it’s university. I’m here to learn. How much harm can there really be in doing it?’” Desautels says in a recent phone call, the hint of a tremor in his voice.
It wasn’t long before he found himself in a dull faculty conference room, university officials hammering him with questions as a pair of ominous-looking men—Desautels says he still doesn’t know who they were, but it’s hard not to assume they had badges in their pockets—stood quietly listening on the sidelines.
Penetrating the school’s network proved simple, he says, and thanks to Desautels’ affable arrogance, talking his way out of trouble was easier still. Forensically speaking, he argued to the school officials, there was no way to prove he did it. It could’ve just as easily been another student, at another computer, in a dorm room that wasn’t his. And he was right; they couldn’t prove shit, Desautels recalls. One of the mystery men smiled knowingly.
Read the full article here

Retro: FACEBOOK – Anti-Social Networking (2008).

This is a retro post about a penetration test that we delivered to a client back in 2008.  During the test we leveraged personal data found on Facebook to construct and execute a surgical attack against an energy company (critical infrastructure).  The attack was a big success and enabled our team to take full control of the client’s network, domain and their critical systems.

Click to download:


 
Given the recent press about Facebook and its privacy issues, we thought it would be good to also shed light on the risks that its users create for the companies and/or agencies they work for.  It is important to stress that the problem isn’t Facebook, but rather the way people use and trust the platform.  People have what could be described as an unreasonable expectation of privacy when it comes to social media, and that expectation directly increases risk.  We hope that this article will help to raise awareness about the very real business risks surrounding this issue.
 
Full Writeup (Text extract from PDF): June 2008
FACEBOOK Anti-Social Networking:
“It is good to strike the serpent’s head with your enemy’s hand.”

THE FRIEND OF MY ENEMY IS MY FRIEND. (2008)

The Facebook Coworker search tool can be abused by skilled attackers in sophisticated attempts to compromise personal information and authentication credentials from your company employees. Josh Valentine and Kevin Finisterre of Penetration Testing Company Netragard, Inc., also known as Peter Hunter and Chris Duncan, were tasked with conducting a penetration test against a large utility company. Having exhausted most conventional exploitation methods, they decided to take an unconventional approach to cracking the company’s networks. In this case they decided that a targeted attack against the company’s Facebook population would be the most fruitful investment of time. Since Facebook usage requires that you actually sign up, Josh and Kevin had to research believable backgrounds for their alter egos, Peter and Chris. The target company had a fairly large presence in the US, with four offices located in various places. Due to the size of the company it was easy to cherry-pick bits and pieces of information from the hundreds of available profiles. Because many profiles could be browsed without any prior approval, gathering some basic information was easy. Armed with new identities based on the details and demographics of the company’s Facebook population, it was time to make some new friends. After searching through the entries in the Coworker search tool, they began selectively attempting to befriend people. In some cases the attempts were completely random and in others they tried to look for ‘friendly’ people. The logic was that once Peter and Chris had a few friends on their lists, they could just send out a few mass requests for more new friends. With at least four or five friends under their belt, the chances of having overlapping friends would increase.

“by the way… thanks for the hookup on the job. I really appreciate it man.”

Appearing as if they were ‘friends of friends’ made convincing people to accept the requests much easier. Facebook behavior such as the ‘Discover People You May Know’ sidebar also added the benefit of making people think they knew Peter and Chris. Blending in with legitimate accounts meant that the two fake accounts needed to seem like real people as much as possible. Josh and Kevin first came up with basic identities that were just enough to get a few friends. If they wanted to continue snaring new friends without raising any suspicions with existing friends, they would need to be fairly active with the accounts. Things needed to get elaborate at this point, so Josh and Kevin combed the internet looking for random images as inspiration for character backgrounds. Having previously decided on their desired image and demographic, they settled on a set of pictures to represent themselves. They came up with a few photos from the surrounding area and even made up a fake sister for Chris. All of this helped solidify the fact that they were real people in the eyes of any prospective friends. Eventually enough people had accepted the requests that Facebook began suggesting Chris and Peter as friends to many of the other employees of the target company.
Batch requests are the way to go

Cherry-picking individual friends was obviously the way to get a good profile started, but Josh and Kevin were really after as many of the employees as possible, so a bulk approach was needed. After they were comfortable that their profiles looked real enough, the mass targeting of company employees began. Simply searching the company’s Facebook network yielded 492 possible employee profiles. After a few people became their friends, the internal company structure became more familiar. This allowed the pair to make more educated queries for company employees. Due to the specific nature of the company’s industry, it was easy to search for specific job titles. Anyone could make a query in a particular city, search for a specific job title like “Landman” or “Geologist”, and have a reasonable level of accuracy when targeting employees.
At the time the Chris Duncan account was closed, there were literally 208 confirmed company employees as friends. Out of the total number of accounts that were collected, only 2 or 3 were non-employees or former employees. The company culture allowed for a swift embracing of the two fictitious individuals. They just seemed to fit in. Given enough time it is reasonable to expect that many more accounts would have been collected at the same level of accuracy.
Facebook did put some measures in place to stop people from harvesting information. For the first 50 or so friend requests that were sent, Facebook required a response to a CAPTCHA. Eventually Facebook was satisfied that the team was not a pair of bots and allowed requests to occur in an unfettered manner. The team did run into what appeared to be a per-hour as well as a per-day limit on the number of requests that could be sent. There was a sweet spot, and the team was able to maintain a steady flow of requests.

“Hi Chris, are you collecting REDACTED People? :)”

The diverse geography of the company and its embrace of internet technologies made the ruse seem comfortable. In many cases employees approached the team suspecting suspicious behavior, but they were quickly appeased with a few kind words and emoticons. The hometown appeal of the duo’s profiles seemed to help people drop their guard and usual inhibitions. With access to the personal details of several company employees at their fingertips, it was now time to sit back and reap the benefits. Once the pair had a significant employee base, intra-company relationships were outlined and common company culture was revealed. As an example, several employees noted and pointed out to Chris and Peter that they could not find either individual in the “REDACTED employee directory”. Small tidbits of information like this helped Kevin and Josh carefully craft other information that was later fed to the people they were interacting with. With a constant flow of batch requests going, there was a consistent flow of new friends to case for information.
Over a seven-day period of data collection there were as few as 8 newly accepted friends in a day and as many as 63.
Days with more than 20 or so requests were not at all unusual for us.
Even after our testing was concluded, the profiles continued to get new friend requests from REDACTED.

May 26 – 11
May 25 – 9
May 24 – 8
May 23 – 15
May 22 – 26
May 21 – 63
May 20 – 40

Every bit of information gleaned was considered when choosing the ultimate attack strategy. The general reactions from people also helped the team gauge what sort of approach to take when crafting the technique for the coup de grâce. Josh and Kevin had to go with something that was both believable and lethal at the same time. Having cased several individuals and machines on the company network it was time to actually attack those lucky new friends.

“ALL WARFARE IS BASED ON DECEPTION Hence, when able to attack, we must seem unable; when using our forces we must seem inactive; when we are near, we must make the enemy believe we are far away…”

Having spent several days prior examining all possible means of conventional exploitation, Kevin and Josh were ready to move on and actually begin taking advantage of all the things they had learned about the energy company’s network.

“Forage on the enemy, use the conquered foe to augment one’s own strength”

During their initial probes into the company’s networks, the duo came across a poorly configured server that provided a web-based interface to one of the company’s services. Having reverse engineered the operation of the server and subsequently compromised the back-end database that made the page run, they were able to manipulate the content of the website in a manner that would allow for the theft of company credentials in the near future. During information gathering it was common for employees to imply that they had access to some sort of company portal through which they could obtain information and perhaps access to various parts of the company.

“Supreme excellence consists in breaking the enemy’s resistance without fighting”

The final stages of the penetration test happened to fall on a holiday weekend. The entire staff was given the Friday before the holiday off, as well as the following Monday. Lucky for the team, this provided an ideal window of opportunity during which the help desk would be left undermanned. A well-orchestrated attack that appeared to be from the help desk would be difficult to ward off and realistically unstoppable if delivered during this timeframe.

“In all fighting the direct method may be used for joining battles, but indirect methods will be needed in order to secure victory”

Several hundred phishing emails were sent out to the unsuspecting Facebook friends; the mailer was perfectly modeled on an internal company site. The mailer implied that the user’s password may have been compromised and that they should log in and verify their settings. In addition to the mailer, the statuses of the two profiles were changed to include an enticing link to the phishing site. Initially, 12 employees were fooled by the phishing mailer. Due to a SNAFU at the anti-spam company Postini, another 50-some-odd employees were compromised. An engineer at Postini felt that the mailer looked important and decided to remove the messages from the blocked queue. Access to the various passwords allowed for a full compromise of the client’s infrastructure, including the mainframe, various financial applications, in-house databases, and critical control systems.
Clever timing and a crafty phishing email were just as effective, if not more effective, than the initial hacking methods that were applied. Social engineering threats are real; educate your users and help make them aware of efforts to harvest your company’s information. Ensure that a company policy is established to help curb employee usage of social networking sites. Management staff should also consider searching popular sites for employees who are too frivolously giving out information about themselves and the company they work for. Be vigilant; don’t be another phishing statistic.

We protect voters from people like us.

Dear Kris Kobach,
We recently read an article published by Gizmodo about the security of the network that will be hosting Crosscheck.  In that article we noticed that you said “They didn’t succeed in hacking it,” referring to the Arkansas state network.  First, to address your point: no, we did not succeed in hacking the network, because we didn’t try.  We didn’t try because hacking the network without contractual permission would be illegal, and we really don’t want to do anything illegal.  Our goal here at Netragard is to protect people, their data, and their privacy through the delivery of Realistic Threat Penetration Testing services.
We would like to offer you our Realistic Threat Penetration Testing services one time free of charge as a way to help protect the privacy of the American people. In exchange for this we would like a public statement about your collaboration with Netragard to help improve your security.
 
Sincerely,
Netragard, Inc.
 
 
 

What hackers know about vulnerability disclosures and what this means to you

Before we begin, let us preface this by saying that this is not an opinion piece.  This article is the product of our own experience combined with breach-related data from various sources collected over the past decade.  While we, too, like the idea of detailed vulnerability disclosure from a “feel good” perspective, the reality of it is anything but good.  Evidence suggests that the only form of responsible disclosure is one that results in the silent fixing of critical vulnerabilities.  Anything else arms the enemy.

Want to know the damage a single exposed vulnerability can cause? Just look at what’s come out of MS17-010. This is a vulnerability in Microsoft Windows that is the basis for many of the current cyberattacks that have hit the news, like WannaCry, Petya, and NotPetya.
However, it didn’t become a problem until the vulnerability was exposed to the public. Our intelligence agencies knew about the vulnerability, kept it a secret, and covertly exploited it with a tool called EternalBlue. Only after that tool was leaked and the vulnerability it exploited was revealed to the public did it become a problem. In fact, the first attacks happened 59 days after March 14th, which was when Microsoft published the patch fixing the MS17-010 vulnerability. 100% of the WannaCry, Petya, and NotPetya infections occurred roughly two months or more after a patch was provided.
Why? The key word in the opening paragraph is not vulnerability. It’s exposed. Many security experts and members of the public believe that exposing vulnerabilities to the public is the best way to fix a problem. However, it is not. It’s actually one of the best ways to put the public at risk.
Here’s an analogy that can help the reader understand the dangers of exposing security vulnerabilities. Let’s say everyone on earth has decided to wear some kind of body armor sold by a particular vendor. The armor is touted as an impenetrable barrier against all weapons. People feel safe while wearing the armor.
Let’s say a very smart person has discovered a vulnerability that allows the impenetrable defense to be subverted completely, rendering the armor useless. Our very smart individual has a choice to make. What do they do?

Choice One: Sell it to intelligence agencies or law enforcement

Intelligence agencies and law enforcement are normally extremely judicious about using any sort of zero-day exploit.  Because zero-day exploits target unknown vulnerabilities using unknown methods they are covert by nature. If an intelligence agency stupidly started exploiting computers left and right with their zero-day knowledge, they’d lose their covert advantage and their mission would be compromised. It is for this reason that the argument of using zero-day exploits for mass compromise at the hands of intelligence or law enforcement agencies is nonsensical. This argument is often perpetuated by people who have no understanding of or experience in the zero-day industry.
For many hackers this is the best and most ethical option. Selling to the “good guys” also pays very well. The use cases for sold exploits include things like combating child pornography and terrorism. Despite this, public perception of the zero-day exploit market is quite negative. The truth is that if agencies are targeting you with zero-day exploits, then they think that you’ve done something sufficiently bad to be worth the spend.

Choice Two: Sit on it

Our very smart individual could just forget they found the problem. This is security through obscurity. It’s quite hard for others to find vulnerabilities when they have no knowledge of them. This is the principle that intelligence agencies use to protect their own hacking methods. They simply don’t acknowledge that they exist. The fewer people that know about it, the lower the risk to the public. Additionally it is highly unlikely that low-skilled hackers (which make up the majority) would be able to build their own zero-day exploit anyway. Few hackers are truly fluent in vulnerability research and quality exploit development.
Some think that this is an irresponsible act. They think that vulnerabilities must be exposed because then they can be fixed and to fail to do so puts everyone at increased risk. This thinking is unfortunately flawed and the opposite is true. Today’s reports show that over 99% of all breaches are attributable to the exploitation of known vulnerabilities for which patches already exist. This percentage has been consistent for nearly a decade.

Choice Three: Vendor notification and silent patching

Responsible disclosure means that you tell the vendor what you found and, if possible, help them find a way to fix it. It also means that you don’t publicize what you found which helps to prevent arming the bad guys with your knowledge. The vendor can then take that information, create and push a silent patch. No one is the wiser other than the vendor and our very smart individual.
Unfortunately, there have been cases where vendors have pursued legal action against security researchers who come to them with vulnerabilities. Organizations like the Electronic Frontier Foundation have published guides to help researchers disclose responsibly, but there are still legal issues that could arise.
This fear of legal action can also prompt security researchers to disclose vulnerabilities publicly under the theory that if they receive retaliation it will be bad PR for the company. While this helps protect the researcher it also leads to the same problems we discussed before.

Choice Four: Vendor notification and publishing after patch release

Some researchers try to strike a compromise with vendors by saying they won’t publicly release the information they discovered until a patch is available. But given the slow speed of patching (or complete lack of patching) all vulnerable systems, this is still highly irresponsible. Not every system can or will be patched as soon as a patch is released (as was the case with MS17-010). Patches can cause downtime, bring down critical systems, or cause other pieces of software to stop functioning.
Critical infrastructure or a large company cannot afford to have an interruption. This is one reason why major companies can take so long to patch vulnerabilities that were published so long ago.

Choice Five: Exploit the vulnerability on their own for fun and profit.

The media would have you believe that every discoverer of a zero-day vulnerability is a malicious hacker bent on infecting the world. And true, it is theoretically possible that a malicious hacker can find and exploit a zero-day vulnerability. However, most malicious hackers are not subtle about their use of any exploit. They are financially motivated and generally focused on wide-scale, high-volume infection or compromise. They know that once they exploit a vulnerability in the wild it will get discovered and a patch will be released. Thus, they go for short-term gain and hope they don’t get caught.

Choice Six: Expose it to the public

This is a common practice and it is the most damaging from a public risk perspective. The thinking goes that if the public is notified, then vendors will be pressured to act fast and fix the problem. The assumption is also that the public will act quickly to patch before a hacker can exploit their systems. While this thinking seems rational, it is and always has been entirely wrong.
In 2015 the Verizon Data Breach Investigations Report showed that half of the vulnerabilities disclosed in 2014 were being actively exploited within one month of disclosure. The trend of rapid exploitation of published vulnerabilities hasn’t changed. In 2017 the number of breaches was up 29 percent from 2016, according to the Identity Theft Resource Center. A large portion of the breaches in 2017 are attributable to public disclosure and a failure to patch.
So what is the motivator behind public disclosure? There are three primary motivators.  The first is that the revealer believes that disclosure of vulnerability data is an effective method for combating risk and exposure. The second is that the revealer feels the need to defend or protect themselves from the vulnerable vendor.  The third is that the revealer wants their ego stroked. Unfortunately, there is no way to tell the public without also telling every bad guy out there how to subvert the armor. It is much easier to build a new weapon from a vulnerability and use it than it is to create a solution and enforce its implementation.
Exposing vulnerability details to the public when the public is still vulnerable is the height of irresponsible disclosure.  It may feel good and be done with good intention but the end result is always increased public risk (the numbers don’t lie).
It is almost certainly a fact that if EternalBlue had never been leaked by the Shadow Brokers, then WannaCry, Petya, and NotPetya would never have come into existence. This is just one of countless examples like it. Malicious hackers know that businesses don’t patch their vulnerabilities properly or in a timely manner.  They know that they don’t need zero-day exploits to breach networks and steal your data.  The only thing they need is public vulnerability disclosure and a viable target to exploit.  The defense is logically simple but can be challenging for some to implement.  Patch your systems.

What Thieves Know About Anti-Phishing Solutions & What This Means To You

Without taking proper precautions, your computer is a veritable smörgåsbord for hackers. Hackers have developed an array of techniques to infiltrate your system, extract your data, install self-serving software, and otherwise wreak havoc on your system. Every network in the world is vulnerable to hacking attempts; it’s simply a matter of which systems the hackers deem worth the effort. Preventing hackers from successfully compromising your data requires an understanding of the various solutions. However, very few of those solutions are truly effective.

The Differences Between Phishing and Spear Phishing

Phishing casts a wide net to hundreds, thousands, or even millions of email addresses. Phishing can be used to steal passwords, perform wide-scale malware deployment (think WannaCry), or even serve as a component of disinformation campaigns (think Russia). More often than not, phishing is carried out by financially motivated criminals. In most cases, phishing breaches are not detected until it is too late, making it nearly impossible to prevent damage.
Spear phishing, as the name implies, is a more targeted version of phishing. Spear phishing campaigns are generally conducted against companies, specific individuals, or small groups of individuals. The primary goal of spear phishing campaigns is to gain entry into a target network. The DNC hack, for example, was accomplished by using spear phishing as the initial method of breach. Once the breach was effected, the hackers began performing Distributed Metastasis (aka pivoting) and secured access to sensitive data.
In nearly all cases, businesses and governments are ill prepared to defend against phishing attacks. This is in part because the solutions that exist today are largely ineffective. Most commercial phishing platforms provide the same basic level of benefit as automated vulnerability scanners. If you really want to defend against phishing then you need to use a solution designed specifically for you and your network.

Real (not commercial) Tactics For Phishing and Spear Phishing

An email will go out, supposedly from a trusted source. In reality, it will come from a chameleon domain set up specifically by the hackers to leverage your trust. A chameleon domain is a domain which appears to be the same as your company’s domain or a high-profile domain but isn’t. (The domains are often accompanied by a clone website with a valid SSL certificate.) For example, instead of linkedin.com, the chameleon domain might be 1inkedin.com. These two domains might look identical at a glance, but in the second, the lowercase L of LinkedIn has been exchanged for the numeral one. Historically, hackers used Internationalized Domain Name (IDN) homograph attacks to create chameleon domains, but that methodology is no longer reliable.
An email might also arrive from a different Top Level Domain (TLD). Let’s say linkedin.co, linkedin.org, or even linkedin.abc. There are many opportunities for deception when it comes to creating a chameleon domain. All of these oppotrunities exist because the human brain will read a word the same way so long as the first and last letters of the word are in the correct place. For example, you likely read the word "oppotrunities" in the previous sentence as "opportunities" and didn’t notice that we swapped the places of the letters "T" and "R". Experienced hackers are masters at exploiting this human tendency.  (https://www.mrc-cbu.cam.ac.uk/people/matt.davis/cmabridge/)
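One practical countermeasure is to compare the domain of every inbound link against a short list of domains you actually trust and flag near misses. The sketch below is a minimal illustration (the trusted list, the confusable-character map, and the similarity threshold are all assumptions to be tuned for your environment), not a complete anti-phishing control:

```python
# Flag "chameleon" domains that are suspiciously close to a trusted domain
# after normalizing common look-alike characters.
from difflib import SequenceMatcher

TRUSTED = {"linkedin.com", "example.com"}   # hypothetical allow-list
CONFUSABLES = str.maketrans({"1": "l", "0": "o", "3": "e", "5": "s", "7": "t"})

def looks_like_chameleon(domain: str, threshold: float = 0.85) -> bool:
    raw = domain.lower()
    if raw in TRUSTED:
        return False                        # exact trusted match
    normalized = raw.translate(CONFUSABLES)
    for trusted in TRUSTED:
        similarity = SequenceMatcher(None, normalized, trusted).ratio()
        if similarity >= threshold:
            return True                     # near miss: treat as suspicious
    return False

print(looks_like_chameleon("linkedin.com"))    # False: exact trusted domain
print(looks_like_chameleon("1inkedin.com"))    # True:  '1' swapped for 'l'
print(looks_like_chameleon("linkedin.co"))     # True:  look-alike TLD
print(looks_like_chameleon("netragard.com"))   # False: unrelated domain
```

A check like this can run in a mail gateway or proxy; it will not catch every trick (IDN homographs need their own normalization step), but it reliably flags the transposition and digit-swap domains described above.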
When (spear) phishing is combined with malware it becomes a powerful weapon. A common misconception is that antivirus and antimalware software will protect you from infection. If that were in fact true, then things like the recent WannaCry (MS17-010) threat would never have been a problem.  The reality is that antivirus technologies aren’t all that effective at preventing infections. In fact, Intrusion Prevention Systems (IPS) aren’t all that effective at preventing intrusions either. If they were, then we would not be seeing an ever-increasing number of breached businesses (nearly all of which use some form of IPS or third-party MSSP).
The bad guys may target 3 or 30 people with a spear phishing attack. To be successful with a well-crafted attack they only need a single victim.  That victim usually becomes their entry point into a network and from there it is only a matter of time until the network is fully compromised.  With a normal phishing attack, campaigns with larger numbers of victims are desirable. More victims equates to more captured data.

Businesses Making Money from Anti-Phishing

For some companies, not a week goes by without a phishing attempt landing in their email server. Phishing attempts are a source of consternation for companies everywhere.
Security companies, concerned about the devastation that phishing and spear phishing efforts can cause, have taken up the mantle of offering education about phishing to their clients. They have special programs for mid- and large-sized corporations to combat phishing efforts.
Once a company signs up for education, it’s common to test the company soon afterward to see what needs to be covered. For instance, a phishing attempt is made against half or all of the company. It will be a typical, run-of-the-mill ‘attack,’ where the users are given a convenient link and encouraged to go there to ‘make it right’ again.
After clicking on the link, the user is taken to a site which informs them that they were phished, how they were phished, and safety measures to prevent future successful phishing. Information about the success rate of the phishing attempt is also gathered, so the security company has a baseline. From that information, educational materials are given to the company for further training.
A set amount of time later, usually a few months, the security company runs the same type of phishing attempt on the employees of the target company. The success rates are then compared (the second try usually has fewer people who were fooled) and the target company receives certification that they are safer from phishing attempts now that they have been educated.
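To make the before-and-after comparison concrete, here is a minimal sketch of how those two campaigns are typically scored. The campaign figures are hypothetical placeholders; the structure (a baseline click rate, a follow-up click rate, and a relative improvement) is what most anti-phishing vendors report.

```python
# Compare two phishing-simulation campaigns. The figures are hypothetical.

def click_rate(clicked: int, delivered: int) -> float:
    return clicked / delivered if delivered else 0.0

baseline  = click_rate(clicked=87, delivered=400)   # pre-training campaign
follow_up = click_rate(clicked=41, delivered=400)   # post-training campaign

print(f"baseline click rate:  {baseline:.1%}")                           # 21.8%
print(f"follow-up click rate: {follow_up:.1%}")                          # 10.2%
print(f"relative improvement: {(baseline - follow_up) / baseline:.1%}")  # 52.9%
```

Keep in mind that a lower click rate against the same templated lure says little about how the same employees would fare against a customized spear phishing attack, which is the gap discussed below.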

How Effective Are Anti-Phishing Companies?

Employing an anti-phishing security firm can provide a false sense of security for companies that would be vulnerable to phishing attempts. Going through the education reduces the likelihood of a blatant, basic phishing attempt being successful, but it usually does not do much to prevent a real-world, targeted attack, especially a spear phishing one.
Anti-phishing companies generally use automated systems to test a company’s phishability. They use the most rudimentary phishing techniques, but many advertise that their solutions will be more effective than they actually are against real-world phishing attempts. In other words, these anti-phishing companies generally provide a political solution rather than a real solution to the problem of phishing and spear phishing. This is very similar to how vulnerability scanning companies market themselves.
The people who want to break into a company’s systems are patient. They custom-create a strategy to get into your systems rather than sending a blanket email to everyone in the company; that would be too blatant. Their attempts to socially engineer a favorable outcome are most likely going undetected.
The biggest question that an anti-phishing company has to ask itself is whether it is providing the level of security that it is promoting. By certifying employees as being phish-proof, does that mean those employees are truly savvy enough to detect ANY phishing attempt? Is the security company simply marketing, or is it truly interested in protecting its clients against phishing?
Before going with a company that advertises anti-phishing education, keep in mind that spear phishing is highly customized and most likely won’t come to you as an email from Paypal, LinkedIn, or another popular site. It will most likely come to you from someone you know, possibly within your own company. Ask them what measures they plan to take to help you truly fight against the spear phishing attacks at your company.
 

What they are not telling you about the CIA leaks.

The CIA leaks are making huge waves across the world. In a nutshell, the documents claim to reveal some of the hacking capabilities that the CIA has. Many privacy advocates believe that exposure of secrets like these is a net benefit for citizens because it provides transparency in government action. The media also likes leaks like these because it provides excellent story fodder.
But there is one thing that no one is talking about with these leaks, something that has serious long-term consequences for all of our foreign relationships. The concept is called attribution in the intelligence field, and it’s important that everyone get an idea of what it is and why it is important so they can put the real danger of these leaks into the proper context.

What is Attribution?

Attribution is the ability to accurately trace evidence of a situation back to whoever did it. Even if you don’t know the term, these examples will make it quite clear. Let’s say you’re a child on a school playground. You tell your best friend a secret that you don’t want anyone to know about. A few days later, the whole school knows. If you know you didn’t tell anyone else, who told the secret? The obvious one to blame is the best friend. That breach of trust could end your friendship.
That’s a simple example. A more complex one is a murder case. Let’s say that your neighbor kills your best friend in your house, but isn’t caught. Instead, you are accused and you spend a lot of money on lawyers to get the charges dismissed. Your reputation is damaged, but you stay out of jail. The case grows cold.
Now, let’s say over time you become close friends with your neighbor. Later, for whatever reason, the neighbor gets his DNA analyzed and there is a match to the old murder. The neighbor might get arrested, but how would you react?
In the first case, the fact that only one other person knew the secret and leaked it allows us to attribute the leak to that person. In the second, a telltale fingerprint that’s impossible to forge creates an attribution that wasn’t there before and provides ironclad evidence that you weren’t involved.

Leaking and Attribution

Put bluntly, the general public and the media are overreacting about how much the CIA might or might not be using the leaked tools to spy on them. A much more serious concern is what every other government in the world is thinking about the information in these leaks. Here’s why.
One of the roles of any government is to protect the interests of the country and its citizens. Countries use intelligence networks, spies, hacking, and other espionage techniques to gather information in advance about what their enemies and their allies might do next. Failing to get that knowledge puts the country at risk of something called information asymmetry. Other countries can get more information about you than you can about them. It’s like they can peek at your hand in a game of poker before the betting round, but you can’t.
The CIA’s role in America’s spy networks is international intelligence. The CIA isn’t going to turn their attention to people inside of the U.S. unless there is an extraordinarily good reason, despite what conspiracy theorists may think. But foreign governments definitely know the CIA will have at least thought about spying on them at some point. However, unless a spy was caught red-handed and confessed they were a CIA operative, it’s hard for a country to accuse us of spying on them in a specific instance. In short, there is no attribution. Just guesses.
What the CIA leaks do is give information to every government who wants to know how we might hack them. It is extremely difficult to attribute a hacking attack to a specific state actor, despite what the media and television might lead you to believe. You might be able to detect the attack and gather forensic evidence about a hacking incident, but until you can get definitive proof that another country knew about that particular exploit at the time of the attack and had the tools necessary to leverage it, you can’t say for certain. The leak now gives other governments details they can use to analyze their old forensic data and see if there is a match, much like the DNA evidence in the earlier example.
In short, now they can prove that we peeked at their poker hands and know how we did it. The how is also crucial not just for attribution, but for how hacks are conducted between governments.

Hidden Exploits

99.9% of all breaches are the result of the exploitation of known vulnerabilities (ones for which patches exist), many of which have been public for over a year. But those aren’t the vulnerabilities that governments generally want to exploit. They want to target 0-day vulnerabilities with 0-day exploits. A 0-day vulnerability is a bug in software that is unknown to the vendor and the public. A 0-day exploit is software that leverages a 0-day vulnerability, usually to grant its user access to and control over the target. 0-days are the secret on the playground of geopolitical hacking.
Governments want to keep some 0-day exploits as state secrets. Once an exploit is revealed, a defense can be built against it in as little as 24 hours. A 0-day exploit, by contrast, can remain usable for six months or even years. That is a lot of time for a government. But governments don’t want to use these too often anyway. Each time a 0-day exploit is used successfully, it leaves behind some form of forensic evidence that could be used later to gain attribution. The first time might be a surprise. The second will reveal similar patterns across the two attacks. The third time runs the risk of getting caught.
The value of these exploits varies and is determined by operational need, how rare the exploit is, how likely it is to be discovered or detected, and so on. Governments can pay anywhere from tens of thousands of dollars to several million dollars for a single 0-day exploit. Each time a 0-day exploit is used, its lifespan is shortened significantly. In some cases, a 0-day is only used once before it is exposed (burnt). In other cases, 0-day exploits may last years before they are burnt. One thing is always true: if governments are going to spend millions of dollars on 0-day exploits, they are not likely to use them on low-value targets like everyday civilians or for easily detected mass exploitation. They are far more likely to be used against high-value, well-protected targets where detection of the breach simply isn’t an option.
Because these are not open secrets, when 0-day exploit information is released in a leak it becomes far easier to attribute past attacks to a state, and it diminishes that state’s intelligence capabilities. Furthermore, every other government now has leverage against that state, and may even have grievances. They could feel like the unjustly accused murder suspect. And unlike the suspect, states have options that citizens do not in terms of how they can retaliate, such as levying sanctions or declaring war. Worse, they could even claim the moral high ground, even though they might be doing the same thing, simply because they managed to keep their own intelligence information secret.
Regardless of whether you think leakers and whistleblowers are heroes or traitors, there are consequences for leaking intelligence information to the world. The average American citizen doesn’t know, and can’t know, what the foreign consequences will be. Before you go out and cheer the next leak, consider what the consequences might be for our country. What does it mean when we lose our intelligence capabilities and our enemies don’t? What does it mean when our enemies and allies know just how, when, and most importantly, who managed to hack them?

EXPOSED: How These Scammers Tried To Use LinkedIn To Steal Our Client’s Passwords

Earlier this morning, one of our more savvy customers received an email from [email protected]. The email contained a “New Message Received” notification allegedly from CEO Tom Morgan, along with a link that read, “Click here to sign in and read your messages”. Fortunately, we had already provided this particular customer with training that covered Social Engineering and Phishing threats. So, rather than click the link, they forwarded the email to Netragard’s Special Project Team, which is like throwing meat to the wolves. The actual email is provided below in Figure 1.
Figure 1
The first step in learning who was behind this threat was to follow the “click here” link. The link was shortened using the URL shortener ow.ly, so we used curl to expand it. While we were hopeful that the URL would deliver some sort of awesome 0-day or malware, it didn’t. Instead, it served up a fake LinkedIn page (Figure 2) designed to steal login and password information.
Figure 2
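For readers who want to try the expansion step themselves, the short Python sketch below resolves a shortened URL by reading the redirect’s Location header instead of visiting the final page. We used curl during the investigation; this is simply an equivalent illustration, and the ow.ly link shown is a made-up placeholder, not the actual link from the phishing email.

import requests

def expand_short_url(url: str) -> str:
    """Follow a URL shortener's redirect without rendering the final page."""
    # A HEAD request with redirects disabled exposes the Location header,
    # which is the real destination the shortener points at.
    response = requests.head(url, allow_redirects=False, timeout=10)
    return response.headers.get("Location", url)

if __name__ == "__main__":
    # Placeholder URL for illustration only -- not the link from the email.
    print(expand_short_url("https://ow.ly/example"))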
The server hosting the phishing site was located in Lebanon and, of course, was not maintained or patched properly. Quick reconnaissance showed that directory listing was enabled, that the server was running an outdated and very exploitable version of cPanel, and that the server had already been breached by at least four other parties (there were at least four backdoors installed). We used one of the backdoors to gain access to the system in the hopes of learning more (Figure 3).
Figure 3
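As a rough illustration of one of the quick checks mentioned above, the sketch below tests a web path for an exposed Apache-style directory index. The hostname is a hypothetical placeholder; this is a generic example, not the exact tooling our team used.

import requests

def has_directory_listing(url: str) -> bool:
    """Heuristically detect an exposed, auto-generated directory index."""
    response = requests.get(url, timeout=10)
    # Apache and similar servers title their auto-generated indexes "Index of /".
    return response.status_code == 200 and "Index of /" in response.text

if __name__ == "__main__":
    # Hypothetical host used purely for illustration.
    print(has_directory_listing("http://example.com/uploads/"))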
 
Our team quickly zeroed in on the “linkd.php” file that was used to generate the phishing page shown in Figure 2. We explored the file looking for information about where the stolen passwords were being kept. Initially we expected to see the passwords logged to a text file, but we later found that they were being emailed to an external Gmail account. We also looked for anything that might tell us who was being targeted by this attack, but didn’t find much on the system.
We were able to identify the victims of the campaign by making hidden modifications to the attackers’ phishing platform. These modifications allowed us to track who submitted their credentials to the phishing site. When studying the submission data, it quickly became apparent that the attackers were almost exclusively targeting Luxembourg-based email addresses (the .lu TLD) and were having a disturbingly high degree of success. Given that people often reuse passwords in multiple locations, this campaign significantly increased the level of risk faced by the organizations that employ the victims. More directly, chances are high that those organizations will be breached using the stolen passwords.
The LinkedIn campaign was hardly the only campaign being launched from the server. Other campaigns were identified that included, but may not be limited to, DHL, Google, Yahoo, and DropBox. The DropBox campaign was by far the most technically advanced. It leveraged a blacklist to avoid serving the phishing content to Netcraft, Kaspersky, BitDefender, Fortinet, Google, McAfee, AlienVault, Avira, AVG, ESET, Doctor Web, Panda, Symantec, and others. In addition to the blacklisting, it used an external proxy checker to ensure page uptime.
Finally, we tracked the IP addresses that were connecting to the system’s various backdoors. Those IP addresses all geolocated to Nigeria and are, unfortunately, dynamic.
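For anyone curious how that kind of lookup is typically done, the sketch below uses MaxMind’s geoip2 Python library with a locally downloaded GeoLite2 database. The database path and the IP address are placeholders, not the attackers’ actual addresses.

import geoip2.database
from geoip2.errors import AddressNotFoundError

# Assumes a GeoLite2-Country database has been downloaded from MaxMind.
with geoip2.database.Reader("GeoLite2-Country.mmdb") as reader:
    # Placeholder address from a reserved documentation range, shown for
    # illustration only; substitute the addresses pulled from the logs.
    try:
        match = reader.country("203.0.113.7")
        print(match.country.iso_code, match.country.name)
    except AddressNotFoundError:
        print("Address not present in the GeoLite2 database")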
 
Summary
This phishing campaign highlights two specific issues that can both be countered with careful planning. The first is that employees are easy to phish, especially when they are outside of the office and not protected by spam filters. This is problematic because employees often reuse the same passwords at work as they do outside of work. So stealing a LinkedIn password often provides attackers with access to other, more sensitive resources, which can quickly result in a damaging breach and access to an organization’s critical assets. The solution to this issue is reasonably simple. First, employees should be required to undergo regular training on various aspects of security, including but not limited to Social Engineering and Phishing. Second, employers should require employees to use password management tools similar to 1Password. Using password management tools properly will eliminate password reuse and significantly mitigate the potential damage associated with password theft.
As for our Nigerian friends, they won’t be operating much longer.
