What You Need to Know About Penetration Testing Liability

Penetration tests are designed to identify gaps in an organization's cybersecurity, but an effective penetration test also carries a variety of risks.  Before engaging a penetration test provider, it is essential to understand those risks, how to minimize them, and why a good penetration testing firm will not be able to accept liability for actions performed in good faith.


A Good Penetration Test Carries the Potential for Damages and/or Outages

No reputable penetration testing firm will guarantee that its services are entirely safe.  Any provider that does so is likely either being deceptive or using testing tools so ineffective as to be essentially worthless.

The reason safety cannot be guaranteed is that many computing systems and programs are fairly unstable even during normal operation.  How many times have you had Microsoft Office or Excel crash and cause data loss during normal use?  If these and other programs were completely reliable, Microsoft wouldn't have bothered developing AutoSave.

If these systems are so unstable during “normal” use, consider the expected impacts of the very unusual conditions that they will be subjected to during a pentesting engagement.  Penetration testing is designed to identify the bugs in software that an attacker would exploit as part of their attacks.  The best way to locate and determine the potential impact of these vulnerabilities is to use the same tools and techniques that a real attacker would.  This poking and prodding, while carried out with the best of intentions, falls outside the definition of “normal” use for this software.

While the probability that a penetration test will cause a significant failure or damage is less than 1%, it is still possible.  For this reason, when undergoing a penetration test designed to provide an accurate assessment of an organization’s systems and cyber risk, it is impossible for the testing provider to accept liability for outages and other damages caused by reasonable testing activities performed in good faith.

The Level of Risk Depends on the Services Provided

All penetration tests carry some risk of outages or other damages to the systems under test.  However, not all penetration tests are created equal, and different types of tests carry varying levels of risk.  Depending on the type of test performed, an organization may need to accept a higher level of risk and different types of risk.

Automated Tests are Higher Risk

One of the main determinants of risk in a penetration test is the type of penetration testing services provided.  Basic tests, which rely heavily upon automation, are much riskier than realistic threat penetration testing.

Some penetration test providers rely heavily upon automated tools such as scanners and exploitation frameworks to reduce the manual work required during a test.  While this may improve the speed, cost, and scalability of the test, it does so at the cost of significantly increased risk.  The scripted tests performed by these tools launch an attack if a system “appears” to be vulnerable without checking for strange or risky conditions.  This dramatically increases the probability that a system will experience a memory error or other issue that will cause the program to crash.

Realistic threat penetration testing, on the other hand, carries a lower level of risk because it realistically emulates what a skilled attacker would do when attempting to exploit the system.  Cybercriminals attacking a system are trying not to be noticed, and they use tools and techniques designed to minimize the probability of detection before they achieve their objectives.  Testing driven by human talent, experience, and expertise is more likely to avoid damage and outages, and to minimize risk, than a penetration test that relies heavily upon automation.

Basic Tests Carry Long-Term Risk

The risks associated with a penetration test are not limited to potential damages and outages.  A poorly-performed penetration test also carries the potential for long-term risks to an organization.

Companies commonly undergo penetration testing to fulfill regulatory requirements and often select the most basic test available in an attempt to “check the box” for compliance.  While these basic tests may earn a compliant rating, they do little to measure the organization’s true cybersecurity risk.

In the long term, this reliance upon basic tests carries significant cost for an organization.  Companies like Equifax, Target, Sony, Hannaford, and Home Depot were all deemed compliant with applicable regulations yet suffered damaging data breaches.  In fact, the CEO of Target said, "Target was certified as meeting the standard for the payment card industry in September 2013. Nonetheless, we suffered a data breach."

The ROI of good security is the cost in damages of the single successful compromise that it prevents.  When a breach does occur, the cost of all of the security technologies and testing that failed to prevent it gets added to the damages as well.

Is Your Vendor Delivering Genuine Penetration Testing Services?

Determining whether a penetration testing vendor offers automated or manual testing services may seem difficult.  However, looking at how the provider calculates the cost of an assessment is an easy way to find out.

Any penetration testing provider will need to scope the project before providing a quote.  The questions they ask and the actions they perform during the scoping phase can help you determine the type of services they actually provide.

Vendors whose services depend upon automation will ask questions like "How many IP addresses do you have?" or "How many pages does your application have?"  These questions are important to them because they determine the number of scans the vendor expects to perform.

However, these kinds of questions do not provide a realistic assessment of the complexity of the penetration testing engagement.  If a customer tells a vendor that they have 10 IP addresses and a web application with 10 pages, the vendor might bill them $500.00 per IP and $1,000.00 per page, totaling $15,000.00.  While that price sounds reasonable at face value, what happens if none of the 10 IP addresses hosts any connectable services?  In workload terms, that is 0 man-hours.  Despite this, the customer in our example would still pay $5,000.00 for 0 hours of work.  Does that sound reasonable?

The inverse is also true, and it is the exact reason why these types of vendors rely on automated scanning (rather than manual testing) for service delivery.  What happens if each of the 10 IP addresses requires 40 hours of work, totaling 400 man-hours?  The cost would still be $5,000.00 as quoted, which means that the vendor would need to work at an effective rate of $12.50 an hour!  Of course, no vendor will work at that rate, so they compensate for the overage with automation and simply don't tell their customer.
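
To make the arithmetic above concrete, here is a minimal sketch (in Python, using the hypothetical $500-per-IP rate and hour figures from this example) of why a count-based quote says nothing about the work actually delivered:

```python
# Illustrative only: count-based pricing vs. the workload actually behind it.
# The per-IP rate and hour figures are the hypothetical values from the text above.

PRICE_PER_IP = 500.00

def count_based_quote(ip_count: int) -> float:
    """The vendor's quote, derived purely from the number of targets."""
    return ip_count * PRICE_PER_IP

def effective_hourly_rate(quote: float, hours_of_work: float) -> float:
    """What the vendor actually earns per hour of manual testing."""
    return float("inf") if hours_of_work == 0 else quote / hours_of_work

quote = count_based_quote(10)                            # $5,000.00 either way
print(effective_hourly_rate(quote, hours_of_work=0))     # no connectable services: paid for 0 hours of work
print(effective_hourly_rate(quote, hours_of_work=400))   # 40 hours per IP: $12.50 per hour
```

Either extreme is possible under count-based pricing, which is exactly why vendors who price this way fall back on automation to absorb the overage.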

The same is true for web pages.  It is entirely possible to have 10 static web pages that cannot be attacked because they take no input.  It is also possible to have 10 web pages, each of which takes significant input and could require more than 40 hours of testing per page.  So, as a general rule of thumb, companies looking for penetration testing services should avoid vendors that price based on the count of targets (IP addresses, pages, etc.).  These vendors are generally the ones that deliver basic tests packaged as advanced tests.  They also face higher liability, as was demonstrated when Trustwave was sued by banks for certifying Target.

Why a Penetration Testing Firm Can’t Accept Liability

A good penetration testing firm cannot accept liability for damages and outages caused by their services.  While a good, manual penetration test does everything that it can to minimize the potential for something to go wrong, unforeseen circumstances can cause unanticipated issues.

Some classic examples of penetration tests that went wrong in unexpected ways are the recent cases of Coalfire and Nissan.

In Coalfire's case, penetration testers were hired to perform an assessment of the physical security of an Iowa courthouse building.  Due to a misunderstanding about the terms of the engagement, the penetration testers were arrested and faced felony charges when they were found testing the security of the courthouse after hours.

The Nissan case, on the other hand, is an example of a penetration test that was commissioned under false pretenses.  The VP of the company had the test performed without the knowledge of the CEO or key IT personnel.  The results of the test were used to gain access to the email account of the company chairman and access data used to bring charges against him for financial misconduct.

A penetration testing vendor cannot accept liability for potential damages caused by the test because some aspects are completely outside of their control.  This is why you shouldn’t be alarmed to see a release of liability statement in a penetration testing agreement and might have cause for concern if a vendor provides a contract that lacks such a clause.

How To Scope a Penetration Test (The Right Way)

How to Define the Scope of Your Next Pentest Engagement

One of the most important factors in the success of a penetration test is its scope.  Scope limitations are an understandable and even common desire.  However, they can make the results of a pentest worse than useless by providing a false sense of security.  Read on to learn why it is important to work with and trust your pentest provider when scoping your next pentest engagement.


Scope Limitation

One of the common challenges encountered when defining the terms of a penetration test is scope limitation.  When deciding which techniques and systems are "fair game" and which are "off limits", many organizations narrow the scope of the engagement too far.

In some cases, scope limitations make sense.  For example, limiting the use of destructive techniques on production systems is an understandable restriction, since those techniques can negatively impact business operations.

However, other scope limitations are more harmful than helpful.  A common example is forbidding the use of social engineering during a penetration test to avoid hurting the feelings of employees who might fall for the attack.  Since social engineering is one of the tactics most commonly used by cybercriminals, ignoring it as a potential attack vector places the company at risk.

Scope limitations enable an organization to decrease the cost of a penetration test and avoid uncomfortable attack vectors.  However, this comes at the cost of the effectiveness of the exercise and the accuracy of the results.

The Crown Jewels

Often, customers will focus on their "crown jewels" when defining the scope of their pentest engagement.  This focus on the security of the crown jewels makes perfect sense, since these databases and systems are essential to the company's ability to do business.  Additionally, like the layers of an onion, the further you go from the core (the crown jewels), the larger the scope gets and the more expensive the pentest engagement becomes.

The problem is that, while limiting the scope of an engagement to a few security-hardened servers may be sufficient for compliance purposes, it could also give a false sense of security.

Inside the Head of an Attacker

Attackers are lazy. They will almost always choose the path of least resistance.

A compromise rarely begins with a direct attack against a fully patched Active Directory server. Creating a zero-day exploit is time-consuming, expensive, and not worth the effort for attacking most organizations.

Instead, an attacker will likely start their search for an entry point by looking at an organization's legacy systems.  Even if these systems contain little of value, they are more likely to contain vulnerabilities that make them easier to compromise.  Once an attacker has established a foothold, it is much easier to move laterally within the network and pivot to better-secured systems.

We take the same approach when planning and performing a penetration test.  Any vulnerability scanner can check to see if your “crown jewel” systems have an exploitable vulnerability, and you’ve probably already performed the scan and fixed the issues before calling us in.  As penetration testers, we look for the easy but often-overlooked attack vectors that a real attacker is likely to take advantage of.  In the past, we’ve fully compromised networks via printers, IoT devices, legacy systems ready to be decommissioned, etc.

Your Network is Only as Secure as Your Least Secure Asset

If you don’t believe this, take a look at these two real-world examples from previous penetration testing engagements.

From a Public Printer to Full Control

One pentest engagement started with an audit of the security of an organization’s open guest network and ended with full access to the domain.

From this guest network, we connected to an MFP printer that was configured with an email account so it could send scanned documents via email.  It turned out that the email account was also a valid domain user account.  We also established that this account had been added to the "Remote Access" domain group.

After discovering a few configuration files in open network shares, we used that compromised account to connect to the corporate VPN (no 2FA).  From there we were able to pivot, escalate privileges, and gain full admin access to the domain.

All this from an insecure printer connected to the guest WiFi network…

Taking Advantage of Legacy Applications

Starting from a legacy application, we managed to compromise the corporate network of a client (a SaaS provider).

The corporate network was only used for office applications, and all their SaaS applications were hosted by a separate cloud provider. They put a lot of thought and effort into segmenting the corporate network (less critical) from their SaaS network (hosting all their customers’ information).

However, having access to the corporate environment, we also had access to their emails. We then proceeded to reset the password of one of their users who had access to the cloud provider hosting their SaaS environment. By timing the attack, we were able to recover the password reset link from that user's email and reset their password on the cloud service provider.

This granted us access to the support ticket system for that cloud provider, and, browsing through the old tickets, we found several VPN configurations (including credentials and seeds to generate OTP for 2FA).

In the end, from a legacy application, we were able to pivot through the corporate network, gain access to the email server, and then pivot again to their cloud service provider.

Properly Scoping a Pentest Engagement

Engaging with a service provider for a penetration test requires trust in that provider.  No network is 100% secure, which means – with high probability – a good pentester will be able to find a vulnerability that gives them access to and control over your network, systems, and data.

If you trust your pentest provider with that level of access, it makes sense to trust them to help you define the scope of your engagement.  Sit down with your provider, tell them your vision for the engagement, and then ask for their opinion.  If there are things that you want to place "out of scope", a good pentester should be able to explain the associated risks and help you make an informed decision.

A pentest engagement should be a partnership, and this includes the planning stages of the engagement.  If, when talking to a potential provider, you don’t feel comfortable with letting them help you define the scope of the engagement (based upon their knowledge and experience), then don’t let them anywhere near your networks.

How To Become A Hacker – CyberSecurity Careers

With Cybersecurity Career Talks

Do you want to know how to become a hacker? Learn from world-renowned hackers, cybersecurity experts, and social engineering experts. Adriel Desautels, Jayson E. Street, and Philippe Caturegli share the mindset, training, experience, and education (if any) required for a cybersecurity career.

Who is a hacker? A person who finds innovative ways of solving problems. We cover the attributes required for breaking into cybersecurity: mindset, soft skills, and other requirements. How important are networking and social media in a job search? What are some tips for a career changer? We discuss what works for these experts, what they have seen work for friends, and how you can plan a successful cybersecurity career.

The dark side of bug bounties

Bug bounty companies (often billed as crowd-sourced penetration testing) are all the hype.  The primary argument for using their services is that they provide access to a large crowd of testers, which purportedly means that customers will always have a fresh set of eyes looking for bugs.  They also argue that traditional penetration testing teams are finite and, as a result, tend to go stale in terms of creativity, depth, and coverage.  While these arguments seem to make sense at face value, are they accurate?


The first thing to understand is that the quality of any penetration test isn't determined by the volume of potential testers, but by their experience, talent, and overall capabilities.  A large group of testers with average talent will never outperform a small group of highly talented testers in terms of depth and quality.  A great parallel is when the world's largest orchestra played the Ninth Symphonies of Dvořák and Beethoven.  While that orchestra was made up of 7,500 members, the quality of its performance was nothing compared to that produced by the Boston Symphony Orchestra (which is made up of 91 musicians).

Interestingly, it appears that bug hunters are incentivized to spend as little time as possible per bounty.  This is because bug hunters need to maintain a profitable hourly rate while working, or their work won't be worth their time.  For example, a bug hunter might spend 15 minutes to find a bug and collect a $4,000.00 bounty, which is an effective rate of $16,000.00 per hour!  In other cases, a bug hunter might spend 40 hours to find a bug and collect a $500.00 bounty, which is a measly $12.50 per hour in comparison.  Even worse, they might spend copious time finding a complex bug only to learn that it is a duplicate and collect no bounty at all (wasted time).
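
As a rough sketch (using the hypothetical bounty amounts and hours above), the incentive problem is easy to quantify:

```python
# Effective hourly rate for a bug hunter; hypothetical figures from the text above.
def effective_rate(bounty_usd: float, hours_spent: float) -> float:
    """Bounty earned per hour of hunting; duplicates pay nothing."""
    return bounty_usd / hours_spent

print(effective_rate(4000, 0.25))  # quick, shallow bug: $16,000.00 per hour
print(effective_rate(500, 40))     # deep, complex bug:  $12.50 per hour
print(effective_rate(0, 40))       # duplicate finding:  $0.00 per hour
```

The rational move is to chase many shallow bugs rather than a few deep ones, which is exactly the incentive problem described above.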

This argument is further supported when we appraise the quality of the bugs disclosed by most bug bounty programs.  We find that most of the bugs are rudimentary in terms of ease of discovery, general complexity, and exploitability.  The bugs regularly include cross-site scripting vulnerabilities, SQL injection vulnerabilities, easily spotted configuration mistakes, and other common problems.  On average, they appear to be somewhat more complex than what might be discovered using industry-standard automated vulnerability scanners and less complex than what we've seen exploited in historical breaches.  To be clear, this doesn't suggest that all bug hunters are low-talent individuals, but rather that they are not incentivized to go deep.

In contrast to bug bounty programs, genuine penetration testing firms are incentivized to bolster their brand by delivering depth, quality, and maximal coverage to their customers.  Most operate under a fixed cost agreement and are not rewarded based on volume of findings, but instead by the repeat business that is earned through the delivery of high-quality services.  They also provide substantially more technical and legal safety to their customers than bug bounty programs do.

For example, we evaluated the terms and conditions of several bug bounty companies, and what we learned was surprising.  Unlike traditional penetration testing companies, bug bounty companies do not accept any responsibility for the damages or losses that might result from the use of their services.  They explicitly state that the bug hunters are independent third parties and that any remedy with respect to loss or damages that a customer seeks to obtain is limited to a claim against that bug hunter.  What's more, the vetting process for bug hunters is lax at best.  In nearly all cases background checks are not run, and even when they are, the bug hunter could provide a false identity. The validation of who a bug hunter really is is also lacking: to sign up for most programs, you simply need to validate your email address. In simple terms, organizations that use bug bounty programs accept all of the risk and have no realistic legal recourse, even if a bug hunter acts maliciously.

To put this into context, bug bounty programs effectively provide anyone on the internet with a legitimate excuse to attack your infrastructure.  Since these attacks are expected as part of the bug bounty program, they may impair your ability to differentiate between an actual attack and an attack from a legitimate bug hunter.  This creates an ideal opportunity for bona fide malicious actors to hide behind bug bounty programs while working to steal your data. When you combine this with the fact that it takes an average of ~200 days for most organizations to detect a breach, the risk becomes even more apparent.

There's also the issue of GDPR. GDPR increases the value of personal data on the black market and to organizations alike.  Under GDPR, if the personal data of a European citizen is breached, the organization that suffered the breach can face heavy fines, penalties, and more. Article 4 of the GDPR defines a personal data breach as "a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to personal data transmitted, stored or otherwise processed". While bug bounty programs target configurations, systems, and implementations, they do not incentivize bug hunters to go after personal data.  However, because of GDPR, a malicious bug hunter who exploits a vulnerability that discloses personal data (accidentally or not) may be incentivized to ransom their finding for a higher dollar value. Likewise, organizations might be incentivized to pay the ransom and report it as a bounty to avoid having to notify the Data Protection Authorities ("DPA") as required by GDPR.

On a positive note, many of our customers use bug bounty programs in tandem with our Realistic Threat Penetration Testing services.  Customers who use bug bounty programs have far fewer low-hanging-fruit vulnerabilities than those who don't.  In fact, we are confident that bug bounty programs are markedly more effective at finding bugs than automated vulnerability scanning could ever be. It's also true that these programs are more effective than penetration testing vendors who deliver services based on the output of automated vulnerability scans.  Compared to a research-driven penetration test, however, bug bounty programs fall short.

Industry standard penetration testing and the false sense of security.

Our clients often hire us as part of their process for acquiring other businesses.  We've played a quiet role in the background of some of the largest acquisitions to hit the news and some of the smallest that you've never heard of.  In general, we're tasked with determining how well secured the networks of the organization to be acquired are prior to the acquisition.  This is important because the acquisitions are often focused on sensitive intellectual property like patents, drug formulas, technology, etc.  It's also important because in many cases networks are merged after an acquisition, and merging into a vulnerable situation isn't exactly ideal.

Recently we performed one of these tests for a client, but post- rather than pre-acquisition.  While we can't (and never would) disclose information that could be used to identify one of our clients, we do share the stories in a redacted and revised format.  In this case our client acquired an organization (we'll call it ACME) because they needed a physical presence to help grow business in that region of the world.  ACME alleged that their network had been designed with security best practices in mind and provided our client with several penetration testing reports from three well-known vendors to substantiate their claims.

After the acquisition of ACME, our client was faced with the daunting task of merging components of ACME's network into their own.  This is when they decided to bring our team in to deliver a Realistic Threat Penetration Test™ against ACME's network.  Just for perspective, Realistic Threat Penetration Testing™ uses a methodology called Real Time Dynamic Testing™, which is derived from our now infamous zero-day vulnerability research and exploit development practices. In simple terms, it allows our team to take a deep, research-based approach to penetration testing and provides greater depth than traditional penetration testing methodologies.

When we deliver a Realistic Threat Penetration Test, we operate just like the bad guys but in a slightly elevated threat context. Unlike standard penetration testing methodologies, Real Time Dynamic Testing™ can operate entirely devoid of automated vulnerability scanning.  This is beneficial from a quality perspective because automated vulnerability scanners generally produce low-quality results. Additionally, automated vulnerability scanners are noisy, increase the overall risk of outages and damage, and generally can't be used covertly.  When testing in a realistic capacity, being discovered is most certainly disadvantageous.  As Sun Tzu said, "All warfare is based on deception".

When preparing to breach an organization, accurate and actionable intelligence is paramount.  Good intelligence can often be collected without sending any packets to the target network (passive reconnaissance).  Hosts and IP addresses can be discovered using services like those provided by domaintools.com or via Google dorks.  Services, operating systems, and software versions can be discovered using other tools like censys.io, Shodan, and others.  Internal information can often be extracted by searching forums, historical breaches, or pulling metadata out of materials available on the Internet.  An example of how effective passive reconnaissance can be is visible in the work we did for Gizmodo related to their story about Crosscheck.
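
To illustrate the idea, here is a minimal reconnaissance sketch in Python. It queries Shodan's existing scan data rather than probing the target directly, so no packets are sent to the organization's network at this stage. It assumes the shodan Python package, a valid API key, and a hypothetical organization name:

```python
import shodan  # pip install shodan

API_KEY = "YOUR_SHODAN_API_KEY"   # placeholder
api = shodan.Shodan(API_KEY)

# Search Shodan's index for hosts attributed to the (hypothetical) target org.
results = api.search('org:"ACME Corp"')

for match in results["matches"]:
    # Each match describes a service Shodan has already observed: IP, port,
    # and, where available, the product banner and version.
    print(match["ip_str"], match["port"],
          match.get("product", "unknown"), match.get("version", ""))
```

Because the query runs against Shodan's index rather than the target itself, it produces the kind of target list described above without tripping any of the target's own monitoring.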

Using passive reconnaissance against ACME, we discovered a total of three externally connectable services.  One was a VPN endpoint, the second was a web service listening on port 80, and the third was the same service listening on port 443.  According to passive recon, the services on 80 and 443 were provided by a web-based database management software package.  This was obviously an interesting target and something that shouldn't be internet exposed.  We used a common web browser to connect to the service and were presented with a basic username and password login form.  When we tried the default login credentials for this application (admin/admin), they worked.
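
The check itself requires nothing more than a handful of ordinary login attempts.  The sketch below is illustrative only: the URL, form field names, and success indicators are hypothetical placeholders that would come from inspecting the real application's login page.

```python
import requests

# Hypothetical login endpoint and form field names -- adjust to the real application.
LOGIN_URL = "https://203.0.113.10/login"
DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

for username, password in DEFAULT_CREDS:
    resp = requests.post(
        LOGIN_URL,
        data={"username": username, "password": password},
        timeout=10,
        allow_redirects=False,
        verify=False,  # self-signed certificates are common on admin interfaces
    )
    # Success indicators vary by product; a redirect to a dashboard or a newly
    # issued session cookie are typical signs the credentials were accepted.
    if resp.status_code in (301, 302) or resp.cookies:
        print(f"Possible valid default credentials: {username}:{password}")
```

A handful of ordinary login attempts like this looks nothing like an automated scan, which, as described below, is exactly why ACME's scan-blocking firewall never reacted.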

At this point you might be asking yourself why we were able to identify this vulnerability when the three previous penetration testing reports made no mention of it.  As it turns out, this vulnerability would have been undetectable using traditional methodologies that depend on automated vulnerability scanning.  This is because the firewall used by ACME was configured to detect and block (for 24 hours) the IP addresses associated with any sort of automated scan.  It was not configured to block normal connection attempts.  Since we did passive reconnaissance, the first packet we sent to the target was the one that established the connection with the database software.  The next series of packets was related to successful authentication.

After using the default credentials to authenticate to the management application, we began exploring the product.  We realized that we had full control over a variety of databases that ranged from non-sensitive to highly sensitive.  These included customer databases, password management, internal chat information, an email archive, and much more.  We couldn't find any known vulnerabilities in the management software, but it didn't seem particularly well written from a security perspective.  In short order we found a vulnerability in an upload function and used it to upload a backdoor to the system.  When we connected to the backdoor, we found that it was running with SYSTEM privileges.  What's even more shocking is that we quickly realized we were on a domain controller.  Just to be clear: the internet-connectable database management software that was accessible using default credentials was running on a domain controller.

The next step was for us to determine what the impact of our breach was.  Before we did that, though, we exfiltrated the password database from the domain controller for cracking.  Then we created a domain admin account called "Netragard" in an attempt to get caught.  While we were waiting to get caught by the networking team, we proceeded with internal reconnaissance.  We quickly realized that we were dealing with a flat network and that not everything on the network was domain-connected.  So, while our compromise of the domain controller was serious, it would not provide us with total control.  To accomplish that, we needed to compromise other assets.

Unfortunately for ACME, this proved to be far too easy a task.  While exploring file shares, we found a folder aptly named "Network Passwords".  Sure enough, contained within that folder was an Excel spreadsheet containing the credentials for all of the other important assets on the network.  Using these credentials, we were able to rapidly escalate our access and take full control of ACME's infrastructure, including but not limited to its firewall, switches, financial systems, and more.

Here are a few important takeaways from this engagement:

  • The penetration testing methodology matters. Methodologies that depend on automated scanners, even if whitelisted, will miss vulnerabilities that attackers using a hands-on, research-based approach will not.
  • Default configurations should always be changed as a matter of policy to avoid easy compromise.
  • Use two factor authentication for all internet connectable services.
  • Do not expose sensitive administrative applications to the internet. Instead, configure a VPN with two factor authentication and use that to access sensitive applications.
  • Domain controllers should be treated like domain controllers and not like web application servers.
  • Domain controllers should not be Internet connectable or offer internet connectable services.
  • Do not store passwords in documents even if they are encrypted (we can crack them).
  • Always doubt your security posture and never allow yourself to feel safe. The moment you feel safe is the moment that you’ve adopted a false sense of security.

 

The reality behind hospital and medical device security.

We recently presented at the DeviceTalks conference in Boston, MA, about the vulnerabilities that affect hospitals and medical devices (insulin pumps, pacemakers, etc.).  The goal of our presentation wasn't to instill fear, but sometimes fear is a reasonable byproduct of the truth.  The truth is that of all the networks we test, hospital networks are by far the easiest to breach.  Even more frightening is that the medical devices contained within hospital networks are as vulnerable as, if not more vulnerable than, the networks they are connected to.  It seems that the healthcare industry has spent so much time focusing on safety that it has all but lost sight of security.

The culprit behind this insecurity is mostly convenience.  Hospitals are generally run by healthcare experts with a limited understanding of information technology and an even more limited understanding of IT security. It would be unreasonable to expect healthcare experts to also be IT security experts given the vast differences between the two fields. When healthcare experts hire IT experts and IT security experts, they do it to support the needs of the hospital.  Those needs are defined by the doctors, nurses, and other medical professionals tasked with running the hospital.  Anything that introduces new complexity or significant change will be slow to be adopted, or perhaps not adopted at all.  Unfortunately, good security is the antithesis of convenience, and so good security often falls by the wayside despite the best efforts of IT and security personnel.

Unfortunately, in many respects the IT security industry is making the situation worse with false advertising.  If antivirus solutions worked as well as they are advertised, then malware would be a thing of the past.  If Intrusion Prevention Solutions worked as well as advertised, then intrusions would be a thing of the past.  This misrepresentation of the capabilities provided by security solutions produces a false sense of security.  We aren't suggesting that these solutions are useless, but we are encouraging organizations to carefully test the performance and effectiveness of these solutions rather than simply trusting the word of the vendor.

After we breach a network, there is roughly a 30-minute window during which we are susceptible to being ejected from it. Most malicious hackers have a similar or larger window of susceptibility.  If a breach is responded to within that window, then we will likely lose access to the network and be back to square one (successful damage prevention by the client).  If we are not detected before that window expires, then the chance of successfully ejecting us from the network is close to zero.  Astonishingly, the average length of time it takes for most organizations to identify a breach is 191 days.  Rather than focusing on breach prevention (which is an impossibility), organizations should be focusing on breach detection and effective incident response (which is entirely attainable).  An effective incident response will prevent damage.

Within about 40 minutes of breaching a hospital network, our team takes inventory.  This process involves identifying systems that are network connected and placing them into one of two categories: medical devices and IT systems.  Contained within the IT systems category are things like domain controllers, switches, routers, firewalls, and desktops.  Contained within the medical device category are things like imaging systems, computers used to program pacemakers, insulin pumps, etc.  On average, the systems in the medical device category run antiquated software and are easier to take control of than the IT devices.  This is where security and safety intersect and become synonymous.

These medical device vulnerabilities afford attackers the ability to alter the operation of life-critical systems.  More candidly, computer attackers can kill patients who depend on medical devices.  The reality of medical device vulnerability is nothing new, and it doesn't seem to be getting any better. This is clearly evidenced by the ever-increasing number of medical device recalls triggered by discovered cybersecurity vulnerabilities. These vulnerabilities exist because the security of the software being deployed on medical devices is not sufficiently robust to safeguard the lives of the patients who rely on them.

More frightening is that attackers don't need to breach hospital networks to attack medical devices.  They can attack medical devices such as implants from afar using a laptop and a wireless antenna.  This was first demonstrated in 2011 by security researcher Barnaby Jack.  He proved the ability to wirelessly attack an insulin pump from a distance of 90 meters, causing it to repeatedly deliver its maximum dose of 25 units until its full reservoir of 300 units was depleted.  In simple terms, Barnaby demonstrated how easily an attacker could kill someone with a keyboard and make it look like a malfunction.  He also did the same thing with a pacemaker, causing it to deliver a lethal 840-volt shock to its user.  Similar attacks are still viable today and affect a wide variety of life-supporting devices.

To solve this problem, two things need to happen.  The first is that medical device manufacturers need to begin taking responsibility for the security of their devices.  They need to recognize that security is, in many cases, a fundamental requirement of safety.  They also need to take a proactive approach to security rather than a reactive one.  In our experience, medical device manufacturers are unfriendly when interfacing with vulnerability researchers.  They might want to reconsider, and offering bug bounties would be a step in the right direction.

Hospitals need to make some significant changes too.  They need to begin putting security above convenience when it has the potential to impact patient safety.  This might mean installing good password managers and enforcing strong passwords with two-factor authentication, increasing security budgets, or even paying for good security training programs.  Most hospitals are focused on patient safety but fail to recognize that IT security and patient safety are now synonymous.

We protect voters from people like us.

Dear Kris Kobach,
We recently read an article published by Gizmodo about the security of the network that will be hosting Crosscheck.  In that article we noticed that you said, "They didn't succeed in hacking it," referring to the Arkansas state network.  First, to address your point: no, we did not succeed in hacking the network, because we didn't try.  We didn't try because hacking the network without contractual permission would be illegal, and we really don't want to do anything illegal.  Our goal here at Netragard is to protect people, their data, and their privacy through the delivery of Realistic Threat Penetration Testing services.
We would like to offer you our Realistic Threat Penetration Testing services one time, free of charge, as a way to help protect the privacy of the American people. In exchange, we would like a public statement about your collaboration with Netragard to help improve your security.
 
Sincerely,
Netragard, Inc.
 
 
 

Whistleblower Series – The real problem with China isn't China, it's you.

Terms like China, APT, and zero-day are synonymous with Fear, Uncertainty, and Doubt (FUD).  The trouble is that, in our opinion anyway, these terms and the respective news articles detract from the actual problem.  For example, in 2011 only 0.12% of compromises were attributed to zero-day exploitation and 99.88% were attributed to known vulnerabilities.  Yet despite this fact, the media continued to write about the zero-day threat as if it were a matter of urgency.  What they really should have been writing about is that the majority of people aren't protecting their networks properly.  After all, if 99.88% of all compromises were the result of the exploitation of known vulnerabilities, then someone must not have been doing their job. Moreover, if people are unable to protect their networks from the known threat, then how are they ever going to defend against the unknown?

All of the recent press about China and their Advanced Persistent Threat is the same: it detracts from the real problem.  More clearly, the problem isn't China, Anonymous, LulzSec, or any other FUD-ridden buzzword.  The problem is that networks are not being maintained properly from a security perspective, and so threats are aligning with risks to successfully effect penetration.  A large part of the reason why these networks are such soft targets is that their maintainers are sold a false sense of security from both the services and the technology perspective.

In this article we'll show you how easy it was for us to hack into a sensitive government network that was guarded by industry technologies and testing best practices.  Our techniques deliberately mimicked those used by China.  You'll notice that the techniques aren't particularly advanced (despite the fact that the press calls them Advanced) and are in fact based largely on common sense.  You'll also notice that we don't exploit a single vulnerability other than the one that exists in a single employee (the human vulnerability).  Because this article is based on an actual engagement, we've altered certain aspects of the story to protect our customer and their identity.  We should also mention that since the delivery of this engagement, our customer has taken effective steps to defeat this style of attack.  Here's the story…

We recently (relatively speaking, anyway) delivered a nation-state-style attack against one of our public sector customers.  In this particular case our testing was unrestricted, so we were authorized to perform physical, social, and electronic attacks.  We were also allowed to use any techniques and technologies available should we feel the need.  Let's start by explaining that our customer's network houses databases of sensitive information that would be damaging if exposed.  They also have significantly more political weight and authority than most of the other departments in their area.  The reason they were interested in receiving a nation-state-style penetration test is that another department within the same area had come under attack by China.

We began our engagement with reconnaissance.  During this process we learned that the target network was reasonably well locked down from a technical perspective.  The external attack surface provided no services or applications into which we could sink our teeth.  We took a few steps back and found that the supporting networks were equally protected from a technical perspective.  We also detected three points where intrusion prevention was taking place: one at the state level, one at the department level, and one functioning at the host level. (It was because of these protections and the minimalistic attack surface that our client was considered by many to be highly secure.)

When we evaluated our customer from a social perspective, we identified a list of all employees published on a web server hosted by yet another department.  We were able to build micro-dossiers for each employee by comparing employee names and locations to various social media sites and studying their respective accounts.  We quickly constructed a list of which employees were not active within specific social networking sites and lined them up as targets for impersonation, just in case we might need to become someone else.

We also found a large document repository (not the first time we've found something like this) that was hosted by a different department.  Contained within this repository was a treasure trove of Microsoft Office files with their respective change logs and associated usernames.  We quickly realized that this repository would be an easy way into our customer's network.  Not only did it provide a wealth of metadata, but it also provided perfect pretexts for social engineering… and yes, it was open to the public. (It became glaringly apparent to us that our customer had overlooked their social points of risk and exposure and was highly reliant on technology to protect their infrastructure and information.  It was also apparent that our customer was unaware of their own security limitations with regard to technological protections.  Based on what we read in social forums, they thought that IPS/IDS and antivirus technologies provided ample protection.)

We downloaded a Microsoft Office document whose change log indicated that it was being passed back and forth between one of our customer's employees and an employee in a different department.  This was ideal because a trust relationship had already been established and was ripe for the taking.  Specifically, it was "normal" for one party to email this document to the other as a trusted attachment.  That normalcy provided the perfect pretext. (Identifying a good pretext is critically important when planning social engineering attacks.  A pretext is a plausible but false excuse or reason for doing something.  Providing a good pretext to a victim often causes the victim to perform actions that they should not perform.)

Once our document was selected, we had our research team build custom malware specifically for the purpose of embedding it into the Microsoft Office document.  As is the case with all of our malware, we built in automatic self-destruct features and expiration dates that would result in clean deinstallation when triggered.  We also built in covert communication capabilities to help avoid detection. (Building our own malware was not a requirement, but it is something that we do.  Building custom malware ensures that it will not be detected, and it ensures that it will have the right features and functions for a particular engagement.)

Once the malware was built and embedded into the document, we tested it in our lab network.  The malware functioned as designed and quickly established connectivity with its command and control servers.  We should mention that we don't build our malware with the ability to propagate, for liability and control reasons.  We like keeping our infections highly targeted and well within scope.

When we were ready to begin our attack, we fired off a tool designed to generate a high rate of false positives in intrusion detection/prevention systems and web application firewalls alike.  We ran that tool against our customer's network and through hundreds of different proxies to force the appearance of multiple attack sources.  Our goal was to hide any real network activity behind a storm of false alarms. (This demonstrates just how easily intrusion detection and prevention systems can be defeated.  These systems trust network data and generate alerts based on that network data.  When an attacker forges network data, that attacker can also forge false alerts.  Generate enough false alerts and you disable one's ability to effectively monitor the real threat.)

While that tool was running, we forged an email to our customer's employee from the aforementioned trusted source.  That email contained our infected Microsoft Office document as an attachment.  Within less than three minutes our target received the email and opened the infected attachment, thus activating our malware.  Once activated, our malware connected back to its command and control server and we had successfully penetrated our customer's internal IT infrastructure.  We'll call this initial point of penetration T+1.  Shortly after T+1 we terminated our noise generation attack. (The success of our infection demonstrates how ineffective antivirus technologies are at preventing infections from unknown types of malware.  T+1 was using a current and well-respected antivirus solution.  That solution did not interfere with our activities.  The success of our penetration at this point further demonstrates the risk introduced by the human factor.  We did not exploit any technological vulnerability but instead exploited human trust.)

Now that we had access, we needed to ensure that we kept it.  One of the most important aspects of maintaining access is monitoring user activity.  We began taking screenshots of T+1's desktop every 10 seconds.  One of those screenshots showed T+1 forwarding our email to their IT department because they thought that the attachment looked suspicious.  While we were expecting to see rapid incident response, the response never came.  Instead, to our surprise, we received a secondary command and control connection from a newly infected host (T+2). When we began exploring T+2, we quickly realized that it belonged to our customer's head of IT security.  We later learned that he had received the email from T+1 and scanned it with two different antivirus tools.  When the email came back as clean, he opened the attachment and infected his own machine.  Suspecting nothing, he continued to work as normal… and so did we.

Next we began exploring the file system for T+1. When we looked at the directory containing users' batch files, we realized that their Discretionary Access Control Lists (DACLs) granted full permissions to anyone (not just domain users).  More clearly, we could read, write, modify, and delete any of the batch files, and so we decided to exploit this condition to commit a mass compromise.  To make this a reality, our first task was to identify a share that would allow us to store our malware installer.  This share had to be accessible in such a way that when a batch file was run it could read and execute the installer.  As it turned out, the domain controllers were configured with their entire C: drives shared and the same wide-open DACLs.

We located a path where other installation files were stored on one of the domain controllers.  We placed a copy of our malware into that location and named it "infection.exe".  Then, using a custom script, we modified all user batch files to include one line that would run "infection.exe" whenever a user logged into a computer system.  The stage was set…  In no time we started receiving connections back from desktop users, including but not limited to the desktops belonging to each domain admin.  We also received connections back from domain controllers, Exchange servers, file repositories, etc.  Each time someone accessed a system, it became infected and fell under our control.

While exploring the desktops of the domain administrators, we found that one admin kept a directory called "secret".  Within that directory were backups for all of the network devices, including intrusion prevention systems, firewalls, switches, routers, antivirus, etc.  From that same directory we also extracted Visio diagrams complete with IP addresses, system types, departments, employee telephone lists, addresses, emergency contact information, etc.

At this point we decided that our next step would be to crack all passwords.  We dumped passwords from the domain controller and extracted passwords from the backed-up configuration files.  We then fed our hashes to our trusted cracking machine and successfully cracked 100% of the passwords. (We should note that during the engagement we found that most users had not changed their passwords since their accounts had been created.  We later learned that this was a recommendation made by their IT staff.  The recommendation was based on an article that was written by a well-respected security figure.)

With this done, we had achieved an irrecoverable and total infrastructure compromise.  Had we been a true foe, there would have been no effective way to remove our presence from the network.  We achieved all of this without encountering any resistance, without exploiting a single technological vulnerability, without detection (other than the forwarded email, of course), and without the help of China.  China might be hostile, but then again so are we.

Whistleblower Series – Don’t be naive, take the time to read and understand the proposal.

In our last whistleblower article, we showed that the vast majority of Penetration Testing vendors don’t actually sell Penetration Tests. We did this by deconstructing pricing methodologies and combining the results with common sense. We’re about to do the same thing to the industry average Penetration Testing proposal. Only this time we’re not just going to be critical of the vendors, we’re also going to be critical of the buyers.
A proposal is a written offer from seller to buyer that defines what services or products are being sold. When you take your car to the dealer, the dealer gives you a quote for work (the proposal). That proposal always contains an itemized list for parts and labor as well as details on what work needs to be done. That is the right way to build a service-based proposal.

The industry average Network Penetration Testing proposal fails to define the services being offered. Remember, to 'define' something is to state its exact meaning. When we read a network penetration testing proposal and have to ask ourselves "so what is this vendor going to do for us?", then the proposal has clearly failed to define its services.

For example, just recently we reviewed a proposal that talked about "Ethos" and offered optional services called "External Validation" and "External Quarterlies" but completely failed to explain what "External Validation" and "External Quarterlies" were. We also don't really care about "Ethos" because it has nothing to do with the business offering. Moreover, this same proposal absolutely failed to define a methodology and did not provide any insight into how testing would be done. The pricing section was simply a single line item with a dollar value; it wasn't itemized. Sure, the document promised to provide Penetration Testing services, but that's all it really said (sort of).

This is problematic because Penetration Testing is a massively dynamic service that draws on a potentially infinite number of techniques (attacks and tests) for penetration attempts. Some of those techniques are higher threat than others; some are even higher risk than others. If a proposal doesn't define the tests that will be done, how they will be done, what the risks are, etc., then the vendor is free to do whatever they want and call it a day. Most commonly, this means doing the absolute minimum amount of work while making it look like a lot.

Here’s some food for thought…

Imagine that we are a bulletproof vest Penetration Testing Company. It’s our job to test the effectiveness of bulletproof vests for our customers so that they can guarantee the safety of their buyers. We deliver a proposal to a customer that is the same quality as the average Network Penetration Testing proposal and our customer signs the proposal.

A week later, we receive a shipment of vests for testing. We hang those vests on dummies made up of ballistics gel in our firing range. We then take our powerful squirt guns, stand ten feet down range and squirt away. After the test is complete, we evaluate the vests and determine that they were not penetrated and so passed the Penetration Test. Our customer hears the great news and begins selling the vest on the open market.

In the scenario above, both parties are to blame. The customer did not do their job because they failed to validate the proposal, to demand clear definitions, to assess the testing methodology, etc. Instead, they naively trusted the vendor. The vendor failed to meet their ethical responsibilities because they offered a misleading and dishonest service that would do nothing more than promote a false sense of security. In the end, the cost in damages (loss of life) will be significantly higher than the cost of receiving genuine services, and the customer will suffer, as will their own customers.

Unfortunately, this is what is happening with the vast majority of Network Penetration Tests. Vendors are perceived as experts by their customers and are delivering proposals like the one described above. Customers then naively evaluate proposals assuming that all vendors are created equal and make buying decisions based largely on cost. They receive services (usually not a genuine penetration test), put a check in the box, and move on to the next task. In reality, the only thing they've bought is a false sense of security.

How do we avoid this?

While we can't force Network Penetration Testing firms to hold themselves to a higher standard, their customers can. If customers took the time to truly evaluate Network Penetration Testing proposals (or any proposal for that matter), this problem would be eradicated. The question is: do customers really want high-quality testing, or do they just want a check in the box? In our experience, both types of customers exist, but most seem to want a genuine, high-quality service.

Here are a few things that customers can do to hold their Network Penetration Testing vendor to a higher standard.

  • Make sure the engagement is properly scoped (we discussed this in our previous article)
  • Make sure the proposal uses terms that are clearly defined and make sense. For example, we saw a proposal just one week before writing this article that was for “Non-intrusive Network Penetration Testing.” Is it possible to penetrate into something without being intrusive? No.
  • Make sure that the proposal defines terms that are unique to the vendor. For example, the proposal that we mentioned previously talks about “External Quarterlies” but fails to explain what that means. Why are people signing proposals that make them pay for an undefined service? Would you sign it if it had a service called “Goofy Insurance”?
  • Make sure the vendor can explain how they came to the price points reflected in the proposal. Ask them to break it down for you, and remember to read our first article so that you understand the differences between count-based pricing (wrong) and attack-surface-based pricing (right).
  • (We’ll provide more points in the next article).

As the customer, it is up to you to hold a vendor’s feet to the fire (we expect it). When you purchase poor-quality services that are mislabeled as “Penetration Tests,” you are enabling the snake-oil vendors to continue. This is a problem because it confuses those who want to purchase genuine, high-quality services. It makes their job exceedingly difficult and, in some cases, causes people to lose faith in the Network Penetration Testing industry as a whole.

If you feel that what we’ve posted here is inaccurate and can provide facts to prove the inaccuracy, then please let us know. We don’t want to mislead anyone and will happily modify these entries to better reflect the truth.

How to Find a Genuine Penetration Testing Firm

There’s been a theme of dishonesty and thievery in the Penetration Testing industry for as long as we can remember. In much the same way that merchants sold “snake oil” as a cure-all for whatever ails you, Penetration Testing vendors sell one type of service and brand it as another, thus providing little more than a false sense of security. They do this by exploiting their customers’ lack of expertise about penetration testing, and they make off like bandits. We’re going to change the game; we’re going to tell you the truth.

Last week, a new financial services customer approached us. They’d already received three proposals from three other well-known and trusted Penetration Testing vendors. When we began to scope their engagement, we quickly realized that the IP addresses they’d been providing were wrong. Instead of belonging to them, the addresses belonged to an e-commerce business that sold beer-making products! How did we catch this when the other vendors didn’t? Simple: we actually take the time to scope our engagements carefully, because we deliver genuine Penetration Testing services.

Most other penetration testing vendors use what is called count-based pricing, which we think should be a major red flag to anyone. Count-based pricing simply says that you will pay X dollars per IP address for Y IP addresses. If you tell most vendors that you have 10 IP addresses, they’ll come back and quote you around $5,000.00 for a Penetration Test ($500.00 per IP). That type of pricing is not only arbitrary but fraught with serious problems. Moreover, it’s a solid indicator that the services are going to be of very poor quality.
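
To make the problem concrete, here is a minimal sketch of count-based pricing using hypothetical numbers drawn from the example above: the quote depends only on the number of IP addresses, never on how much testable attack surface those addresses actually expose.

```python
# Count-based pricing: the quote is a function of IP count alone.
# The rate and the two environments below are hypothetical examples.
PRICE_PER_IP = 500.00  # dollars per IP address

def count_based_quote(ip_count: int) -> float:
    """Quote an engagement while ignoring the actual attack surface."""
    return PRICE_PER_IP * ip_count

# Two very different environments...
empty_hosts = 10    # 10 IPs exposing no connectable services at all
webapp_hosts = 10   # 10 IPs each hosting a complex web application

# ...receive exactly the same quote.
print(count_based_quote(empty_hosts))   # 5000.0
print(count_based_quote(webapp_hosts))  # 5000.0
```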

Scenario 1: The Overcharge (Too much money for too little work)

If you have 10 IP addresses but none of them are running any connectable services, then there are zero seconds’ worth of work to be done. Do you really want to pay $5,000.00 for zero seconds’ worth of work? Moreover, is it ethical for the vendor to charge you $5,000.00 for testing targets that are not really testable? We don’t think it’s ethical, yet we’ve seen many, many vendors do this very thing.
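
As a rough illustration of how quickly that condition can be checked, the minimal sketch below (hypothetical documentation-range targets, a small sample of ports, and simple TCP-connect checks only) reports how many of the quoted addresses expose any connectable service at all.

```python
import socket

# Hypothetical targets (documentation-range addresses) and a small sample of TCP ports.
TARGETS = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
PORTS = [22, 25, 80, 443, 3389]

def has_connectable_service(host: str, ports: list[int], timeout: float = 2.0) -> bool:
    """Return True if any sampled TCP port on the host accepts a connection."""
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                return True
    return False

testable = [host for host in TARGETS if has_connectable_service(host, PORTS)]
print(f"{len(testable)} of {len(TARGETS)} targets expose a connectable service")
```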

Scenario 2: The Undercharge (Too little money for too much work)

What if those 10 IP addresses were serving up medium-complexity web applications? Let’s assume that each web application would take 100 hours to test, totaling 1,000 hours of testing time (not including the reporting, presentation, etc.). If you do the math, that equates to an absolutely absurd hourly rate of $5.00 per hour for the Penetration Tester! Of course, no penetration tester is going to work for that little money, so what are you really paying $5,000.00 for?

Well, let’s assume that the very-low-end cost of a penetration tester is around $60.00 per hour (it’s actually higher than that). To deliver those 1,000 hours of work for $5,000.00, the human portion would have to shrink to roughly 83 hours, meaning the test would need to be about 91.7% automated (which works out to $60.24 per hour for the remaining manual work). Do you really want to pay $5,000.00 for a project that is more than 90% automated? Moreover, is that even a Penetration Test?
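
To check that arithmetic, here is a minimal sketch using the hypothetical figures above: it computes the naive hourly rate and the fraction of the work that would have to be automated before the remaining manual hours fit inside the $5,000.00 budget.

```python
# Hypothetical figures from the scenario above.
BUDGET = 5_000.00        # total price of the engagement, in dollars
TOTAL_HOURS = 1_000.0    # honest manual testing time for 10 web applications
TESTER_RATE = 60.24      # very-low-end hourly cost of a penetration tester

# Naive hourly rate if every hour were actually worked by a human.
naive_rate = BUDGET / TOTAL_HOURS            # $5.00 per hour

# Manual hours the budget can actually cover at a realistic rate.
affordable_hours = BUDGET / TESTER_RATE      # roughly 83 hours

# Everything beyond that has to be automated away.
automation_fraction = 1 - (affordable_hours / TOTAL_HOURS)

print(f"Naive rate: ${naive_rate:.2f}/hour")
print(f"Required automation: {automation_fraction:.1%}")  # roughly 91.7%
```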

The terms Penetration Test and Vulnerability Scan each have only one correct definition. A Penetration Test is a test designed to identify the presence of points where something can make its way into or through something else. Penetration Tests are not assessments (best guesses) of any kind. A Penetration Test either successfully penetrates or it does not; there is no grey zone, and there are no false positives. (This is why we can guarantee the quality of our penetration testing services.)

A Vulnerability Assessment, on the other hand, is a best guess or an educated guess as to how susceptible something is to risk or harm. Because it is an assessment and not a test, there is room for error (guessing wrong), and so false positives should be expected. A Vulnerability Scan is similar to a Vulnerability Assessment, except that instead of a human doing the guesswork, a computer program (with a much higher margin of error) does it.

So, if roughly 91.7% of a service is based on Vulnerability Scanning, how is it that Penetration Testing vendors can label such a service a Penetration Test? They should call it what it really is: a Vetted Automated Vulnerability Scan. A Vetted Automated Vulnerability Scan is about as effective as Penetration Testing a bulletproof vest with a squirt gun. We’re not sure about you, but we wouldn’t want to wear that vest into battle. These types of services provide little more than a false sense of security.

Back on track…

The question becomes: how should a vendor price their services and build their proposals? While we won’t disclose our methodology here because we don’t want to enable copycats, we will provide you with some insight through an analogy.

Your car breaks down and you call a random mechanic. You tell the mechanic what sort of car you drive and how many miles are on it, but never provide any details as to what happened. The mechanic then quotes you $300.00 to fix your car (without ever really diagnosing it) and $50.00 for a tow. Would you bring your car to that mechanic? How can he afford to fix your car for $300.00 and make a profit? To pull that off, he must have an arsenal of junk parts and gnome slaves working for peanuts, right? Would you really trust the quality of his work? That is count-based pricing.

Fortunately, automobile mechanics (most of them, anyway) are more ethical than that (and gnome slaves don’t really exist). Most of them won’t deliver a quote until after they’ve evaluated your car and successfully diagnosed the problem. Once the problem is diagnosed, they’ll provide you with an itemized quote that includes parts, labor, taxes, and a timeframe for service delivery. They won’t negotiate much on price because, in most cases, you are getting what you pay for.

Genuine Penetration Testing vendors are no different from genuine mechanics. All Penetration Testing vendors should be held to the same standard (including us). What you pay for in services should be a direct reflection of the amount of work that needs to be done, and the workload requirement should be determined through a careful, hands-on assessment. This means that when pricing is done right, there is no room to adjust the price other than to change the workload. Any vendor that offers a lower price if you close before the end of the month is either offering arbitrary pricing or padding their costs.

What if a vendor truly needs to discount services? When we deliver Penetration Testing services, we don’t charge for automation. If we automate 10%, then our services are discounted 10%. If we automate 100%, then our services are delivered to our customers free of charge. (Yes, that’s right, we’ll scan you for free while other vendors might charge thousands of dollars for that.) Why charge for automation when it takes less than 3 minutes of our time to kick off a scan? Just because we can charge for something doesn’t mean that it’s ethically right.
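
Put together, workload-based pricing with an automation discount reduces to a very small calculation. The sketch below uses hypothetical hours, rates, and line items, not our actual methodology: the quote is derived from a hands-on workload estimate, and the automated portion is simply not billed.

```python
# Hypothetical itemized workload (in hours) from a hands-on scoping exercise.
workload_hours = {
    "web application testing": 120.0,
    "network service testing": 40.0,
    "reporting and presentation": 24.0,
}

HOURLY_RATE = 150.00         # hypothetical billing rate, in dollars
automation_fraction = 0.10   # portion of the work handled by automation

# Only the human effort is billed; a 10% automated engagement is discounted 10%,
# and a 100% automated engagement costs nothing.
billable_hours = sum(workload_hours.values()) * (1 - automation_fraction)
quote = billable_hours * HOURLY_RATE

print(f"Billable manual hours: {billable_hours:.1f}")
print(f"Quote: ${quote:,.2f}")
```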

Anyway, this article is one of many to come in our whistleblower series. Please feel free to share it or contact us with any questions.

If you feel that what we’ve posted here is inaccurate and can provide facts to prove the inaccuracy, then please let us know. We don’t want to mislead anyone and will happily modify these entries to better reflect the truth.