The Truth About Breaching Retail Networks

How we breached a retail network using our manual penetration testing methodology

Netragard
We recently delivered an Advanced Persistent Threat (APT) Penetration Test to one of our customers. People who know us know that when we say APT we’re not just using buzzwords. Our APT services maintain a 98% success rate at compromise, while our unrestricted methodology maintains a 100% success rate at compromise to date. (In fact, we offer a challenge to back up our stats: if we don’t penetrate with our unrestricted methodology, then your test is free.) Let’s begin the story about a large retail customer that wanted our APT services.
When we deliver covert engagements we don’t use the everyday and largely ineffective low-and-slow methodology. Instead, we use a realistic offensive methodology that incorporates distributed scanning, custom tools, and zero-day malware (RADON), among other things. We call this methodology Real Time Dynamic Testing™ because it’s delivered in real time and adapts dynamically. At the core of our methodology are components normally reserved for vulnerability research and exploit development. Needless to say, our methodology has teeth.
Our customer (the target) wanted a single /23 attacked during the engagement. The first thing that we did was to perform reconnaissance against the /23 so that we knew what we were up against.  Reconnaissance in this case involved distributed scanning and revealed a large number of http and https services running on 149 live targets.  The majority of the pages were uninteresting and provided static content while a few provided dynamic content.
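The reconnaissance step can be approximated with a few lines of Python. This is a minimal, single-host sketch; the real engagement used distributed scanning from many source addresses, which this does not attempt to reproduce, and the CIDR block shown is a documentation range, not the customer’s.

```python
import ipaddress
import socket

def targets(cidr: str) -> list[str]:
    """Expand a CIDR block into its individual host addresses."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()]

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A /23 holds 510 usable host addresses; probing ports 80 and 443 on
# each would surface the kind of HTTP/HTTPS footprint described above.
hosts = targets("192.0.2.0/23")
web_ports = [80, 443]
```

In practice the probes would be spread across many scanning nodes to stay below rate-based detection thresholds.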
While evaluating the dynamic pages we came across one called Make Boss. The application appeared to be custom-built for the purpose of managing software builds. What really snagged our attention was that it didn’t support any sort of authentication. Instead, anyone who visited the page could use the application.
We quickly noticed that the application allowed us to generate new projects. Then we noticed that we could point those new projects at any SVN or Git repository, local or remote. We also identified a hidden, questionable page named “list-dir.php” that enabled us to list the contents of any directory that the web server had permission to access.
We used “list-dir.php” to enumerate local users by guessing the contents of “C:\document~1” (the Documents and Settings folder). In doing so we identified useful directories like “C:\MakeBoss\Source” and “C:\MakeBoss\Compiled”. The existence of these directories told us that projects were built on and fetched from the same server.
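A probe of the directory-listing page might have looked something like the sketch below. The parameter name (`dir`) and the target hostname are assumptions; the post doesn’t say how list-dir.php took its input.

```python
from urllib.parse import urlencode
from urllib.request import urlopen  # would perform the actual fetch

def listdir_url(base: str, path: str) -> str:
    """Build a request URL for the hypothetical list-dir.php endpoint."""
    return f"{base}/list-dir.php?{urlencode({'dir': path})}"

# Guessed paths, as described above; "document~1" is the post's
# 8.3-style short name for the Documents and Settings folder.
candidates = [r"C:\document~1", r"C:\MakeBoss\Source", r"C:\MakeBoss\Compiled"]
urls = [listdir_url("http://target.example", p) for p in candidates]
# Each URL would then be fetched and the returned listing parsed:
#   body = urlopen(url).read()
```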
The next step was to see if we could in fact get the Make Boss application to establish a connection with a repository that we controlled. To do this we set up an external listener using netcat at our lab. Then we configured a new project called “_Netragard” in Make Boss in such a way that it would connect to our listener. The test was a success, as shown by the redacted output below.

[[email protected]:~]$ nc -l -p 8888 -v
listening on [any] 8888 …
xx.xx.xx.xx: inverse host lookup failed: Unknown server error : Connection timed out
connect to [xx.xx.xx.xx] from (UNKNOWN) [xx.xx.xx.xx] 1028
OPTIONS / HTTP/1.1
Host: lab1.netragard.com:8888
User-Agent: SVN/1.6.4 (r38063) neon/0.28.2
Keep-Alive:
Connection: TE, Keep-Alive
TE: trailers
Content-Type: text/xml
Accept-Encoding: gzip
DAV: http://subversion.tigris.org/xmlns/dav/svn/depth
DAV: http://subversion.tigris.org/xmlns/dav/svn/mergeinfo
DAV: http://subversion.tigris.org/xmlns/dav/svn/log-revprops
Content-Length: 104
Accept-Encoding: gzip
 
<?xml version="1.0" encoding="utf-8"?><D:options xmlns:D="DAV:"><D:activity-collection-set/></D:options>

With communications verified, we set up a real instance of SVN and created a weaponized build.bat file. We selected build.bat because we knew that Make Boss would execute it server-side, and if done right we could use it to infect the system. (A good reference for setting up Subversion can be found at http://subversion.apache.org/quick-start.) Our initial attempts at getting execution failed due to file system permissions. We managed to get successful execution of our build.bat by changing our target directory to “C:\TEMP” rather than working from the standard webserver directories.
With execution capabilities verified we modified our build.bat file so that it would deploy RADON (our home-grown 0-day pseudo-malware).  We used Make Boss to fetch and run our weaponized build.bat, which in turn infected the server running the Make Boss application.  Within seconds of infection our Command & Control server received a connection from the Make Boss server.  This represented our first point of penetration.
A note about RADON…
RADON is “safe” as far as malware goes because each strand is built with a pre-defined expiration date. During this engagement RADON was set to expire 5 days after strand generation. When RADON expires it quietly and cleanly self-destructs, leaving the infected system in its original state, which is more than can be said for other “whitehat” frameworks (like Metasploit, etc.).
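As a rough illustration of the expiration behavior described, consider the sketch below. Everything here is hypothetical (the build time, the lifetime, and the cleanup hook); it only shows the shape of a time-boxed self-destruct.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical values that would be baked in at strand generation time.
BUILD_TIME = datetime(2014, 6, 1, tzinfo=timezone.utc)
LIFETIME = timedelta(days=5)

def expired(now: datetime) -> bool:
    """True once the strand has outlived its pre-set lifetime."""
    return now >= BUILD_TIME + LIFETIME

def tick(now: datetime) -> str:
    # On expiry a real implant would remove its artifacts and exit,
    # leaving the host in its pre-infection state; here we just signal.
    return "self-destruct" if expired(now) else "operate"
```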
RADON is also unique in that it’s designed for our highest-threat engagements (nation-state style). By design RADON will communicate over both known and unknown covert channels. Known channels are used for normal operation while covert channels are used for more specialized engagements. All variants of RADON can be switched from known to covert channels and vice versa from the Command & Control server.
Finally, it’s almost impossible to disrupt communication between RADON and its Command & Control center.  This is in part because of the way that RADON leverages key protocols that all networks depend on to operate.  Because of this, disrupting RADON’s covert channels would also disrupt all network functionality.
Back to the hack…
With the system infected by RADON we were able to take administrative control of the Make Boss server. From there we identified domain administrator credentials that the server was happy to relinquish. We used those credentials to authenticate to the domain controller and extract all current and historical password hashes. Then we used one of our specialized GPU password-cracking machines to process the hashes and deliver the keys to the kingdom.
With that accomplished we had established dominant network position. From this position we were able to propagate RADON to all endpoints and effect an irrecoverable network compromise. Irrecoverable if we were the bad guys, of course, but luckily we’re the good guys and our customer recovered just fine. Nevertheless, we had access to everything, including but not limited to desktops, points of sale, web servers, databases, network devices, etc.
Not surprisingly, our customer’s managed security service provider didn’t detect any of our activity, not even the mass infection. They did however detect what we did next…
As a last step and to satisfy our customer we ran two different popular vulnerability scanners.  These are the same scanners that most penetration testing vendors rely on to deliver their services.  One of the scanners is more network centric and the other combines network and web application scanning.  Neither of the scanners identified a single viable vulnerability despite the existence of the (blatantly obvious) one that we exploited above.  The only things that were reported were informational findings like “port 80 open”, “deprecated SSL”, etc.
It’s really important to consider this when thinking about the breaches suffered by businesses like Hannaford, Sony, Target, Home Depot, and so many others. If the penetration tests that you receive are based on the product of vulnerability scanners, and those scanners fail to detect the most obvious vulnerabilities, then where does that leave you? Don’t be fooled by testers that promise to deliver “manual penetration tests” either. In most cases they just vet scan reports and call the process of vetting “manual testing”, which it isn’t.

What you don’t know about compliance…

People are always mystified by how hackers break into major networks like Target, Hannaford, Sony, (government networks included), etc.  They always seem to be under the impression that hackers have some elite level of skill.  The truth is that it doesn’t take any skill to break into most networks because they aren’t actually protected. Most network owners don’t care about security because they don’t perceive the threat as real.  They suffer from the “it won’t ever happen to me” syndrome.
As a genuine penetration testing company we take on dozens of new opportunities per month. Amazingly, roughly 80% of businesses that request services don’t want quality security testing; they want a simple check in the compliance box. They perceive quality security testing as an unnecessary and costly annoyance that stands in the way of new revenue. These businesses test because they are required to, not because they want to. These requirements stem from partners, customers, and regulations that include but are not limited to PCI-DSS, HIPAA, etc.
Unfortunately these requirements make the problem worse rather than better. For example, while PCI requires merchants to receive penetration tests, it completely fails to provide any effective or realistic baseline against which to measure the test results. This is also true of HIPAA and other third-party testing requirements. To put this into perspective, if the National Institute of Justice set their V50 or V0 standards in the same manner, then it would be adequate and acceptable to test bulletproof vests with squirt guns. Some might argue that poor testing is better than nothing, but we’d disagree. Testing at less than realistic levels of threat does nothing to prevent the real threat from penetrating.
Shoddy testing requirements and a general false sense of security have combined to create a market where check-in-the-box needs take priority over genuine security. Vendors that sell into this market compete on cost, free service add-ons, and free software licenses rather than quality of service and team capability, and they price illogically based on IP count. Most testing vendors exacerbate the problem by falsely advertising compliance testing (check-in-the-box) services as best quality. This creates and perpetuates a false sense of security among non-expert customers and also lures in customers who have a genuine security need.
The dangers associated with this are evidenced by the many businesses that have suffered damaging compromises despite the fact that they are in compliance with various regulations.  The recent Target breach (certified as PCI compliant by Trustwave) is just one high-profile example.  Target’s former CEO, Gregg Steinhafel was quoted saying “Target was certified as meeting the standard for the payment card industry (PCI) in September 2013. Nonetheless, we suffered a data breach”.  Another high-profile example is the Hannaford breach (Rapid7’s customer at the time) back in 2008.  Hannaford, like Target, claims that they too were PCI compliant.
It’s our responsibility as security experts to deliver truth to our customers rather than to bank on their lack of expertise. Sure, we’re in this to make money, but we also have an ethical responsibility. If we take the time to educate our customers about the differences between compliance testing and genuine penetration testing and they still select compliance testing, then that’s fine (it’s their risk). But if we lie to our customers and sell them compliance testing while asserting that it’s best in class, then we should be held responsible. After all, it’s our job to protect people, isn’t it?
The irony is that compliance testing typically costs more than genuine penetration testing because it uses an arbitrary count-based pricing methodology. Specifically, if a customer has 10 IP addresses but only 1 of those IP addresses is live, the customer will still be billed for testing all 10 IP addresses. Genuine penetration testing costs less because it uses an Attack Surface Pricing (ASP) methodology. If a customer has 10 IP addresses and only one is live, ASP will identify that and the customer will only be charged for that 1 IP. Moreover, the customer will be charged based on the complexity of the services provided by that one IP.
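The difference between the two pricing models reduces to simple arithmetic. The rates and per-service figures below are hypothetical; they just work through the 10-address example above.

```python
def count_based_price(total_ips: int, rate_per_ip: float) -> float:
    """Count-based pricing: every address is billed, live or not."""
    return total_ips * rate_per_ip

def attack_surface_price(service_prices: list[float]) -> float:
    """Attack Surface Pricing: bill only live services, each priced
    according to its complexity (the per-service figures are inputs)."""
    return sum(service_prices)

# 10 addresses, 1 live host running a single moderately complex service:
count_quote = count_based_price(10, 500.00)   # bills all 10 addresses
asp_quote = attack_surface_price([1200.00])   # bills the 1 live service only
```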
If the Return on Investment (RoI) of good security is equal to the cost in damages of a single successful compromise and if quality penetration testing services cost less (on average) than compliance testing services, doesn’t it make sense to purchase quality penetration testing services?
 

The Truth About PCI Compliance. What They Don’t Want You To Know

All of the recent news about Target, Neiman Marcus, and other businesses being hacked might be a surprise to many, but it’s no surprise to us. The truth is that the practice of security has devolved into a political, image-focused exercise designed to satisfy technically inept regulatory requirements that do little or nothing to protect critical business assets. What’s worse is that many security companies are capitalizing on this devolution rather than providing effective solutions in the spirit of good security. This is especially true with regards to the penetration testing industry.

We all know that money is the lifeblood of business and that a failure to meet regulatory requirements threatens that lifeblood. After all, when a business is not in compliance it runs the risk of being fined or not being allowed to operate. In addition, the expenses associated with true security are often perceived as a financial burden (another lifeblood threat). This is usually because the RoI of good security is only apparent when a would-be compromise is prevented. Too many business managers are of the opinion that “it won’t happen to us” until they become a target and it does. Combined, these views degrade the overall importance of real security and make the satisfaction of regulatory requirements the top priority. This is unfortunate given that compliance often has little to do with actual security.

Most regulatory requirements are so poorly defined that they can be satisfied with the most basic solution. For example, PCI-DSS requires merchants to undergo regular penetration tests and yet completely fails to define the minimum level of threat (almost synonymous with quality) at which those tests should be delivered. This lack of clear definition gives business owners the ability to satisfy compliance with the cheapest, most basic testing services. To put this into perspective, if the standards used to test bulletproof vests (NIJ and HOSDB test methods) were replaced by PCI-DSS, then bulletproof vest testing could be satisfied with a squirt gun.

These substandard regulatory requirements, combined with business owners lacking true security expertise, have formed a market where exceedingly low-quality, low-threat, easy-to-deliver security-testing services are in high demand. This market has been answered by a horde of self-proclaimed security experts that in almost all cases are little more than marginally capable script-kids, yet they inaccurately market their services as best in class. Take away their third-party tools (scripts, scanners, Metasploit, etc.) and those vendors will be dead in the water. Take the tools away from a bona fide researcher or hacker and they’ll write new tools and then hack you with a vengeance.

The saturation of the penetration testing industry with charlatans makes the process of identifying a quality vendor difficult for business managers that actually care about security. In many cases the consumer is a non-technical (or non-security-expert) buyer who is not able to truly assess the technical capabilities of the vendor. As a result they often make buy decisions based on the non-technical exploration of things like the number of customers serviced, annual revenue, size of company, etc. While these are important factors when evaluating any business, they are by no means a measure of service quality and testing vendor capability. With regards to penetration testing services, quality of service is of the utmost importance and it is a highly technical matter. This is why we wrote a guide to vendor selection that sets a standard of quality and was featured on Forbes.

It is unfortunate that most business owners don’t seem to operate in the spirit of good security but instead operate with revenue-focused tunnel vision. The irony of this is that the cost of a quality penetration test is equal to a small fraction of the cost of a single successful compromise. For example, in 2011 Sony suffered a compromise that resulted in over 170 million dollars in damages (not including fines). This compromise was the result of the exploitation of a basic SQL Injection vulnerability in a web server (like Target). The average cost of Netragard’s web application penetration testing services in 2013 was $14,000.00, and our services would have detected the basic SQL Injection vulnerability that cost Sony so much money. Perhaps it’s time to rethink the value of genuine penetration testing? Clearly genuine penetration testing has a positive revenue impact through prevention. Specifically, the Return on Investment of a genuine penetration test is equal to the cost in damages of a single successful compromise.

So what of Target?
We know that Target was initially compromised through the exploitation of a vulnerability in one of their web servers (just like Sony and so many others). This vulnerability went unidentified for some time, even after the initial malicious compromise. Why did malicious hackers find this vulnerability before Target? Why was Target unaware of their existing paths to both compromise and data exfiltration? Who determined that Target was PCI compliant when PCI specifically requires environmental segregation and Target’s environment was clearly not properly segregated?

A path to compromise is the path that an attacker must take in order to access sensitive information. In the case of Target this information was cardholder data. The attackers had no issue exploiting a path to compromise and propagating their attack from their initial point of compromise to the Point of Sale terminals. The existence of the path to compromise should have resulted in a PCI failure; why didn’t it?

A path for data exfiltration is the method that an attacker uses to extract data from a compromised network. In the case of Target the attackers were able to extract a large amount of information before any sort of preventative response could be taken. This demonstrates that a path for data exfiltration existed and may still exist today. As with the path to compromise, the path for data exfiltration should have resulted in a PCI failure; why didn’t it?

We also know that Target’s own security monitoring capabilities were (and probably still are) terrible. Based on a Krebs on Security article, the hackers first uploaded malware to Target’s points of sale. Then they configured a control server on Target’s internal network to which the malware would report cardholder data. Then the hackers logged in to the control server remotely and repeatedly to download the stolen cardholder data. Why and how did Target fail to detect this activity in time to prevent the incident?
If we use the little information that we have about Target’s compromise as a lightweight penetration testing report, we can provide some generic, high-level methods for remediation. What we’re suggesting here is hardly a complete solution (because the full details aren’t known) but it’s good advice nonetheless.

    • Deploy a web application firewall (ModSecurity or something similar). This would address the issue of the web application compromise (in theory). If one is already deployed then it’s in need of a serious reconfiguration, and whoever is charged with its monitoring and upkeep should be either trained properly or replaced (sorry). Most web application vulnerabilities are not exploited on the first try, and their exploitation often generates significant noise. That noise likely went without notice in the case of Target.

 

    • Deploy a host-based monitoring solution on public-facing servers. This would further address the issue of web server security and would help to prevent distributed metastasis. For companies that want an open source solution we’d suggest something like OSSEC with the appropriate active response configurations. For example, tuning OSSEC to monitor and react to system logs, web application firewall incidents, network intrusion incidents, etc. can be highly effective, and it’s free. OSSEC can also be used to build blacklists of IP addresses that continually exhibit hostile behavior.

 

    • Deploy a network intrusion prevention solution (like Snort) on the internal network at the demarcation points of the cardholder environment. If done properly this would help to block the path to compromise and the path for data exfiltration. This solution should be tuned with custom rules to watch for any indication of cardholder data being transmitted out over the network. It should also be tuned to watch for anomalous connections that might be the product of rootkits, etc. In the event that something is detected it should respond in a way that ensures that the affected sources and targets are isolated from the rest of the network.
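The kind of custom exfiltration rule described in this bullet boils down to pattern-matching candidate card numbers in outbound payloads and discarding false positives with the Luhn checksum. Below is a minimal sketch of that detection logic; a real IDS rule would of course be written in the IDS’s own rule language.

```python
import re

# Runs of 13-16 digits, optionally separated by spaces or dashes.
PAN_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum, used to weed out random digit runs."""
    digits = [int(d) for d in number[::-1]]
    total = sum(digits[0::2])
    total += sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_pans(payload: str) -> list[str]:
    """Return candidate card numbers found in a traffic payload."""
    hits = []
    for m in PAN_RE.finditer(payload):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits
```

A match at the cardholder-environment demarcation point would then trigger the isolation response described above.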

 

    • Deploy a second host-based solution like Bit9 on the Points of Sale (PoS), assuming it will run on them. (No, we’re not a reseller and have no affiliation with Bit9.) This will address the issue of malware being deployed on the points of sale and used to steal credit card information, especially malware first created in 2013 (BlackPOS).

 

    • Hire a real penetration testing vendor. Given what we know about this compromise, Target hasn’t been selecting penetration testing vendors using the right criteria. Any genuine penetration testing vendor that delivers quality services at realistic levels of threat would have identified many of these issues. In fact, it’s fair to say that following the methods for remediation that come from a genuine penetration test would have prevented the compromise of cardholder data.

Finally, who do you think did a better job at testing, the hackers that compromised Target or the Penetration Testing firm that said that they were PCI Compliant?

 

How to Price a Penetration Test

How Much Should You Spend On Penetration Testing Services?

The most common question we’re asked is “how much will it cost for you to deliver a penetration test to us?”. Rather than responding each time with the same answer about penetration testing cost, we thought it might be best to write a detailed yet simple blog entry on the subject.

This video provides an overview of the two most common methodologies for determining penetration testing cost.

Workload and Penetration Testing Cost

The price for a genuine penetration test is based on the amount of human work required to successfully deliver the test. The amount of human work depends on the complexity of the infrastructure to be tested. The infrastructure’s complexity depends on the configuration of each individual network connected device. A network connected device is anything, including but not limited to servers, switches, firewalls, telephones, etc. Each unique network connected device provides different services that serve different purposes, and because each service is different, each requires a different amount of time to test correctly. It is for this exact reason that a genuine penetration test cannot be priced based on the number of IP addresses or number of devices. It does not make sense to charge $X per IP address when each IP address requires a different amount of work to test properly. Instead, the only correct way to price a genuine penetration test is to assess the time requirements and from there derive workload.

At Netragard the workload for an engagement is based on science and not an arbitrary price per IP. Our pricing is based on something that we call Time Per Parameter (TPP). The TPP is the amount of time that a Netragard researcher will spend testing each parameter. A parameter is either a service being provided by a network connected device or a testable variable within a web application. Higher-threat penetration tests have a higher TPP, while more basic penetration tests have a lower TPP. This makes sense: the more time we spend trying to hack something, the higher the chances of success. Netragard’s base LEVEL 1 penetration test is our most simple offering and allows for a TPP of 5 minutes. Our LEVEL 2 penetration test is far more advanced than LEVEL 1 and allows for a TPP of up to 35 minutes. Our LEVEL 3 penetration test is possibly the most advanced threat penetration test offered in the industry and is designed to produce a true nation-state level threat (not that APT junk). Our LEVEL 3 penetration test has no limit on TPP or on offensive capabilities.
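Under TPP pricing, cost falls out of simple arithmetic: parameters × time per parameter × an hourly rate. The function below is only an illustration with made-up figures, not Netragard’s actual formula (which the post says is not public).

```python
def engagement_cost(parameters: int, tpp_minutes: float,
                    hourly_rate: float) -> float:
    """Workload-based price: total tester time drives the cost."""
    hours = parameters * tpp_minutes / 60.0
    return hours * hourly_rate

# 120 testable parameters at a hypothetical $200/hour:
level1 = engagement_cost(120, 5, 200.00)   # LEVEL 1: TPP of 5 minutes
level2 = engagement_cost(120, 35, 200.00)  # LEVEL 2: TPP up to 35 minutes
```

The same parameter count priced at a higher TPP yields a proportionally higher quote, which is exactly the time-versus-quality trade-off described above.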

The details of the methodology that we use to calculate TPP are something that we share with our customers but not our competitors (sorry, guys). What we will tell you is that the count-based pricing methodology used by our competition is a far cry from our TPP-based pricing. Here’s one example of how our pricing methodology saved one of our customers $49,000.00.

We were recently competing for a Penetration Testing engagement for a foreign government department.  This department received a quote for a Penetration Test from another penetration testing vendor that also created software used by penetration testers. When we asked the department how much money the competitive quote came in at they told us roughly $70,000.00. When we asked them if that price was within their budget they said yes.  Our last question was about the competitive pricing methodology. We asked the department “did the competitor price based on how many IP addresses you have or did they do a detailed workload assessment?”.  The department told us that they priced based on the number of IP addresses that they had and that the number was 64.

At that moment we understood that we were competing against a vendor that was offering a Vetted Vulnerability Scan and not a Genuine Penetration Test. If a vendor prices an engagement based on the number of IP addresses involved, then that vendor is not taking actual workload into consideration. For example, a vendor that charges $500.00 per IP address for 10 IP addresses would price the engagement at $5,000.00. What happens if those 10 IP addresses require 1,000 man-hours of work to test because they are exceedingly complex? Will the vendor really find a penetration tester to work for $5.00 an hour? Of course not. The vendor will instead deliver a Vetted Vulnerability Scan and call it a Penetration Test. They will scan the 10 IP addresses, vet the results produced by the scanner, exploit things where possible, and then produce a report. Moreover, they will call the process of vetting “manual testing”, which is a blatant lie. Any vendor that does not properly evaluate workload requirements must use a Vetted Vulnerability Scan methodology to avoid running financially negative on the project.

The inverse of this (which is far more common) is what happened with the foreign government department. While our competitor priced the engagement at $1,093.75 per IP for 64 IPs, which equates to $70,000.00, we priced at $21,000.00 for the 11 IPs that were actually live (each of which offered between 2 and 6 moderately complex Internet-connectable services). More clearly, our competitor wanted to charge the department $57,968.75 for testing 53 IP addresses that were not in use, which equates to charging for absolutely nothing! When we presented our pricing to the department we broke our costs down to show the exact price that we were charging per Internet-connectable service. Needless to say, the customer was impressed by our pricing and shocked by our competitor; we won the deal.
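The arithmetic behind that comparison is worth spelling out:

```python
competitor_quote = 70_000.00
total_ips = 64
live_ips = 11

rate_per_ip = competitor_quote / total_ips  # $1,093.75 per address
dead_ips = total_ips - live_ips             # 53 addresses with nothing on them
wasted = dead_ips * rate_per_ip             # $57,968.75 billed for no work
```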

While we wish that we could tell you that being charged for nothing is a rare occurrence, it isn’t. If you’ve received a penetration test then you’ve probably been charged for nothing. Another recent example involves a small company that needed penetration testing for PCI. They approached us saying that they had already received quotes from other vendors, all in the thousands of dollars. We explained that we would evaluate their network and determine workload requirements. When we did, we found that they had zero responding IP addresses and zero Internet-connectable services, which equates to zero seconds of work. Instead of charging them for anything, we simply issued them a certificate stating that as of the date of testing no attack surface was present. They were so surprised by our honesty that they wrote us this awesome testimonial about their experience with us.

Finally, our TPP-based pricing doesn’t need to be expensive. In fact, we can deliver a penetration test to any customer with any budget, because we will adjust the engagement’s TPP to match. If your budget only allows for a $10,000.00 spend, then we will reduce the TPP to bring the project cost in line with your budgetary requirements. Just remember that reducing the TPP means that our penetration testers will spend less time testing each parameter, and increasing the TPP means that they will spend more. The more the time, the higher the quality. If we set your TPP at 10 minutes but we encounter services that only require a few seconds to test, then we will allocate the leftover time to other services that require more time. Doing this ensures that complex services are tested very thoroughly.


Selling zero-days doesn’t increase your risk; here’s why.

The zero-day exploit market is secretive. People as a whole tend to fear what they don’t understand and substitute speculation for fact. While very few facts about the zero-day exploit market are publicly available, many facts about zero-days themselves are. When those facts are studied it becomes clear that the legitimate zero-day exploit market presents an immeasurably small risk (if any), especially when viewed in contrast with known risks.

Many news outlets, technical reporters, freedom-of-information supporters, and even security experts have used the zero-day exploit market to generate Fear, Uncertainty, and Doubt (FUD). While the concept of a zero-day exploit seems ominous, the reality is far less menacing. People should be significantly more worried about vulnerabilities that exist in the public domain than about those that are zero-day. The misrepresentations about the zero-day market create a dangerous distraction from the very real issues at hand.

One of the most common misrepresentations is that the zero-day exploit market plays a major role in the creation of malware and in malware’s ability to spread. Not only is this categorically untrue, but the Microsoft Security Intelligence Report (SIRv11) provides clear statistics showing that malware almost never uses zero-day exploits. According to SIRv11, less than 6% of malware infections are attributed to the exploitation of vulnerabilities at all. Of those successful infections, nearly all target known rather than zero-day vulnerabilities.

Malware targets and exploits gullibility far more frequently than technical vulnerabilities. The “ILOVEYOU” worm is a prime example. The worm would email itself to a victim with a subject of “I LOVE YOU” and an attachment titled “LOVE-LETTER-FOR-YOU.txt.vbs”. The attachment was actually a copy of the worm. When a person attempted to read the attachment they would inadvertently run the copy and infect their own computer. Once infected, the worm would begin the process again and email copies of itself to the first 50 email addresses in the victim’s address book. This technique of exploiting gullibility was so successful that in the first 10 days over 50 million infections were reported. Had people spent more time educating each other about the risks of socially augmented technical attacks, the impact may have been significantly reduced.

The Morris worm is an example of a worm that did exploit zero-day vulnerabilities to help its spread.  The Morris worm was created in 1988 and proliferated by exploiting multiple zero-day vulnerabilities in various Internet-connected services.  The worm was not intended to be malicious, but ironically a design flaw caused it to malfunction, resulting in a Denial of Service condition on infected systems.  The Morris worm existed well before the zero-day exploit market was even a thought, proving that both malware and zero-day exploits will exist with or without the market.  In fact, there is no evidence of any relationship between the legitimate zero-day exploit market and the creation of malware; there is only speculation.

Despite these facts, prominent security personalities have argued that the zero-day exploit market keeps people at risk by preventing the public disclosure of zero-day vulnerabilities. Bruce Schneier wrote, “a disclosed vulnerability is one that – at least in most cases – is patched”.  His opinion is both assumptive and erroneous, yet it is shared by a large number of security professionals.  The reality is that when a vulnerability is disclosed it is unveiled to both ethical and malicious parties, and those responsible for applying patches rarely respond as quickly as those with malicious intent.

According to SIRv11, 99.88% of all compromises were attributed to the exploitation of known (publicly disclosed) rather than zero-day vulnerabilities.  Of those vulnerabilities, over 90% had been known for more than one year. Only 0.12% of reported compromises were attributed to the exploitation of zero-day vulnerabilities. Without the practice of public disclosure, or with the responsible application of patches, the number of compromises identified in SIRv11 would have been significantly lower.

The Verizon 2012 Data Breach Investigations Report (DBIR) also provides some interesting insight into compromises.  According to DBIR 97% of breaches were avoidable through simple or intermediate controls (known / detectable vulnerabilities, etc.), 92% were discovered by a third party and 85% took two weeks or more to discover. These statistics further demonstrate that networks are not being managed responsibly. People, and not the legitimate zero-day exploit market, are keeping themselves at risk by failing to responsibly address known vulnerabilities.  A focus on zero-day defense is an unnecessary distraction for most.

Another issue is the notion that security researchers should give their work away for free.  Initially it was risky for researchers to notify vendors about security flaws in their technology.  Some vendors attempted to quash the findings with legal threats, and others treated researchers with such hostility that it drove them to the black market.  Some vendors remain hostile even today, but most will happily accept a researcher’s hard work provided that it’s delivered free of charge.  To us, that notion is absurd.

Programs like ZDI and what was once iDefense (acquired by VeriSign) offer relatively small bounties to researchers who provide vulnerability information.  When a new vulnerability is reported, these programs notify their paying subscribers well in advance of the general public.  They do make it a point to work with the manufacturer to close the hole, but only after they’ve collected their bounty.  Once the vendors have been notified (and ideally a fix created), public disclosure ensues in the form of an email-based security advisory sent to various mailing lists.  At that point, those who have not applied the fix are at a significantly increased level of risk.

Companies like Google and Microsoft are stellar examples of what software vendors should do with regard to vulnerability bounty programs.  Their programs motivate the research community to find and report vulnerabilities back to the vendor.  The existence of these programs is a testament to how seriously both Google and Microsoft take product security. Although these companies (and possibly others) are moving in the right direction, they still have to compete with prices offered by other legitimate zero-day buyers.  In some cases the prices offered are as much as 50% higher.

Netragard is one of those entities. We operate the Exploit Acquisition Program (EAP), which was established in early 2000 as a way to provide ethical security researchers with top dollar for their work product. In 2011 Netragard’s minimum acquisition price (payment to researcher) was $20,000.00, which is significantly greater than the minimum payout from most other programs.  Netragard’s EAP buyer information, as with any business’ customer information, is kept in the highest confidence.  Netragard’s EAP does not practice public vulnerability disclosure for the reasons cited above.

Unlike VUPEN, Netragard will only sell its exploits to US-based buyers under contract.  This decision was made to prevent the accidental sale of zero-day exploits to potentially hostile third parties and to prevent any distribution to the Black Market.  Netragard also welcomes the exclusive sale of vulnerability information to software vendors who wish to fix their own products.  Despite this, not one vendor has approached Netragard with the intent to purchase vulnerability information.  This seems to indicate that most software vendors are still more focused on revenue than on end-user security.  This is unfortunate because software vendors are the source of the vulnerabilities.

Most software vendors do not hire developers who are truly proficient at writing safe code (the proof is in the statistics). Additionally, very few software vendors have genuine security testing incorporated into their Quality Assurance process.  As a result, software vendors literally (and usually accidentally) create the vulnerabilities that are exploited by hackers and used to compromise their customers’ networks. Yet software vendors continue to inaccurately tout their software as secure when in fact it isn’t.

If software vendors begin to produce truly secure software, the zero-day exploit market will cease to exist or be forced to transform dramatically. Malware, however, would continue to thrive because it is not exploit dependent.  We are hopeful that Google and Microsoft will be trend setters and that other software vendors will follow suit.  Finally, we are hopeful that people will do their own research about the zero-day exploit market instead of blindly trusting the largely speculative articles that have been published recently.

Netragard on Exploit Brokering

Historically, ethical researchers would provide their findings free of charge to software vendors for little more than a mention.  In some cases vendors would react by threatening legal action, citing violations of poorly written copyright laws including but not limited to the DMCA.  To put this into perspective, this is akin to threatening legal action against a driver for pointing out that the brakes on a school bus are about to fail. This unfriendliness (among various other things) caused some researchers to withdraw from the practice of full disclosure. Why risk doing a vendor the favor of free work when the vendor might try to sue you?

Organizations like CERT help to reduce or eliminate the risk to security researchers who wish to disclose vulnerabilities.  These organizations work as mediators between researchers and vendors to ensure safety for both parties.  Other organizations like iDefense and ZDI also work as middlemen but, unlike CERT, earn a profit from the vulnerabilities that they purchase. While they may pay a security researcher an average of $500-$5,000 per vulnerability, they charge their customers significantly more for their early warning services.  It’s also unclear (to us anyway) how quickly they notify vendors of the vulnerabilities that they buy.

The next level of exploit buyers are the brokers.  Exploit brokers may cater to one or more of three markets: National, International, or Black.  While Netragard’s program only sells to National buyers, companies like VUPEN sell internationally.  Also unlike VUPEN, Netragard will sell exploits to software vendors willing to engage in an exclusive sale.  Netragard’s Exploit Acquisition Program was created to provide ethical researchers with the ability to receive fair pay for their hard work; it was not created to keep vulnerable software vulnerable.  Our bidding starts at $10,000 per exploit and goes up from there.
It’s important to understand what a computer exploit is and is not.  It is a tool or technique that makes full use of, and derives benefit from, vulnerable computer software.  It is not malware, despite the fact that malware may contain methods for exploitation.  The software vulnerabilities that exploits make use of are created by software vendors during the development process.  The idea that security researchers create vulnerabilities is absurd.  Instead, security researchers study software and find the already existing flaws.

The behavior of an exploit with regard to malevolence or benevolence is defined by the user, not the tool.  Buying an exploit is much like buying a hammer: both can be used to do something constructive or destructive.  For this reason it’s critically important that any ethical exploit broker thoroughly vet their customers before selling an exploit.  Any broker that does not thoroughly vet their customers is operating irresponsibly. What our customers do with the exploits that they buy is none of our business, just as what you do with your laptop is none of its vendor’s business.

That being said, any computer system is far more dangerous than any exploit.  An exploit can only target one very specific thing in a very specific way, and it has a limited shelf life. It is not uncommon for vulnerabilities to be accidentally fixed, rendering a 0-day exploit useless.  A laptop, on the other hand, has an average shelf life of 3 years and can attack anything that’s connected to a network.  In either case, it’s not the laptop or the exploit that represents danger; it’s the intent of its user.

Finally, most of the concerns about malware, spyware, etc. are not only unfounded and unrealistic but absurd.  Consider that businesses like VUPEN want to prevent vendors from fixing vulnerabilities.  If VUPEN were to provide an exploit to a customer for the purpose of creating malware, that would guarantee the death of the exploit.  Specifically, when malware spreads, antivirus companies capture and study it.  They would most certainly identify the method of propagation (the exploit), which in turn would result in the vendor fixing the vulnerability.

Why DISSECTING THE HACK: The F0rb1dd3n Network was written. By: Jayson E. Street

Note: This blog entry was written by Jayson E. Street and published on his behalf.

The consumer, the corporate executive, and the government official. Regardless of your perspective, DISSECTING THE HACK: The F0rb1dd3n Network was written to illustrate the issues of Information Security through story. We all tell stories. In fact, we do our best communicating through stories. This book illustrates how very real twenty-first century threats are woven into the daily lives of people in different walks of life.

Three kids in Houston, Texas. A mid-level Swiss businessman traveling abroad. A technical support worker with a gambling problem. An international criminal who will do anything for a profit (and maybe other motives). FBI agents trying to unravel a dangerous puzzle. A widower-engineer just trying to survive. These are just some of the lives brought together in a story of espionage, friendship, puzzles, hacks, and more. Every attack is real. We even tell you how some of these attacks are done. And we tell you how to defend against them as well.

DISSECTING THE HACK: The F0rb1dd3n Network is a two-part work. The first half is a story that can be read by itself. The second half is a technical reference work that can also be read alone. But together, each provides texture and context for the other. The technical reference – called the STAR or “Security Threats Are Real” – explains the “how” and “why” behind much of the story. STAR addresses technical material, policy issues, hacker culture context, and even explains “Easter Eggs” in the story.

This book is the product of a community of Information Security professionals. It is written to illustrate how we are all interesting targets for various reasons. We may be a source of money for criminals through fraud, we might have computing resources that can be used to launch attacks on someone else, or we may be responsible for protecting valuable information. The reasons we are attacked are legion – and so are the ways we are attacked. Our goal is to raise awareness in a community of people who are under-served. Few of us really want dry lectures about how we should act to protect ourselves. But stories of criminals, corporate espionage, friendship and a little juvenile delinquency – now that is the way to learn.

They will protect my data (won’t they?)

So the other day I was talking with my buddy Kevin Finisterre.  One of the things that we were discussing was people who just don’t feel that security is an important aspect of their business because their customers don’t ask for it.  That always makes my brain scream “WHAT!?”. Here’s a direct quote from a security technology vendor “We don’t perform regular penetration tests because our customers don’t ask us to do that.”
Isn’t it the service provider’s/vendor’s responsibility to properly manage and maintain the security of their infrastructure?  Don’t they have an ethical obligation to their customers to protect the service that they are offering and any information that the customers decide to store on their systems?
The real question is, how many customers would they lose if the customers heard them say that? That is after all just like saying “We don’t care about security because our customers aren’t asking us to care about it.”  
So who have I heard this from? Here’s the (very) short list:
  • Vendors that make security software (like email gateways, anti-virus technology, Intrusion Prevention Systems, etc).
  • Vendors that make technology that is used to control our Nuclear Power Plants, Water Purification Plants, Traffic Control Systems, etc.
  • Vendors that sell business enabling technologies like PHP based Content Management Systems, Commercial Web Servers, Server based applications, Web Applications, etc.
  • Vendors that sell desktop applications like Financial Tracking Systems, Invoicing Systems, File Sharing Systems, Backup Solutions, etc.
  • I’ve also heard this from MAJOR Service Providers such as Web Hosting Providers, Email Providers, Backup Service Providers, etc.
  • The list goes on….
I think that people need a wake up call.  This strikes me as a serious ethical issue, what about you? Leave me a comment I’m very interested in feedback on this one. 

ROI of good security.

The cost of good security is a fraction of the cost of damages that usually result from a single successful compromise. When you choose the inexpensive security vendor, you are getting what you pay for. If you are looking for a check in the box instead of good security services, then maybe you should re-evaluate your thinking because you might be creating a negative Return on Investment.

Usually a check in the box means that you comply with some sort of regulation, but that doesn’t mean that you are actually secure. As a matter of fact, almost all networks that contain credit card information and are successfully hacked are PCI compliant (a real example). That goes to show that compliance doesn’t protect you from hackers; it only protects you from auditors and the fines that they can impose. What’s more, those fines are only a small fraction of the cost of the damages that can be caused by a single successful hack.

When a computer system is hacked, the hacker doesn’t stop at one computer. Standard hacker practice is to perform Distributed Metastasis and propagate the penetration throughout the rest of the network. This means that within a matter of minutes the hacker will likely have control over most or all of the critical aspects of your IT infrastructure and will also have access to your sensitive data. At that point you’ve lost the battle… but you were compliant, you paid for the scan, and now you’ve got a negative Return on that Investment (“ROI”).

So what are the damages? It’s actually impossible to determine the exact cost in damages that result from a single successful hack because it’s impossible to be certain of the full extent of the compromise. Nevertheless, here are some of the areas to consider when attempting to calculate damages:

  • Man hours to identify every compromised device
  • Man hours to reinstall and configure every device
  • Man hours required to check source code for malicious alterations
  • Man hours to monitor network traffic for hits of malicious traffic or access
  • Man hours to educate customers
  • Penalties and fines
  • The cost of downtime
  • The cost of lost customers
  • The cost of a damaged reputation
  • etc.

(The damages could *easily* cost well over half a million dollars on a network of only ~50 or so computers. )

Now let’s consider the Return on Investment of *good* security. An Advanced Penetration Test against a small IT infrastructure (~50 computers in total) might cost something around $16,000.00-$25,000.00 for an 80 hour project. If that service is delivered by a quality vendor then it will enable you to identify and eliminate your risks before they are exploited by a malicious hacker. The ROI of the quality service would be equal to the cost in damages of a single successful compromise minus the cost of the services. Not to mention you’d be compliant too…
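As a back-of-the-envelope sketch, the arithmetic behind that ROI claim (using the illustrative figures above, not actual quotes) looks like this:

```python
# ROI sketch using the article's illustrative numbers, not real pricing.
damages_avoided = 500_000   # plausible damages from one compromise (~50 hosts)
test_cost = 25_000          # high end of the quoted 80-hour engagement

# ROI here = damages avoided by finding the holes first, minus what you paid.
roi = damages_avoided - test_cost
print(roi)  # 475000
```

Even at the high end of the quoted price range, a single avoided compromise pays for the engagement roughly twenty times over.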

(Note: the actual cost of services varies quite a bit depending on what needs to be done, etc.)

So why is it that some vendors will do this work for $500.00 or $2,000.00, etc.? It’s simple: they are not delivering the same quality service as the quality vendor. When you pay $500.00 for a vulnerability scan you are paying for something that you could do yourself for free (go download nessus). In reality, when you pay $500.00 you are only paying for about 5 minutes of manual labor; the rest of the work is automated and done by the tools. (If you broke that down to an hourly rate you’d be paying something like $6,000.00 an hour, since you’re paying $500.00 per 5 minutes.) In the end you might end up with a check in your compliance box, but you’ll still be just as vulnerable as you were in the beginning.

Insecure *Security* Technologies

There is not a single piece of software that exists today that is free from flaws and many of those flaws are security risks. Every time a new security technology is added to an Infrastructure, a host of flaws are also introduced.  The majority of these flaws are undiscovered but in some cases the vendor already knows about them.

As an example, we encountered a Secure Email Gateway during an Advanced External Penetration Test for a customer. When a user sends an email, the email can either be sent from the gateway’s webmail gui, or from outlook.  If it is sent from outlook then the gateway will intercept the email and store the message contents locally. Then instead of actually sending the sensitive email message to the recipient, the gateway sends a link to the recipient. When the recipient clicks on the link their browser launches and they are able to access the original message content.
While this all looked fine, there was something about that gateway that made me want to learn more (a strange JBoss version response), so I did… I called the vendor and asked to speak to a local sales rep.  When the rep got on the phone I told him that I had an immediate need for 50 gateways but wouldn’t make any purchases until I knew that his technology was compatible with my infrastructure. He got really excited and asked me what I needed in order to verify compatibility. I told the rep that I needed a list of all Open Source libraries and software that had been built into the gateway along with version information.  The rep said that he didn’t really understand what I was asking him but that he’d go to someone in development and figure it out.  Within about fifteen minutes I received an email with a .xls attachment.  Shortly after that I received an email from the rep asking me to delete the .xls attachment because he wasn’t supposed to share that particular one… go figure…
(I deleted it after I read it)
When I studied the document I realized that the gateway was nothing more than a common bloated Linux box with a bunch of very, very old Open Source software installed on it.  In fact, based on the version information provided, the newest package installed was OpenSSL, and that was 3 years old!  The JBoss application server was even older than that and was also vulnerable as hell (though it had been modified to report incorrect version information). Needless to say, we managed to penetrate the secure email gateway using a published exploit that was also about 3 years old. Once we got in, our client decided that their secure gateway wasn’t so secure any more and did away with it.  We did contact the vendor, by the way, and they weren’t receptive or willing to commit to any sort of fix.
The fact of the matter is that we run into technology like this all the time, especially with appliances.  We’ve seen this same sort of issue with patch management technologies, distributed policy enforcement technologies, anti-virus technologies, HIDS technologies, etc.  In almost every case we are able to use these technologies to penetrate, or at least to assist in the penetration of, our target.  While most of these technologies introduce more risk than they resolve, there are a few good ones.  My recommendation is to have a third party assess the technology before you decide to use it; just make sure that they are actually qualified and not Fraudulent Security Experts.
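The kind of version check that first tipped us off can be sketched as follows. This is an illustrative example only: the banner format, the `MIN_VERSIONS` floors, and the helper names are all assumptions, and (as this gateway showed) appliances can simply lie about their versions, so a banner check is a starting point rather than proof.

```python
# Hedged sketch: parse product/version pairs out of an HTTP Server-style
# banner and flag anything older than a minimum acceptable release.
# The version floors below are assumptions chosen for illustration.

import re

MIN_VERSIONS = {"openssl": (1, 0, 2), "jboss": (5, 0, 0)}

def parse_banner(banner):
    """Extract (product, version-tuple) pairs from a 'Name/1.2.3' banner."""
    found = []
    for name, ver in re.findall(r"([A-Za-z]+)/(\d+(?:\.\d+)*)", banner):
        found.append((name.lower(), tuple(int(p) for p in ver.split("."))))
    return found

def outdated(banner):
    """Return the products whose advertised version is below our floor."""
    return [name for name, ver in parse_banner(banner)
            if name in MIN_VERSIONS and ver < MIN_VERSIONS[name]]

print(outdated("Apache/2.2.3 OpenSSL/0.9.8 JBoss/4.2.1"))  # ['openssl', 'jboss']
```

Comparing version tuples element by element is what makes `(0, 9, 8) < (1, 0, 2)` behave correctly, where a naive string comparison of "0.9.8" and "1.0.2" would not generalize.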