What Thieves Know About Anti-Phishing Solutions & What This Means To You

Without taking proper precautions, your computer is a veritable smörgåsbord for hackers. Hackers have developed an array of techniques to infiltrate your system, extract your data, install self-serving software, and otherwise wreak havoc. Every network in the world is vulnerable to hacking attempts; it’s simply a matter of which systems the hackers deem worth the effort. Preventing hackers from successfully compromising your data requires an understanding of the available solutions. Unfortunately, very few of those solutions are truly effective.

The Differences Between Phishing and Spear Phishing

Phishing casts a wide net to hundreds, thousands, or even millions of email addresses. Phishing can be used to steal passwords, perform wide-scale malware deployment (think WannaCry), or even as a component of disinformation campaigns (think Russia). More often than not, phishing is carried out by financially motivated criminals. In most cases, phishing breaches are not detected until it is too late to prevent the damage.
Spear phishing, as the name implies, is a more targeted version of phishing. Spear phishing campaigns are generally conducted against companies, specific individuals, or small groups of individuals. The primary goal of a spear phishing campaign is to gain entry to a target network. The DNC hack, for example, was accomplished by using spear phishing as the initial method of breach. Once the breach was effected, the hackers began performing Distributed Metastasis (aka pivoting) and secured access to sensitive data.
In nearly all cases, businesses and governments are ill prepared to defend against phishing attacks. This is in part because the solutions that exist today are largely ineffective. Most commercial phishing platforms provide the same basic level of benefit as automated vulnerability scanners. If you really want to defend against phishing, you need a solution designed specifically for you and your network.

Real (not commercial) Tactics For Phishing and Spear Phishing

An email will go out, supposedly from a trusted source. In reality, it will come from a chameleon domain set up specifically by the hackers to leverage your trust. A chameleon domain is a domain that appears to be the same as your company’s domain or a high-profile domain but isn’t. (The domains are often accompanied by a clone website with a valid SSL certificate.) For example, instead of linkedin.com, the chameleon domain might be 1inkedin.com. These two domains might look identical at a glance, but in the second, the lowercase L of LinkedIn has been exchanged for the number one. Historically, hackers used Internationalized Domain Name (IDN) homograph attacks to create chameleon domains, but that methodology is no longer reliable.
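This substitution trick is easy to enumerate programmatically. Below is a minimal Python sketch of a lookalike-domain generator of the kind defenders can use to monitor for chameleon candidates; the confusables mapping is a small illustrative subset, not a complete table, and the function name is ours:

```python
# A few visually confusable single-character substitutions.
# This mapping is an illustrative subset, not a complete confusables table.
CONFUSABLES = {"l": "1", "i": "1", "o": "0", "m": "rn", "w": "vv"}

def chameleon_variants(domain: str) -> set[str]:
    """Generate single-substitution lookalikes of a registered domain."""
    name, _, tld = domain.rpartition(".")
    variants = set()
    for i, ch in enumerate(name):
        swap = CONFUSABLES.get(ch)
        if swap:
            # Replace one character with its confusable twin.
            variants.add(name[:i] + swap + name[i + 1:] + "." + tld)
    return variants
```

Comparing the sender domain of inbound mail against such a variant set is one cheap way to flag a chameleon domain before a user ever sees the message.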
An email might also arrive from a different Top Level Domain (TLD): say, linkedin.co, linkedin.org, or even linkedin.abc. There are many opportunities for deception when it comes to creating a chameleon domain. All of these oppotrunities exist because the human brain will read a word the same way so long as the first and last letters are in the correct place. For example, you likely read the word “oppotrunities” in the previous sentence without noticing that we swapped the letters “T” and “R”. Experienced hackers are masters at exploiting this human tendency. (https://www.mrc-cbu.cam.ac.uk/people/matt.davis/cmabridge/)
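The transposition trick can be automated just as easily. A minimal sketch that enumerates adjacent inner-letter swaps, exactly the class of change that turns “opportunities” into “oppotrunities”:

```python
def inner_swaps(word: str) -> set[str]:
    """Variants formed by swapping two adjacent inner letters while
    keeping the first and last letters of the word in place."""
    variants = set()
    for i in range(1, len(word) - 2):
        chars = list(word)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        variants.add("".join(chars))
    variants.discard(word)  # swapping two identical letters changes nothing
    return variants
```

Running a company name through a generator like this produces the full set of “readable” misspellings an attacker might register.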
When (spear) phishing is combined with malware it becomes a powerful weapon. A common misconception is that antivirus and antimalware software will protect you from infection. If that were true, threats like the recent WannaCry (MS17-010) outbreak would never have been a problem. The reality is that antivirus technologies aren’t all that effective at preventing infections. In fact, Intrusion Prevention Systems (IPS) aren’t all that effective at preventing intrusions either. If they were, we would not be seeing an ever-increasing number of breached businesses (nearly all of which use some form of IPS or a third-party MSSP).
The bad guys may target 3 or 30 people with a spear phishing attack. To be successful with a well-crafted attack, they only need a single victim. That victim usually becomes their entry point into a network, and from there it is only a matter of time until the network is fully compromised. With a normal phishing attack, campaigns with larger numbers of victims are desirable; more victims means more captured data.

Businesses Making Money from Anti-Phishing

For some companies, not a week goes by without a phishing attempt landing in their email server. Phishing attempts are a source of consternation for companies everywhere.
Security companies, concerned about the devastation that phishing and spear phishing efforts can wreak, have taken up the mantle of offering education about phishing to their clients. They have special programs for mid-size and large corporations to combat phishing efforts.
Once a company signs up for education, it’s common to test it soon afterward to see what needs to be covered. For instance, a phishing attempt is made against half of the company or all of it. It will be a typical, run-of-the-mill ‘attack,’ where the users are given a convenient link and encouraged to go there to ‘make it right’ again.
After clicking on the link, the user is taken to a site which informs them that they were phished, how they were phished, and safety measures to prevent future successful phishing. Information about the success rate of the phishing attempt is also gathered, so the security company has a baseline. From that information, educational materials are given to the company for further training.
A set amount of time later, usually a few months, the security company runs the same type of phishing attempt on the employees of the target company. The success rates are then compared (fewer people are usually fooled the second time), and the target company receives certification that it is safer from phishing attempts now that it has been educated.

How Effective Are Anti-Phishing Companies?

Employing an anti-phishing security firm can provide a false sense of security for companies that would be vulnerable to phishing attempts. Going through the education reduces the likelihood that a blatant, basic phishing attempt will succeed, but it usually does not do much to prevent a real-world, targeted attack, especially a spear phishing attack.
Anti-phishing companies generally use automated systems to test a company’s phishability. They use the most rudimentary phishing techniques, yet many advertise their solutions as more effective against real-world phishing attempts than they actually are. In other words, these anti-phishing companies generally provide a political solution rather than a real solution to the problem of phishing and spear phishing. This is very similar to how vulnerability scanning companies market themselves.
The people who want to break into a company’s system are patient. They custom-create a strategy to get into your systems rather than sending a blanket email to everyone in the company; that would be too blatant. Their attempts to socially engineer a favorable outcome are most likely going undetected.
The biggest question that an anti-phishing company has to ask itself is whether it is providing the level of security that it is promoting. By certifying employees as phish-proof, does that mean those employees are truly savvy enough to detect ANY phishing attempt? Is the security company simply marketing, or is it truly interested in protecting its clients against phishing?
Before going with a company that advertises anti-phishing education, keep in mind that spear phishing is highly customized and most likely won’t come to you as an email from PayPal, LinkedIn, or another popular site. It will most likely come to you from someone you know, possibly within your own company. Ask the vendor what measures it plans to take to help you truly fight the spear phishing attacks at your company.

What they are not telling you about the CIA leaks

The CIA leaks are making huge waves across the world. In a nutshell, the documents claim to reveal some of the hacking capabilities that the CIA has. Many privacy advocates believe that exposure of secrets like these is a net benefit for citizens because it provides transparency in government action. The media also likes leaks like these because they provide excellent story fodder.
But there is one thing that no one is talking about with these leaks, and it has serious long-term consequences for all of our foreign relationships. The concept is called attribution in the intelligence field, and it’s important that everyone understand what it is and why it matters so they can put the real danger of these leaks into the proper context.

What is Attribution?

Attribution is the ability to accurately trace evidence of a situation back to whoever caused it. Even if you don’t know the term, these examples will make it quite clear. Let’s say you’re a child on a school playground. You tell your best friend a secret that you don’t want anyone to know about. A few days later, the whole school knows. If you know you didn’t tell anyone, who told the secret? The obvious one to blame is the best friend. That breach of trust could end your friendship.
That’s a simple example. A more complex one is a murder case. Let’s say that your neighbor kills your best friend in your house, but isn’t caught. Instead, you are accused and you spend a lot of money on lawyers to get the charges dismissed. Your reputation is damaged, but you stay out of jail. The case grows cold.
Now, let’s say over time you become close friends with your neighbor. Later, for whatever reason, the neighbor gets his DNA analyzed and there is a match to the old murder. The neighbor might get arrested, but how would you react?
In the first case, the fact that only one other person knew the secret and leaked it allows us to attribute the leak to that person. In the second, a telltale fingerprint that’s impossible to forge creates an attribution that wasn’t there before and provides ironclad evidence that you weren’t involved.

Leaking and Attribution

Put bluntly, the general public and the media are overreacting about how much the CIA might or might not be using the leaked tools to spy on them. A much more serious concern is what every other government in the world is thinking about the information in these leaks. Here’s why.
One of the roles of any government is to protect the interests of the country and its citizens. Countries use intelligence networks, spies, hacking, and other espionage techniques to gather information in advance about what their enemies and their allies might do next. Failing to get that knowledge puts the country at risk of something called information asymmetry: other countries can get more information about you than you can about them. It’s like they can peek at your hand in a game of poker before the betting round, but you can’t peek at theirs.
The CIA’s role in America’s spy networks is international intelligence. The CIA isn’t going to turn their attention to people inside of the U.S. unless there is an extraordinarily good reason, despite what conspiracy theorists may think. But foreign governments definitely know the CIA will have at least thought about spying on them at some point. However, unless a spy was caught red-handed and confessed they were a CIA operative, it’s hard for a country to accuse us of spying on them in a specific instance. In short, there is no attribution. Just guesses.
What the CIA leaks do is give information to every government who wants to know how we might hack them. It is extremely difficult to attribute a hacking attack to a specific state actor, despite what the media and television might lead you to believe. You might be able to detect the attack and gather forensic evidence about a hacking incident, but until you can get definitive proof that another country knew about that particular exploit at the time of the attack and had the tools necessary to leverage it, you can’t say for certain. The leak now gives other governments details they can use to analyze their old forensic data and see if there is a match, much like the DNA evidence in the earlier example.
In short, now they can prove that we peeked at their poker hands and know how we did it. The how is also crucial not just for attribution, but for how hacks are conducted between governments.

Hidden Exploits

99.9% of all breaches are the result of the exploitation of known vulnerabilities (for which patches exist), many of which have been published (open to the public) for over a year. But those aren’t the vulnerabilities that governments generally want to exploit. They want to target 0-day vulnerabilities with 0-day exploits. A 0-day vulnerability is a bug in software that is unknown to the vendor or the public. A 0-day exploit is the software that leverages a 0-day vulnerability, usually to grant its user access to and control over the target. 0-days are the playground secrets of geopolitical hacking.
Governments want to keep some 0-day exploits as state secrets. Once an exploit is revealed, a defense can be built against it in as little as 24 hours, whereas an unrevealed 0-day exploit can remain usable for six months or even years. That is a lot of time for a government. But governments don’t want to use these too often anyway. Each time a 0-day exploit is used successfully, it leaves behind some form of forensic evidence that could later be used to gain attribution. The first time might be a surprise. The second will reveal similar patterns across the two attacks. The third time runs the risk of getting caught.
The value of these exploits varies and is determined by operational need, how rare the exploit is, how likely it is to be discovered or detected, etc. Governments can pay as little as tens of thousands of dollars or as much as several million dollars for a single 0-day exploit. Each time a 0-day exploit is used, its lifespan is shortened significantly. In some cases, a 0-day is only used once before it is exposed (burnt). In other cases, 0-day exploits may last years before they are burnt. One thing is always true: if governments are going to spend millions of dollars on 0-day exploits, they are not likely to use them on low-value targets like everyday civilians or for easily detected mass exploitation. They are far more likely to be used against high-value, well-protected targets where detection of the breach simply isn’t an option.
Because these are not open secrets, when 0-day exploit information is released in a leak it becomes extremely easy to attribute past attacks to a state, and it diminishes that state’s intelligence capabilities. Furthermore, every other government now has leverage against that state, and could even have grievances. They could feel like the unjustly accused murder suspect. And unlike the suspect, states have retaliation options that citizens do not, such as levying sanctions or declaring war. Worse, they could even gain the moral high ground, even though they might be doing the same thing, because they managed to keep their intelligence information secret.
Regardless of whether you think leakers and whistleblowers are heroes or traitors, there are consequences for leaking intelligence information to the world. The average American citizen doesn’t know and can’t know what the foreign consequences will be. Before you go out and cheer the next leak, consider what the consequences might be for our country now.  What does it mean when we lose our intelligence capabilities and our enemies don’t? What does it mean when our enemies and allies know just how, when, and most importantly, who managed to hack them?

EXPOSED: How These Scammers Tried To Use LinkedIn To Steal Our Client’s Passwords

Earlier this morning one of our more savvy customers received an email from [email protected]. The email contained a “New Message Received” notification allegedly sourced from CEO Tom Morgan. Contained in the email was a link that read, “Click here to sign in and read your messages”. Fortunately, we had already provided training to this particular customer that covered Social Engineering and Phishing threats. So, rather than clicking on the link, they forwarded the email to Netragard’s Special Project Team, which is like throwing meat to the wolves. The actual email is provided below in Figure 1.
Figure 1
The first step in learning who was behind this threat was to follow the “click here” link. The link had been shortened with the ow.ly URL shortener, so we used curl to expand it. While we were hopeful that the URL would deliver some sort of awesome zeroday or malware, it didn’t. Instead it served up a fake LinkedIn page (Figure 2) designed to steal login and password information.
Figure 2
The server hosting the phishing site was located in Lebanon and, of course, was not maintained or patched properly. Quick reconnaissance showed that directory listing was enabled, that the server was using an outdated and very exploitable version of cPanel, and that the server had been breached by at least four other parties (at least four backdoors were installed). We used one of the backdoors to gain access to the system in the hope of learning more (Figure 3).
Figure 3
Our team quickly zeroed in on the “linkd.php” file that was used to generate the phishing page shown in Figure 2.   We explored the file looking for information related to where stolen passwords were being kept. Initially we expected to see the passwords logged to a text file but later found that they were being emailed to an external Gmail account. We also looked for anything that might provide us with information about who was being targeted with this attack but didn’t find much on the system.
We were able to identify the victims of the campaign by making hidden modifications to the attackers’ phishing platform. These modifications allowed us to track who submitted their credentials to the phishing site. When studying the submission data, it quickly became apparent that the attackers were almost exclusively targeting Luxembourg-based email addresses (the .lu TLD) and were having a disturbingly high degree of success. Given that people often reuse passwords in multiple locations, this campaign significantly increased the level of risk faced by the organizations that employ the victims. More directly, chances are high that those organizations will be breached using the stolen passwords.
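The .lu concentration fell out of a simple tally over the captured submissions. A simplified version of that analysis (the data shape and addresses are illustrative; we obviously cannot share the real submissions):

```python
from collections import Counter

def tld_breakdown(victim_addresses: list[str]) -> Counter:
    """Count captured credential submissions per top-level domain
    of the victim's email address."""
    tlds = []
    for addr in victim_addresses:
        domain = addr.rsplit("@", 1)[-1]  # everything after the last '@'
        if "." in domain:
            tlds.append(domain.rsplit(".", 1)[-1].lower())
    return Counter(tlds)
```

A skew like `Counter({'lu': 412, 'com': 9, ...})` is what told us this was a targeted, country-specific campaign rather than an indiscriminate one.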
The LinkedIn campaign was hardly the only campaign being launched from the server. Other campaigns were identified that included but may not be limited to DHL, Google, Yahoo and DropBox. The DropBox campaign was by far the most technically advanced. It leveraged blacklisting to avoid serving the phishing content to Netcraft, Kaspersky, BitDefender, Fortinet, Google, McAfee, AlienVault, Avira, AVG, ESET, Doctor Web, Panda, Symantec, and more. In addition to the blacklisting it used an external proxy checker to ensure page uptime.
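The blacklisting the DropBox kit used boils down to a gatekeeper check in front of the page: if the visitor’s source address or User-Agent matches a known security vendor, serve benign content instead of the phish. A hedged sketch of the pattern (the keyword list, IP prefixes, and helper name are all illustrative), worth recognizing if you ever review seized kit source:

```python
# Illustrative stand-ins for the vendor fingerprints a kit blocks.
BLOCKED_UA_KEYWORDS = ("netcraft", "kaspersky", "bitdefender", "googlebot")
BLOCKED_IP_PREFIXES = ("66.249.",)  # hypothetical scanner address ranges

def should_cloak(user_agent: str, source_ip: str) -> bool:
    """Return True when the visitor looks like a security scanner, in
    which case the kit serves a harmless page instead of the phish."""
    ua = user_agent.lower()
    if any(keyword in ua for keyword in BLOCKED_UA_KEYWORDS):
        return True
    return any(source_ip.startswith(p) for p in BLOCKED_IP_PREFIXES)
```

This is why automated reputation scanners often see nothing wrong with a phishing URL that is actively victimizing ordinary users.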
Finally, we tracked the IP addresses that were connecting to the system’s various backdoors. Those IP addresses all geolocated to Nigeria and are, unfortunately, dynamic.
This phishing campaign highlights two specific issues that can both be countered with careful planning. The first is that employees are easy to phish, especially when they are outside of the office and not protected by spam filters. This is problematic because employees often reuse the same passwords at work as they do outside of work. So stealing a LinkedIn password often provides attackers with access to other, more sensitive resources, which can quickly result in a damaging breach and access to an organization’s critical assets. The solution to this issue is reasonably simple. First, employees should be required to undergo regular training in various aspects of security, including but not limited to Social Engineering and Phishing. Second, employers should require employees to use password management tools similar to 1Password. Using password management tools properly will eliminate password reuse and significantly mitigate the potential damages associated with password theft.
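On the detection side, even a crude pass over credential data exposes reuse. Given (account, password-hash) pairs, say from an internal audit or a breach-notification feed, flagging hashes shared across accounts is trivial (a sketch; the record shape and names are ours):

```python
from collections import defaultdict

def shared_hashes(records: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Map each password hash used by more than one account to the
    set of accounts that share it. Records are (account, hash) pairs."""
    by_hash = defaultdict(set)
    for account, pw_hash in records:
        by_hash[pw_hash].add(account)
    # Keep only hashes shared by two or more accounts.
    return {h: accounts for h, accounts in by_hash.items() if len(accounts) > 1}
```

Any non-empty result is a reuse problem an attacker with one stolen password can exploit.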
As for our Nigerian friends, they won’t be operating much longer.

How we tricked your HR lady into giving us access to every customer’s credit card number

We recently completed the delivery of a Realistic Threat PCI focused Penetration Test for a large retail company. As is always the case, we don’t share customer identifiable information, so specific details about this engagement have been altered to protect the innocent. For the sake of this article we’ll call the customer Acme Corporation.

When we were first approached by Acme Corporation we noticed that they seemed well versed with regard to penetration testing. As it turned out, they had been undergoing penetration testing for more than a decade with various penetration testing vendors. When we asked them how confident they were about their security, they told us that they were highly confident and that no vendor (or hacker, to their knowledge) had ever breached their corporate domain, let alone their Cardholder Data Environment (CDE). We were about to change that with our Realistic Threat Penetration Testing services.

Realistic Threat Penetration Tests have specific characteristics that make them very different from other penetration tests.

The minimum characteristics that must be included for a penetration test to be called Realistic Threat are:

  1. IT/Security Staff must not be aware of the test.
  2. Must include solid reconnaissance.
  3. Must not depend on automated vulnerability scanners.
  4. Must include realistic Social Engineering not just elementary phishing.
  5. Must include the use of undetectable (and non-malicious) malware.
  6. Must be covert so as to enable propagation of compromise.
  7. Must allow legitimate incident response from the customer.

Let’s begin…

As with all engagements, Netragard’s team began with reconnaissance. Reconnaissance is the military term for the passive gathering of intelligence about an enemy prior to attacking. It is what enables our team to construct surgical plans of attack that allow for undetected penetration into targeted networks. During reconnaissance we focus on mapping out all in-scope network-connected devices using truly passive techniques and without making direct network connections. We also focus on passive social reconnaissance using everything from Facebook to LinkedIn to Jigsaw.

When Netragard finished performing reconnaissance against Acme Corporation it became apparent that direct technological attacks would likely not succeed. Specifically, Acme Corporation’s externally facing systems were properly patched and properly configured. Their web applications were using a naturally secure framework, appeared to follow secure coding standards, and existed behind a web application firewall. Firing off technological attacks would do little more than alert their IT staff and we didn’t want that (their IT staff were deliberately unaware of the test).

Reconnaissance also identified a related job opportunity posted on LinkedIn for a Sr. IT Security Analyst. Interestingly, the opportunity was not posted on Acme Corporation’s website. When Netragard reviewed the opportunity, it contained a link that redirected to a job application portal with a resume builder web form. This form was problematic because it worked against our intention of submitting a RADON-infected resume to HR. We backtracked and began chatting on LinkedIn with the lady who posted the job opportunity. We told her that the form wasn’t loading for us but that we were interested in applying for the job. She then asked if we could email our resume to her directly, and of course we happily obliged.

Our resume contained a strand of RADON 2.0. RADON is Netragard’s zeroday malware generator designed specifically with customer well-being and integrity in mind. A strand is the actual malware that gets generated.   Every strand of RADON is configured with an expiration date. When the expiration date is reached the strand entirely removes itself from the infected system and it cannot be run again. RADON was created because other tools including but not limited to Metasploit’s Meterpreter are messy and leave files or even open backdoors behind. RADON is fully undetectable and uses multiple, non-disruptable covert channels for command and control. Most importantly when RADON expires it leaves systems in a clean, unaltered, pre-infection state.

Shortly after we delivered our infected resume, RADON called home: it had successfully infected the desktop belonging to the nice HR lady we chatted with on LinkedIn. Our team covertly took control of her computer and began focusing on privilege escalation. RADON was running with the privileges of the HR employee we infected. We quickly learned that those privileges were limited and would not allow our team to move laterally through the network. To elevate privileges, we impersonated the compromised HR employee and forwarded our infected resume to an IT security manager. The manager, trusting the source of the resume, opened it and was infected.

In short time, RADON running on the IT security manager’s desktop called home. It was running with the privileges of the IT security manager, who also happened to have domain administrative privileges. Our team ran procdump on his desktop to dump the memory of the LSASS process. This is important because the LSASS process contains copies of credentials that can be extracted from a dump. The procdump command is “safe” because it is a standard Microsoft program and does not trigger security alerts. However, the process of extracting passwords from the dump often does trigger alerts. To avoid this, we transferred the dump to our test lab, where we could safely run mimikatz to extract the credentials.

We then used the credentials to access all three of Acme Corporation’s domains and extracted their respective password databases. We exfiltrated those databases back to our lab and successfully cracked 93% of all current and historical passwords for all employees at Acme Corporation. The total elapsed time between initial point of entry and password database exfiltration was 28 minutes. At this point we’d established an irrevocable foothold in Acme Corporation’s network. With that accomplished, it was time to go after our main target, the CDE.

The process of identifying the CDE required aggressive reconnaissance. Our team searched key employee desktops for any information that might contain credentials, keys, VPN details, etc. Our first search returned thousands of files spanning more than a decade. We then ordered the files by date of modification and content and quickly found what we were looking for. The CDE could only be accessed by two users via VPN from within Acme Corporation. Making things more complex, the VPN was configured with two-factor authentication that was not tied into the domain.

Fortunately for us, this was not the first time we’d run into this type of configuration. Our first step toward breaching the CDE was to breach the desktop of the CDE maintenance engineer. This engineer’s job was to maintain the systems contained within the CDE from both a functionality and a security perspective. To do this, we placed a copy of RADON on his desktop and executed it as a domain administrator using RPC. The new RADON instance running on the CDE maintenance engineer’s desktop called home and we took control.

We quickly noticed that various VPN processes were already running on the CDE maintenance engineer’s desktop. So we checked the routing table for IP addresses that we knew to be CDE-related (from the files we gathered earlier), and sure enough they existed. This confirmed that there was an active VPN session from our newly compromised desktop into the CDE. Now all we had to do was hijack that session, breach the CDE, and take what we came for.

We used the net shell command (netsh) to create a port forwarding rule from the infected desktop to the CDE. We then used a standard Windows RDP client to connect to the CDE server, but when we tried to authenticate, it failed. Rather than risking detection, we decided to take a step back and explore the CDE maintenance engineer’s desktop to see if we could find credentials related to the CDE. Sure enough, we found an xls document in a folder named “Encrypted” (which it wasn’t) that contained the credentials we were looking for. Those credentials allowed us to log into the CDE without issue.

When we breached the CDE, we noticed that our user was a domain administrator for that environment. As a result, not only did we have full control over the CDE, but our activity would appear to be normal maintenance rather than hacker related. In short time we were able to locate customer credit card data, which was properly encrypted. Despite this, we were confident that we’d be able to decrypt it by leveraging discoveries from our previous reconnaissance efforts (at the customer’s request, we did not make that attempt).

When we began exploring avenues for data exfiltration we found that the CDE had no outbound network controls. As a result, if we were bad actors we could have sent the credit card data to any arbitrary location on the Internet.

In summary, there were three points of failure that enabled our team to breach the CDE. The first point of failure is unfortunately common; network administrators tend to work from accounts that have domain administrative privileges. What network administrators should do instead is to use privileged accounts only when needed. This issue is something that we encounter in nearly every test that we do and it almost always allows us to achieve network dominance.

The second point of failure was the VPN that created a temporary bridge from the LAN to the CDE. That VPN was configured with split tunneling. It should have been configured in such a way that when the computer was connected to the CDE it was disconnected / unreachable from the corporate network. That configuration would have prevented our team from breaching the CDE with the described methodology.

The third point of failure was that the CDE did not contain any outbound network controls. We were able to establish outbound connections on any port to any IP address of our choosing on the Internet. This means that we were in a position to extract all of Acme Corporation’s credit card data without detection and without issue. Clearly, the correct configuration is one that is highly restrictive and that alarms on unexpected outbound connections.

Finally, the differences between compliance and security are vast. In the past decade we’ve seen countless businesses suffer damaging compromises at the hands of malicious hackers. These hackers get in because they attack with more talent, more tenacity, and more aggression than nearly all of the penetration testing vendors operating today. For this reason, we can’t stress enough how important it is that businesses select the right vendor and test at realistic threat levels. It is impossible to build effective defenses without first understanding how a real threat will align with your unique risks. At Netragard, we protect you from people like us.

Ukrainian hacker admits stealing business press releases for $30M: What they’re NOT telling you

The sensationalized stories about the hacking of PR Newswire Association, LLC, Business Wire, and Marketwired, L.P. (the Newswires) are interesting but not entirely complete. The articles that we’ve read so far paint the Newswires as victims of some high-talent criminal hacking group. This might be true if the Newswires actually maintained a strong security posture, but they didn’t. Instead, their security posture was insufficiently robust to protect the confidentiality, integrity, or availability of the data contained within their networks. We know this because enough telling details about the breach were made public (see the referenced document at the end of this article).
In this article we first provide a critical analysis of the breaches based on public information, primarily from the published record. We make assumptions based on the information provided and on our own experience with network penetration to fill in some of the gaps. We call out the issues that we believe allowed the hackers to achieve compromise and cause damage to the Newswires. Later we provide solutions that could have been used (and can be used by others) to prevent this type of breach from happening again. If we missed something, or if you can add to the solutions that we provide, please feel free to comment and we will update this article accordingly.
From the published record we know that Marketwired was hacked via the exploitation of SQL Injection vulnerabilities. We know that the hacking was ongoing for a three-year period. Additionally, according to the records, the SQL Injection attacks happened on at least 390 different occasions over a three-month span (between April 24th, 2012 and July 20th, 2012). We assume that Marketwired was unaware of this activity because no responsive measures were taken until years after the initial breach and well after damage was apparently realized.
With regards to SQL Injection, an attacker usually needs to build the attack through a process of trial and error, which generates an abundance of suspicious error logs. In the rare cases when an attacker doesn’t need to build the attack, the attack itself will still generate a wealth of suspicious log events. Moreover, SQL Injection made its debut 17 years ago in a 1998 issue of Phrack magazine, a popular hacking zine. Today SQL Injection is a well-known issue and relatively easy to mitigate, detect, and/or defeat. In fact, almost all modern firewalls and security appliances have SQL Injection detection / prevention built in. The normally overt nature of SQL Injection, the extended timeframe of the activity, and the apparent lack of detection strongly suggest that Marketwired’s security was (and may still be) exceptionally deficient.
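To illustrate how easy SQL Injection is to mitigate, compare string concatenation against a parameterized query. This sketch uses Python’s standard sqlite3 module; the table and column names are invented for the example:

```python
import sqlite3

# Vulnerable: attacker input is concatenated directly into the SQL text,
# so a crafted username can rewrite the query itself.
def lookup_unsafe(conn, username):
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

# Mitigated: the input is bound as a parameter, so the database treats it
# strictly as data, never as SQL.
def lookup_safe(conn, username):
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

With a classic payload such as `' OR '1'='1`, the unsafe version returns every row in the table while the parameterized version returns nothing.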
It is Netragard’s experience from delivering Platinum level (realistic threat) Penetration Tests that businesses have a 30-minute to 1-hour window in which to detect and respond to an initial breach. When Netragard’s team breaches a customer network, if the customer fails to detect and revoke Netragard’s access within that timeframe then the customer will likely not be able to forcefully expel Netragard from its network. Within a 30-minute window of initial penetration Netragard is 89% likely to compromise its customer’s domain controller(s) and achieve total network dominance. Within a 1-hour window of initial penetration Netragard is 98% likely to compromise its customer’s domain controller(s) and achieve total network dominance.
We know that Marketwired’s failure to detect the initial breach (and subsequent attacks) provided the hackers with ample time to metastasize their penetration throughout the network. The published record states that the hackers “installed multiple reverse shells”. The record also states “in or about March 2012, the Hackers launched an intrusion into the networks of Marketwired whereby they obtained contact and log-in credential information for Marketwired’s employees, clients, and business partners.” We assume that the compromise of “log-in credential information” means that the hackers successfully compromised Marketwired’s domain controllers and exfiltrated / cracked their database of employee usernames and passwords. Given the fact that people tend to use the same passwords in multiple places (discussed later as well), the potential impact of this is almost immeasurable.
While considerably less information about the breach into PRN’s network is available, the information that is public shows that significant security deficiencies existed. According to the published record, PRN detected the intrusions into its network well after the network was breached, which represents a failure of effective incident detection. Moreover, it appears that PRN’s response was largely ineffective: PRN ejected the hackers from the network but the hackers regained access. According to the record, this dance of ejection and re-breach happened at least three times.
The third breach into PRN is very telling. This time the hackers purchased a list of logins taken from a compromised social networking website. The hackers then “reviewed and collected usernames and logins for PRN employees” from that list and used the collected information “to access the Virtual Private Network (“VPN”) of PRN”. Clearly PRN did not use two-factor authentication for its VPN, which would have prevented this method of penetration. It is also important to note that two-factor authentication is necessary to satisfy some regulatory requirements. Additionally, PRN’s policy around password usage and password security was seriously deficient or was not being adhered to. Specifically, PRN employees were using the same passwords on social media websites (and possibly other places) as they were for PRN’s network. As with Marketwired’s breach, PRN’s breaches were very likely preventable.
Even less information is available about Business Wire’s breach. According to the records Business Wire’s network was initially breached via SQL Injection (like Marketwired) by another hacker at an earlier time. Iermolovych (the name of the hacker who hacked the Newswires) purchased access to Business Wire’s network from the other hacker. As with Marketwired and PRN, Business Wire’s own detection and response capabilities were (and may still be) lacking. It is unclear from the record as to how long the hackers were able to operate within Business Wire’s network but it is clear that the initial SQL Injection attack and subsequent breach was not detected or responded to in a timely manner.
Unfortunately, based on our own experience, most businesses are as vulnerable as the Newswires. The reasons for this are multifaceted and we may cover them in another article at a later time. For now, we’ll focus on what could have been done to prevent the damages that resulted from this breach. It’s important to stress that every network will be breached at some point during its lifetime. The question is whether the Incident Response will be effective at detecting the breach and preventing it from becoming damaging.
To understand the solution we must first understand the problem. Damaging breaches have two common characteristics: poor network security and ineffective Incident Response. We know from studying historical breach data from the Verizon DBIR and OWASP that approximately 99.8% of all breaches are the product of the exploitation of known vulnerabilities for which CVEs have already been published (many for over a year). This validates our first characteristic. The second characteristic is validated by the ever-increasing number of damaging breaches that are reported each year. The fact that these breaches are damaging shows that Incident Response has failed.
Most of the reported breaches in the past decade could have been avoided by proactively countering the two aforementioned points of failure. Countering these failure points requires actionable intelligence about how a threat will align with the unique risks of each associated network and how sensitive data will be accessed. The best method of assembling this intelligence is to become the victim of a breach not through malicious hacking but instead through high-quality, realistic-threat penetration testing.   Unfortunately this isn’t as easy as it sounds. The industry standard penetration test is a vetted vulnerability scan which is far from realistic and provides no real protective benefit. There are a few realistic threat penetration-testing vendors in operation but finding them can be a challenge.
Some of the characteristics of a realistic threat penetration test include but are not limited to social engineering with solid pretexts, undetectable malware, the non-automated identification and exploitation of network and web application vulnerabilities, exploit customizations, and stealth penetration. A realistic penetration testing team will never request that its IP addresses be whitelisted, nor will it request credentials (except perhaps for web application testing). The team will similarly not be dependent on (and may elect not to even use) automated tools like Nessus, Nexpose, the Metasploit Framework, etc. Automated tools are useful for basic security and maintenance purposes but not for the production of realistic threats. Do you think the hackers that hacked Target, Sony, Hannaford, LinkedIn, The Home Depot, Ashley Madison, or the Newswires used those scanners?
The report generated by a realistic penetration test should cover the full spectrum of vulnerabilities as well as the Path to Compromise (PTC). The PTC represents the path(s) that an attacker must follow to compromise sensitive data from a defined source (Internet, LAN, etc.). Identifying the PTC is arguably more important from a defensive perspective than vulnerability identification. This is because it is technically impossible to identify every vulnerability that exists in a network (or in software), and so there will always exist some level of gap. Identifying the PTC allows a business to mitigate this gap by creating an effective IR plan capable of detecting and responding to a breach before it becomes damaging. Netragard’s platinum level Network Penetration Testing services produce a high-detail PTC for exactly this reason.
The Newswires (and many other businesses) could likely have prevented their breach if they had done the following.

  1. Deployed a response-capable Web Application Firewall and configured the firewall specifically for the application(s) that it was protecting. This would have prevented the SQL Injection attacks from being successful.
  2. Deployed a Network Intrusion Detection / Prevention solution to monitor network traffic bidirectionally. This would likely have enabled them to detect the reverse-shells.
  3. Deployed a Data Loss Prevention solution. This would likely have prevented some if not all of the press releases from being exfiltrated.
  4. Deployed a SIEM capable of receiving, correlating, and analyzing feeds from system logs, security appliances, firewalls, etc. This would likely have allowed the Newswires to detect and respond to the initial attacks before breach and well before damage.
  5. Purchased realistic-threat penetration testing that produced a report containing a detailed PTC and then implemented the suggested methods for mitigation, remediation, and hardening provided in the report. The test would enable them to measure the effectiveness of their existing security solutions and to close any gaps that might exist.
  6. Deployed an internal honeypot solution (like Netragard’s) that would detect lateral movement (Distributed Metastasis) inside of their networks and allow the Newswires to respond prior to experiencing any damage.
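As a toy illustration of point 4 above, even a crude log-correlation rule is enough to surface an attacker who probes a web application hundreds of times. This Python sketch uses deliberately simplistic, hypothetical injection signatures and a made-up log format (source IP followed by the request); a real SIEM rule set would be far more extensive:

```python
import re
from collections import Counter

# Crude, illustrative SQL Injection signatures -- for demonstration only.
SQLI_SIGNATURES = re.compile(
    r"union\s+select|'\s*or\s*'1'\s*=\s*'1|--", re.IGNORECASE
)

def flag_suspicious_sources(log_lines, threshold=3):
    """Count signature hits per source IP across the log lines and
    return the IPs that meet or exceed the alert threshold."""
    hits = Counter()
    for line in log_lines:
        src_ip, _, request = line.partition(" ")
        if SQLI_SIGNATURES.search(request):
            hits[src_ip] += 1
    return {ip for ip, count in hits.items() if count >= threshold}
```

Against the 390 recorded injection attempts, even this naive threshold rule would have fired hundreds of times.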

Records for reference

Enemy of the state

A case study in Penetration Testing
We haven’t been blogging as much as usual largely because we’ve been busy hacking things.   So, we figured that we’d make it up to our readers by posting an article about one of our recent engagements. This is a story about how we covertly breached a highly sensitive network during the delivery of a Platinum level Penetration Test.

First, we should make clear that while this story is technically accurate, certain aspects have been altered to protect our customer’s identity and security. In this case we can’t even tell you if this was for a private or public sector customer. At no point will we ever write an article that would put any of our customers at risk. For the sake of intrigue, let’s call this customer Group X.

The engagement was designed to produce a level of threat that would exceed that which Group X was likely to face in reality. In this case Group X was worried about specific foreign countries breaching their networks. Their concern was not based on any particular threat but instead on trends and what we agreed was reasonable threat intelligence. They were concerned with issues such as watering holes, spear phishing, 0-day malware, etc. They had reason for concern given that their data was and still is critically sensitive.

We began work like any experienced hacker would, by performing social reconnaissance. Social reconnaissance should always be used before technical reconnaissance because it’s passive by design. Social reconnaissance when done right will provide solid intelligence that can be used to help facilitate a breach. In many cases social reconnaissance can eliminate the need for active technical reconnaissance.

Just for the sake of explanation, technical reconnaissance includes active tasks like port scanning, web server scanning, DNS enumeration, etc. Technical reconnaissance is easier to detect because of its active methods. Social reconnaissance, when done right, is almost impossible to detect because it is almost entirely passive. It leverages tools like Google, Maltego, Censys, etc. to gather actionable intelligence about a target prior to attack.

Our social reconnaissance efforts identified Group X’s entire network range, a misconfigured public facing document repository (that did not belong to Group X but was used by them and their partners/vendors), and a series of news articles that were ironically focused on how secure Group X was. One of the articles went so far as to call Group X the “poster child of good security”.

We began by exploring the contents of the aforementioned document repository. The repository appeared to be a central dumping ground for materials Group X wanted to share with third parties, including vendors. While digging through the information, everything appeared to be non-sensitive and mostly intended for public consumption. As we dug further we uncovered a folder called WebServerSupport, and contained within that folder was a file called “encyrypted.zip”. Needless to say, we downloaded the file.

We were able to use a dictionary attack to guess the password for the zip file and extract its contents. The extracted files included a series of web server administration guides complete with usernames, passwords, and URL’s. One of the username, password and URL combinations was for Group X’s main website (https://www.xyxyxyxyxy.com/wp-admin,username,password). When we browsed to https://www.xyxyxyxyxy.com/wp-admin we were able to login using the credentials. With this level of access we knew that it was time to poison the watering hole. (https://en.wikipedia.org/wiki/Watering_Hole)
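The dictionary attack itself boils down to a simple loop. Here is a generic Python sketch; the zip-specific oracle uses the standard `zipfile` module and assumes the legacy ZipCrypto encryption that `zipfile` can read (the function names are ours, invented for illustration):

```python
import zipfile

def dictionary_attack(candidates, try_password):
    """Try each candidate in order; try_password(pw) should return True
    when pw opens the target. Returns the winning password or None."""
    for pw in candidates:
        if try_password(pw):
            return pw
    return None

def make_zip_oracle(zip_path):
    """Build a try_password callable for a ZipCrypto-encrypted archive."""
    def try_password(pw):
        try:
            with zipfile.ZipFile(zip_path) as zf:
                # Reading any member with the candidate password either
                # succeeds or raises on a bad password / corrupt data.
                zf.read(zf.namelist()[0], pwd=pw.encode())
            return True
        except (RuntimeError, zipfile.BadZipFile):
            return False
    return try_password
```

In practice the candidate list is a large wordlist; the loop structure above is the whole attack, which is exactly why weak zip passwords fall so quickly.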

To accomplish this we deployed our malware framework into the webserver (www.xyxyxyxyxy.com). Our framework is specifically designed to allow us to control who is infected. We are able to select targets based on their source IP address and other identifying information. When a desired target connects to the watering hole (infected website) our framework deploys our 0-day pseudo-malware (RADON) into the victim’s computer system. RADON then establishes persistence and connects back to our command and control server. From there we are able to take complete control of the newly infected computer.
Netragard RADON v2.0 Strand Generator
RADON is not the same RADON used by the National Security Agency (NSA), as was speculated by the InfoSec Institute, though it does appear similar in some respects. It relies on side channel communications that cannot be disrupted without breaking core network protocols. It was designed to be far safer than other tools that tend to leave files behind (like Metasploit’s meterpreter). All strands of RADON are built with an expiration date that, when reached, triggers a clean uninstall and renders the original source inert. We designed RADON specifically because we needed a safe, clean, and reliable method to test our customers at high levels of threat.

After the malware framework was deployed and tested, we scheduled it to activate the next business day. The framework was designed to infect one target then sleep until we instructed it to infect the next. This controlled infection methodology helps to maintain stealth. By 9:30 AM EST our first RADON strand called home. When we reviewed the connection we learned that we had successfully infected a desktop belonging to Group X’s CIO’s assistant. We confirmed control and were ready to begin taking the domain (which as it turns out was ridiculously easy).

One of the first things we do after infecting a host is to explore network shares. During this test we quickly located the “scripts” share, which contained all of the login scripts for domain users. What we didn’t expect was that we’d be able to read, write, and otherwise modify every single one of those startup scripts. We were also able to create new scripts, files, and directories within the scripts directory. So we uploaded RADON to the share and added a line to every login script that would run RADON every time a user logged into a computer. We quickly infected everything on the network.
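The misconfiguration that enabled this is straightforward to audit. As a defensive sketch, this Python snippet walks a scripts directory and flags world-writable files; on a Windows domain one would inspect the share and NTFS ACLs instead, so the POSIX permission-bit check below is only the analogue of that audit:

```python
import os
import stat

def find_world_writable(root):
    """Walk root and return files whose mode includes the
    world-writable bit -- i.e. files that any user could modify.
    Login scripts should never appear in this list."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_mode & stat.S_IWOTH:
                hits.append(path)
    return sorted(hits)
```

A periodic run of a check like this (or its ACL-based Windows equivalent) would have caught the writable login scripts long before we did.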

After parsing through the onslaught of new inbound RADON connections we were able to identify personal user accounts belonging to network administrators. As it turned out, most of the administrators’ personal accounts also had domain admin privileges. We leveraged those accounts to download the username and password database (ntds.dit) from the domain controller. Then we used RADON to exfiltrate the password database and dump it onto one of our GPU password-cracking machines. We were able to crack all of the current and historical passwords in less than two hours. What really surprised us was that 90% of the passwords were exactly identical.
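Mass password reuse like this is visible even before any cracking: NT hashes are unsalted, so two accounts with the same password have byte-identical hashes in ntds.dit. A minimal Python sketch of that check (the hash values in the test are opaque placeholders, not real NT hashes):

```python
from collections import defaultdict

def shared_password_groups(accounts):
    """accounts: mapping of username -> unsalted password hash.
    Because the hashes are unsalted, identical hashes imply identical
    passwords; return each shared hash with the accounts that use it."""
    by_hash = defaultdict(list)
    for user, pw_hash in accounts.items():
        by_hash[pw_hash].append(user)
    return {h: users for h, users in by_hash.items() if len(users) > 1}
```

Defenders can run the same grouping over their own extracted hashes to measure password reuse without ever recovering a plaintext password.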

Initially we thought that this was due to an error, but after further investigation we realized that this common password could be used to log in to all of the different domain accounts. It became even more interesting when we began to explore the last password change dates. We found that nearly 100% of the passwords had never been changed and that some of the accounts were over a decade old, still active, but no longer being used by anyone. We later found out that employees who had been terminated never had their accounts deactivated. When we confronted the customer with this, they told us that it was their policy not to change passwords. When we asked them why, they pointed to an article written by Bruce Schneier. (Sorry Bruce, but this isn’t the first time you’ve made us question you.)

At this point in the engagement we had more control over our customer’s infrastructure than they did. We were able to control all of their critical devices including but not limited to antivirus solutions, firewalls, intrusion detection and prevention systems, log correlation systems, switches, routers and, of course, their domain. We accomplished this without triggering a single event and without any suspicion.

The last two tasks that remained were trophy gathering and vulnerability scanning. Trophy gathering was easy given the level of access that we had. We simply searched the network for .pdf, .doc, .docx, .xlsx, etc. and harvested files that looked interesting. We also found about a dozen reports from other penetration testing vendors. Those reports presented Group X’s network as well managed and well protected. The only vulnerabilities that were identified were low and medium level vulnerabilities, none of which were exploitable.

When we completed our final task, which was vulnerability scanning and vetting, our scanners produced results that were nearly identical to the other penetration testing vendor reports that we exfiltrated. Things like deprecated SSL, open ports, etc. were reported but nothing that could realistically lead to a network compromise. When we scanned Group X’s network from the perspective of an Internet based threat, no vulnerabilities were reported. Our scans resulted in their security team becoming excited and proud because they “caught” and “prevented” our intrusion attempt. When we told them to check their domain for a Netragard domain admin account, their excitement was over.

Exploit Acquisition Program Shut Down

We’ve decided to terminate our Exploit Acquisition Program (again). Our motivation for termination revolves around ethics, politics, and our primary business focus. The HackingTeam breach proved that we could not sufficiently vet the ethics and intentions of new buyers. HackingTeam, unbeknownst to us until after their breach, was clearly selling their technology to questionable parties, including but not limited to parties known for human rights violations. While it is not a vendor’s responsibility to control what a buyer does with the acquired product, HackingTeam’s exposed customer list is unacceptable to us. The ethics of that are appalling and we want nothing to do with it.

While EAP was an interesting and viable source of information for Netragard it was not nor has it ever been Netragard’s primary business focus. Netragard’s primary focus has always been the delivery of genuine, realistic threat penetration testing services.  While most penetration testing firms deliver vetted vulnerability scans, we deliver genuine tests that replicate real world malicious actors.  These tests are designed to identify vulnerabilities as well as paths to compromise and help to facilitate solid protective plans for our customers.

It is important to mention that we are still strongly in favor of ethical 0-day development, brokering, and sales. The need for 0-days is very real and the uses are often both ethical and for the greater good. One of the most well-known examples was when the FBI used a Firefox 0-day to target and eventually dismantle a child pornography ring. People who argue that all 0-days are bad are either uneducated about 0-days or have questionable ethics themselves. 0-days are nothing more than useful tools that, when placed in the right hands, can benefit the greater good.

If and when the 0-day market is correctly regulated we will likely revive EAP. The market needs a framework (unlike Wassenaar) that holds the end buyers accountable for their use of the technology (similar to how guns are regulated in the US). It’s important that the regulations do not target 0-days specifically but instead target those who acquire and use them. It is important to remember that hackers don’t create 0-days; software vendors create them during the software development process. 0-day vulnerabilities exist in all major pieces of software, and if the good guys aren’t allowed to find them then the bad guys will.

What real hackers know about the penetration testing industry that you don’t.

The information security industry has become politicized and almost entirely ineffective as is evidenced by the continually increasing number of compromises. The vast majority of security vendors don’t sell security; they sell political solutions designed to satisfy the political security needs of third parties. Those third parties often include regulatory bodies, financial partners, government agencies, etc.   People are more concerned with satisfying the political aspects of security than they are with actually protecting themselves, their assets, or their customers from risk and harm.

For example, the Payment Card Industry Data Security Standard (PCI-DSS) came into existence back on December 15th, 2004. When the standard was created it defined a set of requirements that businesses needed to satisfy in order to be compliant. One of those requirements is that merchants must undergo regular penetration testing. While that requirement sounds good it completely fails to define any realistic measure against which tests should be performed. As a result the requirement is easily satisfied by the most basic vetted vulnerability scan so long as the vendor calls it a penetration test (same is still largely true for PCI 3.0).

To put this into perspective the V0 and V50 ballistics testing standards establish clear requirements for the performance of armor. These requirements take into consideration the velocity of a projectile, size of a projectile, number of strikes, etc. If penetration is achieved when testing against the standards then the armor fails and is not deployable.   If PCI-DSS were used in place of the V0 and V50 standards then it would suffice to test a bulletproof vest with a squirt gun.   In such a case the vest would be considered ready for deployment despite its likely failure in a real world scenario.

This is in part what happened to Target and countless others. Target’s former CEO, Gregg Steinhafel was quoted saying “Target was certified as meeting the standard for the payment card industry (PCI) in September 2013. Nonetheless, we suffered a data breach.” What does that tell us about the protective effectiveness of PCI? What good is a security regulation if it fails to provide the benefit that it was designed to deliver? More importantly, what does that say about the penetration testing industry as a whole?

While regulations are ineffective, it is the customer’s choice to be politically oriented or security focused. In 2014, 80% of Netragard’s customers opted to receive political security testing services (check in the box) rather than genuine security testing services, even after having been educated about the differences between the two. Most businesses consider the political aspect of receiving a check in the box to be a higher priority than good security (this is also true of the public sector).

This political agenda motivates decision makers to select penetration testing vendors (or other security solutions) based on cost rather than quality. Instead of asking intelligent questions about the technical capabilities of a penetration testing team they ask technically irrelevant questions about finances, the types of industries that vendor may have serviced, if a vendor is in Gartner’s magic quadrant, etc. While those questions might provide a vague measure (at best) of vendor health they completely fail to provide any insight into real technical capability.   The irony is that genuine penetration testing services maintain both lower average upfront costs and lower average long-term costs than political penetration testing services.

The lower average upfront cost of genuine penetration testing comes from the diagnostic pricing methodology (called Attack Surface Pricing or ASMap Pricing) that genuine penetration testing vendors depend on. ASMap pricing measures the exact workload requirement by diagnosing every in-scope IP address and Web Application (“Target”) during the quote generation process. Because each Target offers different services, each one also requires a different amount of testing time for real manual testing. ASMap pricing never results in an overcharge or undercharge and is a requirement for genuine manual penetration testing. In fact, diagnostic pricing is the de facto standard for all service based industries with the exclusion of political penetration testing (more on that later).

The lower long-term costs associated with genuine penetration testing stem from the protective nature of genuine penetration testing services. If the cost in damages of a single successful compromise far exceeds the cost of good security, then clearly good security is more cost effective. Compare the average cost in damages of any major compromise to the cost of good security. Good security costs less, period.

Political penetration testing (the industry norm) uses a Count Based Pricing (“CBP”) methodology that almost always results in an overcharge. CBP takes the number of IP addresses that a customer reports to have and multiplies it by a cost per IP. CBP does not diagnose the targets in scope and is a blind pricing methodology. What happens if a customer tells a vendor that they have 100 IP addresses that need testing but only 1 IP address offers any connectable services? If CBP is being used then the customer will be charged for testing all 100 IP addresses when they should only be charged for 1. Is that ethical pricing?
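The arithmetic behind the overcharge is trivial. Here is a sketch of both pricing models applied to the 100-IP scenario above; the rates are invented purely for illustration:

```python
def cbp_price(reported_ip_count, rate_per_ip):
    """Count Based Pricing: blind multiplication, no diagnosis of
    whether the targets are even live."""
    return reported_ip_count * rate_per_ip

def diagnostic_price(hours_per_live_target, hourly_rate):
    """Diagnostic (ASMap-style) pricing: workload measured per live,
    testable target, then priced by the hour."""
    return sum(hours_per_live_target) * hourly_rate
```

With a hypothetical $400 per IP and $200 per hour, 100 reported IP addresses cost $40,000 under CBP, while the single live target needing, say, 10 hours of manual work costs $2,000 under diagnostic pricing.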

A good example of a CBP overcharge happened to one of our customers last year. This customer approached Netragard and another well-known Boston-based firm. The other firm produced a proposal using CBP based on the customer having 64 IP addresses. We produced a proposal using the ASMap methodology. When we presented our proposal to the customer, ours came in over $55,000.00 less than the other vendor’s. When the customer asked us how that was possible, we explained that of their 64 IP addresses only 11 were live. Of the 11, only 2 presented any real testable surface. Needless to say, the other vendor didn’t win the engagement.

CBP cannot be used to price a manual penetration testing engagement because it also runs the risk of undercharging. Any engagement priced with the CBP methodology is dependent on vulnerability scanning. This is because CBP is a blind pricing methodology that does not diagnose workload. If a customer is quoted $5,000 to test 10 IP addresses, CBP simply assumes a fixed workload for those 10 IP addresses.

What happens if each IP address requires 10 hours of manual labor? Engagements priced with CBP rely on automated scanners to compensate for these potential overages and to ensure that the vendor always makes a profit.   Unfortunately this dependence on automated scanning degrades the quality of the engagement significantly.  The political penetration testing industry falsely promises manual services when in fact the final deliverable is more often than not a vetted vulnerability scan. This promotes a false sense of security that all too often leads to compromise.

Customers can choose to be lazy and make naïve, politically oriented security decisions, or they can self-educate, choose good security, and save themselves considerable time and money. While the political security path appears simple and easy at the onset, the unforeseen complexities and potential damages that lie ahead are all too often catastrophic. How much money is your business worth and what are you doing to truly protect it?

We’re offering a challenge to anyone willing to accept. If you think that your network is secure then let us test it with our unrestricted methodology. If we don’t compromise your network then the test is done free of charge. If we do compromise it then you pay cost plus 15%. During the test we expect you to respond the same way that you would to a real threat. We don’t expect to be whitelisted and we don’t expect you to lower your defenses. Before you accept this challenge, let it be known that we’ve never failed. To date our unrestricted methodology maintains a 100% success rate with an average time to compromise of less than 4 hours. Chances are that you won’t know we’re in until it’s too late.

Do you accept?

The Truth About Breaching Retail Networks

How we breached a retail network using our manual penetration testing methodology

We recently delivered an Advanced Persistent Threat (APT) Penetration Test to one of our customers. People who know us know that when we say APT we’re not just using buzzwords. Our APT services maintain a 98% success rate at compromise while our unrestricted methodology maintains a 100% success rate at compromise to date. (In fact, we offer a challenge to back up our stats: if we don’t penetrate with our unrestricted methodology then your test is free.) Let’s begin the story of a large retail customer that wanted our APT services.
When we deliver covert engagements we don’t use the everyday and largely ineffective low and slow methodology.  Instead, we use a realistic offensive methodology that incorporates distributed scanning, the use of custom tools, zero-day malware (RADON) among other things.  We call this methodology Real Time Dynamic Testing™ because it’s delivered in real time and is dynamic.  At the core of our methodology are components normally reserved for vulnerability research and exploit development.  Needless to say, our methodology has teeth.
Our customer (the target) wanted a single /23 attacked during the engagement. The first thing we did was perform reconnaissance against the /23 so that we knew what we were up against. Reconnaissance in this case involved distributed scanning and revealed a large number of HTTP and HTTPS services running on 149 live targets. The majority of the pages were uninteresting and served static content, while a few served dynamic content.
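The recon step above can be sketched as follows. The address range and scanner count here are illustrative (the real /23 is redacted), and the actual probing of each host for HTTP/HTTPS services is omitted:

```python
import ipaddress

def hosts_in_scope(cidr: str):
    """Enumerate the usable host addresses in a target range."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()]

def split_for_distributed_scan(hosts, n_scanners: int):
    """Deal hosts round-robin across scanner nodes so no single
    source IP touches the whole range (harder to correlate)."""
    return [hosts[i::n_scanners] for i in range(n_scanners)]

if __name__ == "__main__":
    hosts = hosts_in_scope("10.0.0.0/23")   # placeholder range
    shards = split_for_distributed_scan(hosts, 8)
    print(len(hosts), [len(s) for s in shards])
```

A /23 yields 510 usable addresses; splitting them across several scanning sources is one common way to keep the recon phase from tripping rate- or source-based alerting.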
While evaluating the dynamic pages we came across one called Make Boss. The application appeared to be custom-built for the purpose of managing software builds. What really snagged our attention was that this application didn’t support any sort of authentication. Instead, anyone who visited the page could use the application.
We quickly noticed that the application allowed us to generate new projects. Then we noticed that we could point those new projects at any SVN or Git repository, local or remote. We also identified a hidden, questionable page named “list-dir.php” that enabled us to list the contents of any directory that the web server had permission to access.
We used “list-dir.php” to enumerate local users by guessing the contents of “C:\document~1” (the 8.3 short name for the Documents and Settings folder). In doing so we identified useful directories like “C:\MakeBoss\Source” and “C:\MakeBoss\Compiled”. The existence of these directories told us that projects were built on, and fetched from, the same server.
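A minimal sketch of how such a listing page gets probed, assuming a hypothetical host name and a hypothetical `dir` query parameter (the post doesn’t give the exact URL format):

```python
from urllib.parse import urlencode

# Hypothetical host and parameter name; illustrative only.
TARGET = "http://makeboss.example.internal/list-dir.php"

# 8.3 short names sidestep quoting issues with "Documents and Settings".
CANDIDATE_DIRS = [
    r"C:\document~1",          # Documents and Settings (user enumeration)
    r"C:\MakeBoss\Source",
    r"C:\MakeBoss\Compiled",
    r"C:\TEMP",
]

def probe_url(directory: str) -> str:
    """Build a listing request for one candidate directory."""
    return TARGET + "?" + urlencode({"dir": directory})

if __name__ == "__main__":
    for d in CANDIDATE_DIRS:
        # In the real test each URL would be fetched and non-empty
        # listings recorded; here we only construct the requests.
        print(probe_url(d))
```

The interesting part is the guessing loop: a page like this turns a handful of well-known Windows paths into a quick map of users and application directories.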
The next step was to see if we could in fact get the Make Boss application to establish a connection with a repository that we controlled. To do this we set up an external listener using netcat in our lab. Then we configured a new project called “_Netragard” in Make Boss in such a way that it would connect to our listener. The test was a success, as shown by the redacted output below.

[[email protected]:~]$ nc -l -p 8888 -v
listening on [any] 8888 …
xx.xx.xx.xx: inverse host lookup failed: Unknown server error : Connection timed out
connect to [xx.xx.xx.xx] from (UNKNOWN) [xx.xx.xx.xx] 1028
Host: lab1.netragard.com:8888
User-Agent: SVN/1.6.4 (r38063) neon/0.28.2
Connection: TE, Keep-Alive
TE: trailers
Content-Type: text/xml
Accept-Encoding: gzip
DAV: http://subversion.tigris.org/xmlns/dav/svn/depth
DAV: http://subversion.tigris.org/xmlns/dav/svn/mergeinfo
DAV: http://subversion.tigris.org/xmlns/dav/svn/log-revprops
Content-Length: 104
Accept-Encoding: gzip
<?xml version="1.0" encoding="utf-8"?><D:options xmlns:D="DAV:"><D:activity-collection-set/></D:options>

With communications verified we set up a real SVN instance and created a weaponized build.bat file. We selected build.bat because we knew that Make Boss would execute it server-side, and if done right we could use it to infect the system. (A good reference for setting up Subversion can be found at http://subversion.apache.org/quick-start.) Our initial attempts at getting execution failed due to file system permissions. We managed to get successful execution of our build.bat by changing our target directory to “C:\TEMP” rather than working from the standard web server directories.
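A harmless stand-in for this step, assuming Make Boss simply executes whatever build.bat it checks out. The file generated below only proves server-side execution (the engagement’s real build.bat instead deployed RADON), and the filenames are illustrative:

```python
import os
import tempfile

# Batch payload that only records who ran it and where, written to the
# one directory we found writable (C:\TEMP) -- proof of execution, nothing more.
PROOF_BAT = "\r\n".join([
    "@echo off",
    "REM proof-of-execution marker for the engagement",
    "whoami > C:\\TEMP\\_netragard_proof.txt",
    "hostname >> C:\\TEMP\\_netragard_proof.txt",
])

def write_build_bat(repo_dir: str) -> str:
    """Drop build.bat into the working copy that will be committed to
    the attacker-controlled SVN repository Make Boss checks out."""
    path = os.path.join(repo_dir, "build.bat")
    with open(path, "w", newline="") as fh:  # keep CRLF line endings
        fh.write(PROOF_BAT)
    return path

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as repo:
        print(write_build_bat(repo))
```

Proving execution with an inert file first, before deploying anything, is the safe ordering: it confirms both the execution path and the writable directory.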
With execution capabilities verified we modified our build.bat file so that it would deploy RADON (our home-grown 0-day pseudo-malware).  We used Make Boss to fetch and run our weaponized build.bat, which in turn infected the server running the Make Boss application.  Within seconds of infection our Command & Control server received a connection from the Make Boss server.  This represented our first point of penetration.
A note about RADON…
RADON is “safe” as far as malware goes because each strand is built with a pre-defined expiration date. During this engagement RADON was set to expire 5 days after strand generation. When RADON expires, it quietly and cleanly self-destructs, leaving the infected system in its original state, which is more than can be said for other “whitehat” frameworks (like Metasploit, etc.).
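The expiration behavior reduces to a simple check. The 5-day TTL comes from the engagement described here; the function and field names below are invented for illustration:

```python
from datetime import datetime, timedelta

EXPIRY = timedelta(days=5)  # per-engagement setting from this test

def is_expired(built_at: datetime, now: datetime,
               ttl: timedelta = EXPIRY) -> bool:
    """Each strand carries its build timestamp; past the TTL the
    implant self-destructs instead of checking in."""
    return now >= built_at + ttl

def agent_tick(built_at: datetime, now: datetime) -> str:
    if is_expired(built_at, now):
        # remove persistence, wipe artifacts, exit (sketched only)
        return "self-destruct"
    return "check-in"
```

Baking the deadline into each strand means cleanup does not depend on the Command & Control server being reachable, which is what makes the expiry a genuine safety property rather than a remote kill switch.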
RADON is also unique in that it’s designed for our highest-threat engagements (nation-state style). By design RADON communicates over both known and unknown covert channels. Known channels are used for normal operation while covert channels are reserved for more specialized engagements. All variants of RADON can be switched from known to covert channels, and vice versa, from the Command & Control server.
Finally, it’s almost impossible to disrupt communication between RADON and its Command & Control center.  This is in part because of the way that RADON leverages key protocols that all networks depend on to operate.  Because of this, disrupting RADON’s covert channels would also disrupt all network functionality.
Back to the hack…
With the system infected by RADON we were able to take administrative control of the Make Boss server. From there we identified domain administrator credentials that the server was happy to relinquish. We used those credentials to authenticate to the domain controller and extract all current and historical password hashes. Then we used one of our specialized GPU password-cracking machines to process the hashes and deliver us the keys to the kingdom.
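The cracking step, in miniature. MD5 stands in for the NT hash format here (NT hashes are MD4 over the UTF-16LE password, and MD4 isn’t reliably available in Python’s hashlib); the mechanics are the same for any unsalted hash, and the usernames and passwords below are invented:

```python
import hashlib

def crack(hashes: dict, wordlist) -> dict:
    """Dictionary attack against unsalted hashes.
    hashes: {username: hex_digest}.  Returns {username: password}.

    Because the hashes are unsalted, one pass over the wordlist builds
    a lookup that cracks every account sharing a password -- which is
    why GPU rigs chew through entire domain dumps so quickly.
    """
    lookup = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}
    return {user: lookup[h] for user, h in hashes.items() if h in lookup}

if __name__ == "__main__":
    dump = {"svc_build": hashlib.md5(b"Winter2014!").hexdigest()}
    print(crack(dump, ["password", "Winter2014!", "letmein"]))
```

Historical hashes matter too: old passwords often reveal the pattern a user still follows, turning a stale credential into a template for the current one.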
With that accomplished we had established dominant network position. From this position we were able to propagate RADON to all endpoints and effect an irrecoverable network compromise. Irrecoverable if we were the bad guys, of course, but luckily we’re the good guys and our customer recovered just fine. Nevertheless, we had access to everything including, but not limited to, desktops, points of sale, web servers, databases, network devices, etc.
Not surprisingly, our customer’s managed security service provider didn’t detect any of our activity, not even the mass infection. They did, however, detect what we did next…
As a last step, and to satisfy our customer, we ran two different popular vulnerability scanners. These are the same scanners that most penetration testing vendors rely on to deliver their services. One of the scanners is more network-centric while the other combines network and web application scanning. Neither scanner identified a single viable vulnerability despite the existence of the (blatantly obvious) one that we exploited above. The only things reported were informational findings like “port 80 open”, “deprecated SSL”, etc.
It’s really important to consider this when thinking about the breaches suffered by businesses like Hannaford, Sony, Target, Home Depot and so many others. If the penetration tests that you receive are based on the product of vulnerability scanners, and those scanners fail to detect the most obvious vulnerabilities, then where does that leave you? Don’t be fooled by testers who promise to deliver “manual penetration tests” either. In most cases they just vet scan reports and call the process of vetting “manual testing”, which it isn’t.

What you don’t know about compliance…

People are always mystified by how hackers break into major networks like Target, Hannaford, and Sony (government networks included). They always seem to be under the impression that hackers have some elite level of skill. The truth is that it doesn’t take any skill to break into most networks because they aren’t actually protected. Most network owners don’t care about security because they don’t perceive the threat as real. They suffer from the “it won’t ever happen to me” syndrome.
As a genuine penetration testing company we take on dozens of new opportunities per month. Amazingly, roughly 80% of businesses that request services don’t want quality security testing; they want a simple check in the compliance box. They perceive quality security testing as an unnecessary and costly annoyance that stands in the way of new revenue. These businesses test because they are required to, not because they want to. These requirements stem from partners, customers, and regulations that include, but are not limited to, PCI-DSS, HIPAA, etc.
Unfortunately, these requirements make the problem worse rather than better. For example, while PCI requires merchants to receive penetration tests, it completely fails to provide any effective or realistic baseline against which to measure the test results. This is also true of HIPAA and other third-party testing requirements. To put this into perspective, if the National Institute of Justice set its V50 or V0 standards in the same manner, then it would be adequate and acceptable to test bulletproof vests with squirt guns. Some might argue that poor testing is better than nothing, but we’d disagree. Testing at less than realistic levels of threat does nothing to prevent the real threat from penetrating.
Shoddy testing requirements and a general false sense of security have combined to create a market where check-in-the-box needs take priority over genuine security. Vendors that sell into this market compete on cost, free service add-ons, and free software licenses rather than quality of service and team capability, and they price illogically based on IP count. Most testing vendors exacerbate the problem by falsely advertising compliance testing (check-in-the-box) services as best quality. This creates and perpetuates a false sense of security among non-expert customers and also lures in customers who have a genuine security need.
The dangers associated with this are evidenced by the many businesses that have suffered damaging compromises despite being in compliance with various regulations. The Target breach (certified as PCI compliant by Trustwave) is just one high-profile example. Target’s former CEO, Gregg Steinhafel, was quoted as saying, “Target was certified as meeting the standard for the payment card industry (PCI) in September 2013. Nonetheless, we suffered a data breach”. Another high-profile example is the Hannaford breach (Rapid7’s customer at the time) back in 2008. Hannaford, like Target, claims that it too was PCI compliant.
It’s our responsibility as security experts to deliver truth to our customers rather than to bank on their lack of expertise. Sure, we’re in this to make money, but we also have an ethical responsibility. If we take the time to educate our customers about the differences between compliance testing and genuine penetration testing and they still select compliance testing, then that’s fine (it’s their risk). But if we lie to our customers and sell them compliance testing while asserting that it’s best in class, then we should be held responsible. After all, it’s our job to protect people, isn’t it?
The irony is that compliance testing typically costs more than genuine penetration testing because it uses an arbitrary, count-based pricing methodology. Specifically, if a customer has 10 IP addresses but only 1 of those IP addresses is live, the customer will still be billed for testing all 10. Genuine penetration testing costs less because it uses an Attack Surface Pricing (ASP) methodology. If a customer has 10 IP addresses and only one is live, ASP will identify that and the customer will only be charged for that 1 IP. Moreover, the customer will be charged based on the complexity of the services provided by that one IP.
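The arithmetic behind the two models can be illustrated with hypothetical rates (the post gives the principle, not actual prices, so every number below is made up):

```python
# Hypothetical rates for illustration only.
PER_IP_RATE = 500      # flat per-address compliance-style rate
ASP_BASE_RATE = 500    # base rate per *live* host under ASP

def count_based_price(total_ips: int) -> int:
    """Compliance-style pricing: every address in scope is billed,
    whether or not anything is actually running on it."""
    return total_ips * PER_IP_RATE

def asp_price(live_hosts: dict) -> int:
    """Attack Surface Pricing: only live hosts are billed, scaled by
    a complexity multiplier (e.g. a static page vs. a custom web app).
    live_hosts: {address: complexity_multiplier}."""
    return sum(int(ASP_BASE_RATE * mult) for mult in live_hosts.values())

if __name__ == "__main__":
    # 10 addresses in scope, but only 1 live host, running a
    # moderately complex application (multiplier 2.0):
    print(count_based_price(10))           # bills all 10 addresses
    print(asp_price({"10.0.0.5": 2.0}))    # bills only the live host
```

Even with the live host billed at double the base rate for its complexity, the attack-surface model in this scenario comes out well under the count-based quote.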
If the Return on Investment (RoI) of good security is equal to the cost in damages of a single successful compromise and if quality penetration testing services cost less (on average) than compliance testing services, doesn’t it make sense to purchase quality penetration testing services?