How to Protect Against Modern Ransomware Attacks

In 2019, over half of businesses were victims of ransomware attacks, with an average cost of $761,106. In 2020, attacks grew even worse, with an estimated total price tag of $20 billion. Successful ransomware attacks are growing increasingly common despite the dozens of solutions that claim to provide 100% protection against ransomware. So, what’s going wrong?

Ransomware “Solutions” Aren’t Working

Most companies are aware of the threat of ransomware and have taken steps to protect against it. However, the number of successful attacks demonstrates that these approaches aren’t working. Most common anti-ransomware solutions fail because they don’t address the real problem.

Anti-Phishing Training

Many organizations’ cybersecurity awareness training discusses the threat of ransomware and how to protect against it. It covers the risks of phishing emails and why it’s important not to click on a link or open a suspicious attachment, and it pushes the benefits of antivirus. However, ransomware attacks are still occurring and, in fact, growing more common. The reason is that most anti-ransomware training and strategies are not aligned with today’s real threat.

In 2020, the main way in which organizations were infected by ransomware was not via email or other automated processes. Instead, it was by human actors manually targeting and penetrating organizations using software and tools such as the Remote Desktop Protocol (RDP) or Virtual Private Networks (VPNs) with credentials purchased on the darkweb. In cases where the credentials didn’t work, the operators leveraged brute-force attacks. These aren’t “fire and forget” phishing emails designed to drop ransomware on a target system. They’re human-driven campaigns where an attacker gains access to an organization’s network, explores it, exfiltrates sensitive data, and runs ransomware exactly where and when they want to.

Endpoint Protection

Ransomware is malware, so anti-malware solutions (a.k.a. endpoint protection solutions) seem like the perfect protection against ransomware. In theory, installing and frequently running an up-to-date endpoint protection solution should fix the problem, but does it?

While endpoint solutions can defeat most known variants of malware, they can be evaded with relative ease. To effectively detect malware, these solutions must have intelligence about the malware in advance of a real-world encounter. When a new, never-before-seen variant of malware surfaces (zero-day malware), the effectiveness of these solutions is marginal at best. Complicating things further, attackers often test their malware against endpoint security solutions in advance of deployment to ensure that it remains fully undetectable.

What’s more problematic is that it takes organizations an average of 280 days to detect a data breach, while it takes attackers less than 30 minutes to establish what amounts to an irrevocable foothold. This means that the attackers can explore victim networks for an extended period of time, steal credentials, deploy additional malware, and more. Given this fact, breached organizations cannot realistically guarantee the security or safety of their networks without a complete overhaul.

Backups

Backups can be an invaluable tool for recovering from a ransomware attack. The traditional ransomware model is based on denying access to data. Assuming that your backup is very recent and wasn’t encrypted as well, then it can be cheaper and easier to restore from it than to pay the ransom.

The problem is that ransomware gangs know this too and have adapted their tactics. In recent years, ransomware gangs have begun performing “double extortion” attacks, which involve data theft on top of the data encryption. If the victim refuses to pay the ransom, then their data is posted publicly or sold to the highest bidder.

These types of attacks mean that relying on backups is not an effective strategy. Regulators don’t care that you’ve restored your data if the exposed data is protected by law. On the bright side, if you don’t have backups, double extortion attacks mean that you can restore your data by downloading a copy, just like everybody else!

Paying the Ransom

Some companies take the approach of paying the ransom demand. In theory, this puts an end to the problem by allowing them to restore their data and making the cybercriminals go away. In reality, this approach does not always work. In some cases, ransomware gangs fail to hand over the decryption key when the ransom is paid. In others, the promised decryptor doesn’t work as well as advertised. This was the case in the recent Colonial Pipeline breach, where the company shelled out $4.4 million for a decryptor that was so slow that the company went back to restoring from backups.

Making the Colonial Pipeline breach even more interesting is that, for the first time ever, the FBI was able to recover most of the funds. To pay the ransom, Colonial exchanged ~$4.4 million for 75 Bitcoin (BTC) and transferred the BTC to one of the DarkSide wallets. In short order, the FBI obtained the private key belonging to that specific wallet and recovered 63.7 of the 75 BTC. This may sound like a victory, but between the time the ransom was paid and recovered, the value of BTC declined sharply. As a result, the recovered 63.7 BTC was worth only ~$2.3 million, leaving a loss of roughly $2.1 million. Moreover, it’s very likely that any data that was stolen will be published.

Paying a ransom also doesn’t mean that the cybercriminals will go away. In fact, it labels a company as a mark that’s willing to pay up. We’ve witnessed this firsthand. Just recently, a new customer engaged Netragard because they had been the victim of ransomware attacks three times by the same group over the span of four years. Our consulting team helped them drastically improve their overall security posture and prevent a fourth incident.

These breaches never go without at least some public notice, even if a victim pays up. Attackers often advertise their victims on the darkweb, which entices other attackers either to buy access to their networks or to attack them as “soft” targets. Two screenshots of such sites are provided below as examples.

Wall of Shame

The Modern Ransomware Attack

Cybercrime has become a business, and that business is maturing. A major part of this increased maturity is the emergence of role specialization on a macro scale. Not all cybercriminals are wunderkinds who can do everything. Instead, cybercrime groups are specializing and forming their own “as a Service” economy.

The modern ransomware threat landscape is a perfect example of this. Today’s ransomware campaigns are broken up into two main stages: gaining access and achieving objectives.

Increasingly, groups like DarkSide, the operation behind the recent Colonial Pipeline hack, are offering “Ransomware as a Service”. They create the ransomware, and other teams (specialized in gaining access to corporate networks) deliver it. Alternatively, a cybercrime group will gain a foothold in an enterprise network and sell it to someone else to use. This is likely what happened in the Equifax hack and is a common part of ransomware operations today.

This evolution of the ransomware campaign creates significant challenges for enterprise cybersecurity. A defense strategy built around antivirus and “don’t click on the link” training won’t deter a professional, well-researched attack campaign. Having a strong lock on the front door doesn’t help much if they come in through the back window.

Ransomware Attack Prevention

If traditional approaches to ransomware prevention are not effective, then what is?

Modern ransomware attacks are human-driven. Sophisticated cybercriminals can gain entry to a network in a variety of ways, including many that a vulnerability scanner, an industry-standard penetration test, or an anti-phishing solution will never catch.

Preventing these types of breaches requires forward-thinking intelligence about how today’s threat is most likely to align with an organization’s existing points of risk and exposure. The most effective way to gather this intelligence is to experience a real-world attack at the hands of a qualified team that you trust and control. This is where Realistic Threat Penetration Testing comes into play. Realistic Threat Penetration Tests are not provided by most penetration testing firms and are notably different from Red Team engagements. Some of the key characteristics include, but are not limited to:

  • The ability to match or exceed the level of threat being produced by today’s bad actors.
  • Utilizing human experience & expertise with little to no dependency on tools like automated vulnerability scanners or commercial off-the-shelf testing tools. Ideally, the team should be composed of professionals with demonstrable expertise in performing vulnerability research and zero-day exploit development.
  • The use of custom-built pseudo-malware to simulate ransomware or other malware. Pseudo-malware should deliver the same or better capabilities than what the real-world threat actors are using and must be fully undetectable (covert). The primary difference between malware and pseudo-malware is that pseudo-malware is built with safety in mind which includes automated clean removal capabilities at a pre-defined expiration date.
  • Leveraging experts who understand the inner workings of various security technologies (e.g., EDRs, application whitelisting, antivirus) to help ensure successful subversion and/or evasion.
  • The ability to develop new exploits on-the-fly with minimal risk and minimal detection.
  • The ability to erect a doppelganger infrastructure, including SSL certificates and services, to help facilitate advanced phishing.
  • And more…

The product of a Realistic Threat Penetration Test is a technically detailed report that contains the intelligence required to defend against bad actors. This intelligence generally includes information about what vulnerabilities exist, areas where lateral and/or horizontal movement are possible, misconfigurations, gaps in detection capabilities, suggestions for hardening and defending, and more. Of course, the report is the starting point for building a plan and a roadmap to remediate the weaknesses and make the job harder, if not impossible, for the bad actors!

To learn more about Realistic Threat Penetration Testing, and how to render your environments more secure, please contact Netragard at [email protected] or [email protected]

Protecting Your Business From Your Remote Workforce

A significant portion of your workforce is currently moving to perform full- or part-time remote work as a result of COVID-19.  As you modify your business processes and workflows to accommodate this change, it’s important to understand how remote work affects your cybersecurity posture and what openings and opportunities exist for cybercriminals to take advantage of you.  We would like to take this opportunity to provide advice on how to orient your security posture to account for this increased threat vector and illustrate several common patterns of weakness.

Virtual Private Networks (VPNs)

Long touted as the safest and most-reliable way to enable remote work, Virtual Private Networks (VPNs) allow a user to access internal enterprise resources and applications from any internet connection.  VPN connections are encrypted, preventing untrusted network operators (such as your local coffee shop) from snooping on sensitive traffic, but they don’t solve every security problem.

The risks:
  • VPNs weaken the network boundary by allowing additional devices into the most vulnerable part of a company’s IT infrastructure – its internal network
  • Compromised user accounts can give attackers direct access to many internal resources
  • Granting VPN access to untrusted devices is equivalent to plugging that device directly into your network, along with any infections it might have

The more users who utilize your VPN, the more likely it is that you are giving an attacker access to your internal network by way of a compromised user device.  When VPN access is allowed from non-corporate-provisioned machines, this risk is even greater.  If an attacker does gain this access, it can be devastating because internal networks are frequently the most vulnerable part of an enterprise.

Our recommendations:
  • Create a separate User Account specifically for VPN access for each user
  • Place VPN user accounts into a restricted Organizational Unit with as few privileges as possible. For example, if you run Citrix, only allow VPN user accounts to sign onto Citrix desktops.
  • Set up Two-Factor Authentication (2FA) for all users and VPN user accounts to increase difficulty for attackers
  • Install a Honeypot on your internal network to help identify suspicious network activity coming from remotely connected devices
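The honeypot recommendation can be sketched in a few lines. The following is a hypothetical minimal example; real deployments should use purpose-built honeypot tools, and the listener design here is purely illustrative:

```python
# Minimal honeypot sketch: listen on an otherwise unused port and record
# every connection. Nothing legitimate should ever connect to it, so each
# hit is an alert worth investigating.
import socket

def open_honeypot(host="0.0.0.0", port=2222):
    """Bind a listening socket on an otherwise unused port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    return srv

def watch(srv, max_hits=1):
    """Record the source address of each connection; alert on every hit."""
    hits = []
    for _ in range(max_hits):
        conn, addr = srv.accept()
        hits.append(addr[0])  # in practice: log and raise an alert here
        conn.close()
    return hits
```

Because the port serves no real purpose, any connection to it almost certainly indicates scanning or lateral movement from a compromised device.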

The Vexing VPN - in a split tunnel, security solutions only see traffic destined for the enterprise.
A Note on VPN Configurations:

VPNs also have the option to perform Full or “Split” tunneling.  Full tunneling forces all network traffic to go over the VPN connection, including traffic unrelated to the corporate network such as YouTube or Skype.  In a split tunnel VPN, only traffic destined for internal corporate services travels over the VPN connection.

A split tunnel is therefore less secure than a full tunnel configuration because, in a full tunnel, your remote users are still protected by your existing network security appliances such as content filters and/or next-gen firewalls.  This comes with an expensive tradeoff, though: you must have enough bandwidth to serve all of your users’ browsing habits!
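As an illustration, here is how the two modes might look in an OpenVPN server configuration. The internal subnet shown is an assumption for illustration; other VPN products expose equivalent settings under different names:

```
# Full tunnel: force ALL client traffic through the VPN, keeping remote
# users behind your content filters and next-gen firewalls.
push "redirect-gateway def1 bypass-dhcp"

# Split tunnel: push routes only for internal corporate subnets; all other
# traffic (YouTube, Skype, ...) goes straight to the internet.
# The 10.10.0.0/16 subnet is an illustrative assumption.
push "route 10.10.0.0 255.255.0.0"
```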

Two Factor Authentication (2FA)

It’s extremely important that you have 2FA deployed within your organization.  It helps prevent compromise when user credentials are leaked as a part of a breach and makes it more difficult to obtain user credentials through phishing attacks.  With that said, you should be aware that 2FA is not a silver bullet for protecting user credentials on all services because 2FA can be bypassed when user devices have been compromised.

Two Factor Hangover

The risks:
  • Compromised devices which are used to prompt the user for a 2FA token may relay the token to an attacker
  • Compromised devices may allow an attacker to steal session information and impersonate affected users

As an example, by stealing or intercepting a session cookie for a service to which the user has already authenticated, an attacker may gain direct access to the application without needing to authenticate. Many applications (e.g., cloud-based email, collaboration tools) do not tie their session cookie to a single device, source IP, or location because, if they did, roaming mobile users would have to reauthenticate as their device switches from WiFi to 4G or 5G connections. As a result, it is usually possible for an attacker to reuse the same session as a legitimate user.
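As a sketch of why this works, replaying a stolen session cookie can be as simple as attaching it to a request. The service URL, cookie name, and cookie value below are invented for illustration:

```python
# Hypothetical sketch: replaying a stolen session cookie. The cookie alone
# authenticates the request as the victim; no username, password, or 2FA
# token is involved.
import urllib.request

stolen_cookie = "SESSIONID=d41d8cd98f00b204e9800998ecf8427e"  # captured from the victim's device

req = urllib.request.Request(
    "https://mail.example.com/inbox",
    headers={"Cookie": stolen_cookie},
)
# urllib.request.urlopen(req) would now ride the victim's existing session.
```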

Our recommendations:
  • Monitor your application logs for access from suspicious geographical locations unrelated to your typical user or business locations
  • Do not share sensitive information such as passwords in email or chat
  • Train your employees to report suspicious activity such as disappearing incoming email, email switching from read to unread without explanation, or unexpected password-reset emails
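The first recommendation above can be sketched in a few lines. The log format and the expected-country set are assumptions for illustration; real monitoring would draw on your SIEM or application logs:

```python
# Minimal sketch: flag sign-ins from countries outside your normal
# business footprint.
EXPECTED_COUNTRIES = {"US", "CA"}  # illustrative; use your real footprint

def flag_suspicious(logins):
    """Return sign-in events whose source country is unexpected."""
    return [event for event in logins if event["country"] not in EXPECTED_COUNTRIES]

logins = [
    {"user": "alice", "country": "US", "time": "2020-04-01T09:14:00"},
    {"user": "alice", "country": "RU", "time": "2020-04-01T09:16:00"},
]
# Two sign-ins minutes apart from different countries is a classic red flag.
suspicious = flag_suspicious(logins)
```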

Endpoint Security

When your users work from home, they have a greater exposure to cybersecurity threats because inevitably they will be using their devices for both business and pleasure.  This increased usage is even more dangerous when paired with a split-tunnel VPN which does not force browser traffic to flow through enterprise security appliances and controls.

The risks:
  • Antivirus/Antimalware solutions can be bypassed more easily as users are outside of the protections of enterprise networks
  • Traffic visibility may be significantly reduced
  • Users will use their devices for personal browsing/activities which increases their exposure

Since your users will be using their devices more (regardless of whether they are corporate or personal), they are more likely to encounter threats. This makes patching and antivirus updates critical, but potentially unreliable if you do not use a VPN or if you allow personal devices on the network.

Our recommendations:
  • Provide up-to-date devices configured with more aggressive security profiles to high-risk individuals such as Executives and Executive Assistant staff
  • Closely monitor inbound and outbound connections on your remote devices
  • Step up social engineering defense training to help combat COVID-19 related scams
  • Educate your employees not to store or share credentials outside of password safe solutions such as 1Password, KeePass, LastPass, or Dashlane.

Final Words:

Even when lockdowns and restrictions around the coronavirus are lifted, the volume of remote workers is likely to increase.  As we’ve shown, remote users are at increased risk: they are outside of your enterprise security appliances, they encounter more threats by using the same devices for both business and pleasure, and they aren’t necessarily covered by existing security controls.  With this in mind, it’s important to be proactive: set up increased logging, provide updated and secured devices to high-risk individuals within your organization, and limit the access that users have through VPN connections.

We hope that you stay safe, both online and off, and that you keep us in mind if you’re seeking to audit your remote worker security solutions.  In the coming week, we will be providing pricing packages specifically designed around auditing remote work solutions.

AI Series Part 2: Social Media and the Rise of “Echo Chambers”


AI Series Part 2 of 6

This is the second post in a series discussing AI and its impacts on modern life. In this article, we’ll explore how AI is used in social media and the ramifications of training AI while defining “success” based upon the “wrong” metrics.

Social Media Is not Free

Social media platforms that offer “free” services aren’t actually free. These companies need to make a profit and pay their staff, so all of them must have some form of revenue stream.

In most cases, this revenue stream is to “sell” or otherwise monetize data about their users. For advertisers, knowing about their consumer population and being able to target their advertisements to particular individuals and groups is extremely valuable.

If an organization has limited advertising dollars, they want to put their advertisements and products in front of the people that are most likely to buy them. While some products may have “universal” appeal, others are intended for niche markets (think video games, hiking gear, maternity clothes, etc.).

Social media platforms give advertisers access to their desired target markets. By observing their users and how they interact with the advertisements and other content on the site, these platforms can make good and “educated” guesses about the products that a particular user could or would be interested in and is likely to purchase. By selling access to this data to advertisers, social media both makes a profit and acts as a matchmaker for advertisers and their desired target markets.

Defining “Success” for Social Media Platforms

Most social media platforms are paid based on the number of advertisements that they are able to present to their users. The more advertisements that a particular user views, the more profitable they are to these platforms.

Maximizing the time that a user spends on a social media platform requires the ability to measure the user’s “engagement” with the content. The more interested the user is, the more likely that they’ll spend time on the platform and make it more money.

The ways that social media platforms measure engagement have evolved over the years. Early on, the focus was on the amount of content that a particular user clicked on. This success metric resulted in the creation of “clickbait” to lure users into continually clicking on new content and links and spending time on the platform.

However, over time users have grown increasingly tired of clicking on things that look anything like clickbait. While they may be willing to spend hours on a particular platform, they want their interactions to have some level of substance. This prompted an evolution in how these platforms defined “successful” and “engaging” content.

Giving the User What They Want

The modern goal of social media platforms is to provide users with content that they find “valuable”. The belief is that continually showing users high-value content incentivizes them to spend time on the site (making the platform more advertising money), react, comment, share, and draw in the attention of their connections.

However, measuring “value” is difficult without clear metrics. To make the system work, these platforms measure the value of content based upon the amount that a user engages with a post.

This is where AI comes into the picture. The social media platform’s content management engine observes user behavior and updates its ranking system accordingly. The posts that receive the most likes, comments, etc. are ranked as more “valuable” and have a higher probability of being shown to users. In contrast, the posts that receive negative feedback (“don’t show me this again”, etc.) are shown less often.
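A toy version of such a ranking system might look like the following. The signal weights are invented for illustration; real platforms learn them from observed user behavior rather than hard-coding them:

```python
# Simplified engagement-based ranker: posts are scored by the reactions
# they attract, and the feed is sorted by that score.
def engagement_score(post):
    return (post["likes"]
            + 2 * post["comments"]        # comments signal stronger engagement
            + 3 * post["shares"]          # shares spread content to new users
            - 5 * post["hide_requests"])  # "don't show me this again"

posts = [
    {"id": 1, "likes": 10, "comments": 2, "shares": 0, "hide_requests": 0},
    {"id": 2, "likes": 3,  "comments": 1, "shares": 4, "hide_requests": 0},
    {"id": 3, "likes": 50, "comments": 0, "shares": 0, "hide_requests": 12},
]
# Highest-scoring posts are shown first.
feed = sorted(posts, key=engagement_score, reverse=True)
```

Note that post 3 has the most likes yet ranks last: a handful of negative signals outweighs raw popularity, exactly the kind of behavior these ranking systems are tuned to produce.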

Social Echo Chambers

In theory, this approach should make truly valuable content bubble to the top. In practice, people tend to respond most strongly (i.e. posting comments, likes, complaints, etc.) to content that they feel strongly about. As a result, polarizing content tends to score well under these schemes as people show their support for adorable cats and the political party of their choice and complain about “fake news” (whether or not it is actually fake).

In order to keep users engaged, an AI-based system using user behavior as a metric will naturally create “echo chambers”, where users will only see posts that align with what they already believe. The primary goal of social media platforms is to keep their users happy and engaged, and “echo chambers” are an effective way of achieving this.
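This feedback loop can be demonstrated with a toy simulation. The topics and engagement probabilities below are invented for illustration:

```python
# Toy feedback loop: showing users more of what they engage with narrows
# the feed over time (the "echo chamber" effect).
import random

random.seed(42)
# Probability that this user engages with each topic (illustrative values).
affinity = {"cats": 0.9, "local_news": 0.4, "opposing_views": 0.1}
weights = {topic: 1.0 for topic in affinity}  # start with a balanced feed

for _ in range(2000):
    topics = list(weights)
    # Show a topic in proportion to its current weight...
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # ...and reinforce it whenever the user engages with it.
    if random.random() < affinity[shown]:
        weights[shown] += 0.1

# After enough iterations, high-affinity topics dominate the feed and
# low-affinity ("opposing") content almost never appears.
```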

The Bottom Line on AI and Social Media

AI is a crucial component of modern social media, but it is important to consider who this AI is really designed to benefit. Social media platforms, like any other business, are driven by the need to make a profit and keep shareholders happy. AI in social media is designed to accomplish this goal by feeding as many ads as possible to their users.

AI Series Part 1: Introduction to the Modern Threats of AI

AI Series Part 1 of 6

This is the first post in a series discussing AI and its impacts on modern life. Artificial Intelligence is useful, powerful, and dangerous when used irresponsibly. It’s being leveraged by a wide variety of industries, including but not limited to social media, defense contractors, and information security companies. Some of the dangers created by the use of AI are overt while others are very subtle. For example, the ongoing rapid development of autonomous weapons is overt, while the use of AI in social media is subtle and possibly more damaging. The AI used in various social media platforms is in part responsible for the current divide here in the United States.

Introduction to AI

Artificial intelligence (AI) occupies an unusual position in the public consciousness. On the one hand, every cybersecurity solution – and a number of solutions in other tech industries as well – seems to contain “AI”, a claim that carries varying levels of truth. On the other hand, many of the things that most people think of as AI – such as general intelligence and conscious robots – have not yet been created.

While “traditional” popular conceptions of AI are still in the future, AI is a very real part of our daily lives today. AI already shapes how people think and behave – often without their knowledge – and other parts of daily life.

This article launches a series on the modern use of AI. This series discusses some of the ways in which AI is commonly used today and the cybersecurity considerations of AI-based systems.

The Modern Threats of AI

AI has effects on many different aspects of daily life. Some of the biggest areas where AI plays a critical role in society include:

  • Social Media: Social media is a core part of many peoples’ lives. These “free” platforms make massive profits by monetizing their users’ attention and their personal data. AI is a core part of how these platforms optimize their content to maximize the time that their users spend on them and the amount of targeted advertising that they are able to sell.
  • Facial Recognition: Facial recognition systems are a contentious topic as these systems are used by law enforcement and other organizations to automatically identify and track individuals. However, the AI behind these systems is extremely – and potentially unintentionally – biased by how they are made and trained.
  • Automated Content Creation: Trust in the digital world is driven by consensus. Articles supporting certain viewpoints and reviews or comments on pages can have a significant impact on peoples’ worldviews and how they respond to the content. Modern AI is increasingly capable of generating automated, “plausible” content and human pictures, making it possible to rapidly generate fake content or reviews that look like they came from a real human being.

These three topics will be the focus of the next few articles in this series. In each, we will dive into the details of how AI is used in that particular scenario, what these systems do right, and where they go wrong.

AI and Cybersecurity

As we become more reliant on AI as part of our daily lives, it is also important to consider the security of AI systems. What makes an infosec AI system effective or ineffective? Is it possible to “hack” AI in ways beyond standard IT security?

The final two articles in this series will deal with the security of AI systems:

  • Hacking AI: AI systems are designed to learn and create their own decision-making models. This self-learning process, while essential to the growth and development of AI, also makes these systems vulnerable to exploitation.
  • Fixing AI: Implicit biases, underspecification, and deliberate exploitation can cause AI systems to learn to make the wrong decisions or to make decisions in the “wrong” way. Fixing and securing AI requires an understanding both of how it can be broken and the steps that can be taken to improve and secure it.

Protecting the AI-Driven Enterprise

Being “data driven” is a goal of most organizations, and AI systems are a crucial part of accomplishing this. As organizations continue to develop and deploy AI solutions, it is essential to understand the capabilities of AI and where things can go wrong.

This series dives into the modern use of AI. It explores how AI is used today, the risks and benefits to its creators and other parties, and the security considerations of AI-based systems.


SolarWinds, SOX, and Corporate Responsibility for Cybersecurity

By now, most everyone has heard of the SolarWinds breach. Cybercriminals took advantage of SolarWinds’ poor cybersecurity practices to gain access to their network and implant malicious code within updates to their Orion network monitoring solution.

This Orion solution is widely used, and its compromise led to the attackers gaining access to the networks of many large enterprises and a significant percentage of US government agencies. As a result, intellectual property and sensitive government data have been compromised, and much of it is being sold online. Investigations into the incident are still ongoing.

SolarWinds and SOX Disclosures

The SolarWinds breach has likely caused significant reputational and financial damage to the organization. The damage caused by SolarWinds’ negligence is widespread, and the company will likely be the defendant in numerous lawsuits regarding the breach.

A recent class action lawsuit filed against the company’s leadership by SolarWinds shareholders demonstrates the potentially far-reaching impacts of such a breach. As a publicly-traded company, SolarWinds is subject to the Sarbanes-Oxley Act (SOX), which was passed in response to the Enron scandal to protect investors. Under SOX, a company’s CEO and CFO must sign an attestation that publicly-released statements regarding the company’s financial status are correct.

The lawsuit against SolarWinds focuses on a statement in SolarWinds’ 2019 10-K filing that acknowledges the risk of cyberattacks to the company. Based on this statement, the company acknowledges that this risk exists, that steps should be taken to mitigate this risk, and that any breach should be reported to shareholders.

SolarWinds was initially breached on September 4th, 2019, but the breach was not reported until December of the following year. Since the company filed multiple 10-Q statements in the interim with no reference to the breach, the plaintiffs in the SOX case allege that SolarWinds was negligent in managing its cybersecurity risk. Additionally, investigation into the incident revealed other instances of cybersecurity negligence, such as the use of the password “solarwinds123” on the SolarWinds update server.

SolarWinds attack timeline
Source: SolarWinds

SOX Disclosures and the Cost of Poor Cybersecurity Due Diligence

Obviously, SolarWinds’ CEO and CFO are not directly responsible for detecting and remediating security incidents within their organization. However, they do hold overall responsibility, and the SOX Act allows them to be held personally responsible for misleading or false statements within SOX disclosures.

Any organization can suffer a security breach, but it is the responsibility of a company’s leadership to ensure that due diligence is performed to prevent incidents like the SolarWinds breach. SolarWinds failed to do their due diligence in two crucial ways:

  1. Internal Cybersecurity Failures: As SolarWinds mentions in their 10-K, it is impossible to fully protect against cybersecurity threats. However, the company failed to follow even the most basic cybersecurity best practices as demonstrated by the use of a blatantly insecure password (solarwinds123) on its update server.
  2. Failure to Perform Proper Security Testing: Passing a Penetration Test is not proof of strong cybersecurity, as demonstrated by Trustwave’s certification of Target before the 2013 breach. However, a Penetration Test should have detected the use of such a weak password on the update server. This oversight demonstrates a failure to perform proper due diligence on the part of both SolarWinds and any organization that performed a Penetration Test for the company.

Taking Responsibility for Corporate Cybersecurity

The class action lawsuit against SolarWinds – if successful – creates a strong precedent for holding corporate executives personally responsible for their companies’ security failures. Under the SOX Act, executives can face 10 years in prison and a $1 million fine for signing off on misleading statements, and 20 years and $5 million if the deception was willful.

In cybersecurity, as in any field, mistakes can be made, and companies can be breached despite their best efforts. However, making a “good faith” effort toward strong corporate cybersecurity – including contracting regular Penetration Tests by a competent testing firm – is essential to earning forgiveness for cybersecurity failures. The appearance of good security isn’t the same as the real thing.

The Security Risks Behind Voting Machines & Mail-in Ballots

In recent months, the security of absentee voting, widely used due to the threat of the COVID-19 pandemic, has been called into question. But are these processes any less secure than the electronic voting systems used on a “normal” election day?

Is Mail-In Voting Safe?

Introduction to Electronic Voting System Security

Electronic voting systems come in a number of different forms. At the polls, a voter may experience a few different types of voting systems:

  • Paper Ballots: Paper ballot systems have voters fill out ballots by hand with paper and pens/pencils or hole punches. These ballots may then be scanned in order to rapidly tally votes.
  • Electronic Systems: Purely electronic systems allow voters to vote on a touchscreen computer. In some states, votes are only stored and tallied electronically with no backups.
  • Hybrid Systems: Some systems allow voters to cast votes on a touchscreen and then print a paper ballot for them to verify. This leaves a paper trail of their choices; however, one study indicated that 94% of voters didn’t notice when their votes had been changed.

Known Security Issues of Electronic Voting Systems

Electronic voting machines have a number of different security issues, many of them known for over a decade. The issues with electronic voting and the challenges of fixing them have been demonstrated by a number of different cases, including:

  • Insecure Voting Machines: An assessment of the security of over 100 voting machines at the 2019 DEFCON conference found that all of them contained exploitable vulnerabilities, including weak default passwords, built-in backdoors, etc.
  • Lack of Support for Penetration Testing: Security assessments of voting machines are limited by a lack of manufacturer support, and interpretations of the Computer Fraud and Abuse Act that make such assessments illegal. An amicus brief to the Supreme Court regarding the case advocated for limiting security research to researchers authorized by the company under test, enabling the company to conceal any findings.
  • Use of Outdated Software: A survey of 56 election commissions and Secretaries of State, completed in July and August 2019, found that over half of voting systems in use ran Windows Server 2008 R2, which reached end-of-life on January 14, 2020.

These issues point to the conclusion that a determined attacker could easily breach US election infrastructure. The most likely reason this has not occurred is that no threat actor has chosen to do so. In fact, Russia is believed to have gained access to voter registration systems in several states in 2016 but chose not to act on that access.

However, this lack of discovered breaches may have resulted from a lack of looking for them. In 2018, Netragard performed an analysis of the Crosscheck system designed to detect voters casting multiple ballots in different jurisdictions. Based upon analysis of public information, several vulnerabilities were discovered, but they could not be followed up on because hacking election infrastructure is illegal.

After hearing of the assessment, a Kansas official claimed that our team “didn’t succeed in hacking it.” Later, a Kansas legislator claimed that a “complete scan” found no evidence of attackers exploiting the vulnerabilities to breach the system. This is despite the fact that no vulnerability scan can detect a breach and that there is no evidence a digital forensics investigation was ever performed to identify one.

At the end of the day, the answer to the question of whether or not a hacker could breach US election infrastructure is “almost certainly”. However, no evidence exists of this occurring, potentially because no conclusive investigation has been performed.

Introduction to Mail-In Ballot Security

In most states, voting via an absentee or mail-in ballot is a two-step process. The first step is submitting an absentee ballot request. If this request is validated, an absentee ballot is sent to the voter’s registered address to be completed and returned via mail or an election dropbox.

The validation steps for absentee ballot requests and ballots vary from state to state. Each state performs at least one (and often several) of the following checks:

  • Envelope Verification: A ballot is only valid if returned in the official envelope. All ballots returned in a different envelope are discarded.
  • Signature Verification: Many states require a signed affidavit by the voter, and, in some states, election officials compare the signatures on the ballot and on a voter’s official registration. Mismatched signatures are the most common method by which voter fraud is detected.
  • Voter Identification: Many states will require a voter to submit some form of identification with their ballot, such as a photocopy of their driver’s license or part of their Social Security Number (SSN).
  • Witness Signature: Some states require the signatures of one or more witnesses or a public notary on a mail-in ballot.

Known Security Issues of Mail-In Ballots

The Heritage Foundation keeps a record of every case of alleged voter fraud that has been reported to date. This database includes a variety of different voting crimes, including fraudulent registrations, misuse of absentee voting, coercion of voters at the polls, and more. To date the Heritage Foundation has recorded 1,298 cases of alleged voter fraud between 1988 and 2020, though some of its claims are unsupported or incorrect.

Of these 1,298 cases, the Heritage Foundation claims that 207 individuals have been involved in 153 distinct cases of voter fraud that involved the use of absentee ballots. Of these cases, 39 (involving 66 individuals) have included a deliberate attempt to change the results of an election. Other cases involve people voting for a recently deceased spouse or relative, a single person voting twice in different jurisdictions, using a previous mailing address on a ballot, mailing in the ballot of a non-relative (which is illegal in many jurisdictions), and other small-scale errors or attempts at fraud.

In general, attempts to change the results of an election via mail-in voter fraud have focused on local elections with a small margin. One of the larger cases of fraud on record (Miguel Hernandez, 2017) involved an individual forging absentee ballot requests and collecting and mailing the ballots after the voters had completed them. This incident included only 700 mail-in votes, and the actual voting was performed by the authorized voters. Even if Hernandez forged the votes, the impact on a US Presidential election would be negligible.

For comparison, over 125 million votes were cast in the 2016 election. According to the Heritage Foundation, there were six attempts at absentee ballot fraud in the 2016 Presidential Election:

  • Audrey Cook voted on behalf of her deceased husband
  • Steven Curtis (head of Colorado Republican Party) forged his wife’s signature on her ballot
  • Terri Lynn Rote tried to vote twice due to her fear that the election was rigged
  • Marjory Gale voted for herself and her daughter who was away at college
  • Randy Allen Jumper voted twice in two different jurisdictions
  • Bret Warren stole and submitted five absentee ballots; the voters who never received their ballots complained and were allowed to cast provisional ballots

These cases are clear examples of voter fraud in the 2016 election. However, even if they had gone undetected and all been cast the same way, ten votes are unlikely to have any impact on the election. In fact, an election commission was formed to look into claims that 3-5 million fraudulent votes were cast in the 2016 election; it was disbanded with no findings.

Comparing Electronic Voting Systems and Mail-In Ballot Security

At the end of the day, there is no evidence of election interference or voter fraud using electronic voting machines or mail-in ballots. While six counts of misuse of absentee ballots were detected in the 2016 Presidential election, they comprised a total of ten votes.

If anything, the threat of glitches in electronic voting machines should be considered a major threat to election security. In 2019, analysis of the paper record of a “glitchy” voting machine in a local Pennsylvania election revealed that a candidate credited with only 15 recorded votes had actually won the election by over 1,000.

While mail-in ballots have their issues (like an overburdened postal system), electronic voting machines are much less secure and reliable. The fact that an unknown number of electronic voting systems are connected to the Internet, making them accessible to hackers and vulnerable to malware, creates a much greater exposure to election meddlers than absentee ballots, which must be physically collected and filled out to be used in fraud.

Inside the 2020 Ping of Death Vulnerability

What is the 2020 Ping of Death?

Ping of Death vulnerabilities are nothing new. These vulnerabilities arise from issues in memory allocation in the TCP/IP stack. If memory is improperly allocated and managed, a buffer overflow vulnerability can be created that leaves the application vulnerable to exploitation.

The original Ping of Death was discovered in 1997 and was the result of an implementation error in how operating systems handled IPv4 ICMP packets. ICMP ECHO_REQUEST packets (aka ping) are intended to be 64 bytes, but this length was not enforced. Any ping packet with a length greater than 65,535 bytes (the maximum value of the 16-bit length field) would cause a system to crash.
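The arithmetic behind the original attack is straightforward: IP fragmentation lets a sender describe a reassembled packet larger than the 65,535-byte maximum implied by the 16-bit total-length field. A quick sketch (the 100-byte final payload is an arbitrary illustrative value):

```python
# Sketch: why a "ping" can exceed the IPv4 maximum packet size.
# The IPv4 fragment offset field is 13 bits wide and counts 8-byte
# units, so the final fragment can start as deep as 8191 * 8 bytes
# into the reassembled packet.

MAX_IPV4_PACKET = 65_535          # limit of the 16-bit total-length field
MAX_FRAGMENT_OFFSET = 8_191 * 8   # 13-bit offset field, 8-byte units

# A final fragment placed at the maximum offset with (say) 100 bytes
# of payload forces the receiver to reassemble a packet larger than
# any buffer sized for the "legal" maximum.
final_fragment_payload = 100
reassembled_size = MAX_FRAGMENT_OFFSET + final_fragment_payload

print(f"Reassembled size: {reassembled_size} bytes")
print(f"Bytes past the 65,535-byte limit: {reassembled_size - MAX_IPV4_PACKET}")
```

An unpatched 1997-era TCP/IP stack allocated its reassembly buffer for the legal maximum, so those overflow bytes landed past the end of the buffer and crashed the system.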

In August 2011, Microsoft fixed another Denial of Service vulnerability in its TCP/IP stack that occurred when processing a sequence of specially crafted Internet Control Message Protocol (ICMP) messages.

In August 2013, a third ping of death vulnerability was announced and patched in the Windows operating system. This time it was specific to the IPv6 protocol.

Yesterday (October 2020), Microsoft revealed its second IPv6 Ping of Death vulnerability as part of its October Patch Tuesday release. Exploitation of this vulnerability could allow an attacker to perform a Denial of Service attack against an application and potentially achieve remote code execution.

Inside the 2020 Ping of Death Vulnerability

2020 Ping of Death Technical Details

The Ping of Death vulnerability arises from an issue in how Microsoft’s tcpip.sys implements the Recursive DNS Server (RDNSS) option in IPv6 router advertisement packets. This option is intended to provide a list of available recursive DNS servers.

The issue that creates the Ping of Death vulnerability is that tcpip.sys does not properly handle the possibility that the router advertisement packet contains more data than it should. Microsoft’s implementation trusts the length field in the packet and allocates memory accordingly on the stack.

An unsafe copy of data into this allocated buffer creates the potential for a buffer overflow attack. This enables the attacker to overwrite other variables on the stack, including control flow information such as the program’s return address.

How the Vulnerability Can Be Exploited

In theory, the buffer overflow vulnerability can be exploited to achieve a couple of different goals:

  1. Denial of Service: Exploitation of the buffer overflow vulnerability enables “stack smashing” that can crash the application.
  2. Remote Code Execution: Using return-oriented programming, a buffer overflow exploit could cause a function to return to and execute attacker-provided shellcode.

In practice, a Denial of Service attack is the most likely use for this exploit. In order to perform a successful Denial of Service attack, all an attacker needs to do is attempt to write outside of the memory accessible to it (triggering a segmentation fault) or to overwrite a critical value within the program stack.

One of these key values is the stack canary, which is also one of the reasons why exploitation of this vulnerability is unlikely to allow RCE. A stack canary is a random value placed on the stack that is designed to detect attempts to overwrite the function return address via a buffer overflow attack. Before attempting to return from a function (by going to the location indicated by the return address), a protected program checks to see if the value of the stack canary is correct. If so, execution continues. If not, the program is terminated.
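The canary check described above can be illustrated with a toy model. This is not how real memory works (and Python itself is memory-safe); the sketch simply simulates a frame layout of buffer, canary, and return address to show why an unchecked overflow trips the check:

```python
import secrets

# Toy model of a stack frame protected by a canary. Layout (low to
# high addresses): [ 16-byte buffer | 8-byte canary | 8-byte return address ].
# Purely illustrative; real stack protection lives in compiled code.

CANARY = secrets.token_bytes(8)  # random per-run value

def make_frame() -> bytearray:
    return bytearray(16) + bytearray(CANARY) + bytearray(8)

def write_buffer(frame: bytearray, data: bytes) -> None:
    # Deliberately unsafe copy: no bounds check, like an unchecked
    # memcpy into a 16-byte stack buffer.
    frame[:len(data)] = data

def function_return(frame: bytearray) -> str:
    # Before "returning", a protected program verifies the canary.
    if bytes(frame[16:24]) != CANARY:
        return "terminated: stack smashing detected"
    return "returned normally"

frame = make_frame()
write_buffer(frame, b"A" * 8)    # fits within the buffer
print(function_return(frame))    # returned normally

frame = make_frame()
write_buffer(frame, b"A" * 32)   # overflow clobbers the canary
print(function_return(frame))    # terminated: stack smashing detected
```

Note that the overflowing write also reached the return-address bytes; the canary check catches the corruption before that address is ever used, which is why overwriting it usually yields a crash (DoS) rather than code execution.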

The existence of a stack canary makes it more difficult to exploit the vulnerability for RCE, and the use of Address Space Layout Randomization (ASLR), which makes functions useful to attackers harder to locate in memory, compounds that difficulty. However, it is possible to bypass both of these protections in certain cases, so an exploit may yet be developed that enables the 2020 version of the Ping of Death to be used for RCE. If so, the repercussions could be severe, as tcpip.sys is a kernel-level module within the Windows operating system.

Ping of Death in the Wild

A patch for this vulnerability was included in the October 2020 Patch Tuesday release of updates. At the time, the vulnerability had not been publicly disclosed, meaning that (theoretically) no one outside Microsoft knew about it or could have developed an exploit.

Based on the Microsoft description of the vulnerability, a Proof of Concept for using it for a DoS attack has already been created. Additionally, the vulnerability has been given an exploitability value of 1, meaning that it is very likely to be exploited but has not yet been observed in the wild.

This means that we can expect to see DoS attacks using this vulnerability shortly, and the potential exists that an attacker will successfully create an RCE exploit as well. If that happens, the exploit’s wormability makes it likely to be used to spread ransomware and similar malware, much as WannaCry spread via the EternalBlue exploit.

Protecting Against the 2020 Ping of Death

The vulnerability in tcpip.sys was patched in an update included in the October 2020 Patch Tuesday release. Installing this update will fix the vulnerability and protect a system from exploitation.

Beyond installing the update, it is a good idea to minimize your attack surface by disabling unnecessary functionality. If you do not currently use IPv6 in general or RDNSS in particular, disabling them can eliminate the exploitability of this and any other vulnerabilities in Microsoft’s implementation. Instructions for doing so are included in Microsoft’s description of the vulnerability.

Inside Zerologon

What is the Zerologon Vulnerability?

Zerologon is a vulnerability in the Windows netlogon protocol (on Windows Server version 2008 and later) discovered by Tom Tervoort of Secura during a security review of the protocol (which had not previously undergone such a review).  Due to cryptographic and implementation errors in the protocol, an attacker can falsely authenticate and elevate their privileges to Domain Admin.  This has a number of potential impacts including:

  • Full Network Control: With Domain Administrator access, the attacker has full control over the network.
  • Credential Compromise: Netlogon enables an attacker to extract user account credentials for offline password cracking.
  • Credential Stuffing: Passwords compromised via netlogon are likely to be used on other accounts, enabling an attacker to access bank accounts, social media, etc.
  • Initial Access: With the access provided by netlogon, an attacker could steal sensitive data, deploy ransomware, etc.
  • Denial of Service Attack: Zerologon enables an attacker to change a password in Active Directory but not in the registry or LSASS.  This means that services on a rebooted machine may no longer function.

Technical Details of Zerologon

Zerologon exploits a vulnerability in the netlogon authentication process, which is performed as follows:

  1. Server and client each generate and exchange a random 8-byte challenge
  2. The shared session key is generated as the first sixteen bytes of SHA256(MD4(domain password),challenges)
  3. The client proves possession of the session key by encrypting the original challenge it generated with the new session key and sending the result to the server.

The zerologon vulnerability arises due to the fact that Windows netlogon uses an insecure variant of the cipher feedback (CFB) block cipher mode of operation with AES.

Normally, CFB mode encrypts 16-byte chunks of the plaintext.  This enables the encryption or decryption of data longer than the standard 16-byte AES block size.  It starts by taking a random initialization vector, encrypting it, and XORing the result with the plaintext of a block to create that block’s ciphertext.  This ciphertext is then used as the input to encryption for the next block, and so on.


[Diagram: standard CFB mode encryption (via Secura)]

The vulnerable version of CFB mode used in Windows netlogon (called CFB8) performs encryption one byte at a time.  To do so, it takes the following steps:

  1. The initialization vector is encrypted using AES.
  2. The first byte of the result is XORed with the first byte of the plaintext.
  3. The resulting ciphertext byte is appended to the end of the IV.
  4. The first byte of the IV is dropped, leaving the register the same length as the original IV.
  5. Steps 1-4 are repeated until the entire plaintext is encrypted.
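The steps above can be sketched in a few lines of Python. Since the standard library has no AES, a SHA-256-based stand-in plays the role of the block cipher here; the byte-at-a-time feedback structure is what the sketch is meant to show, not the cipher itself:

```python
import hashlib

BLOCK = 16  # AES block size in bytes

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for single-block AES encryption (Python's standard
    # library has no AES); only the CFB8 structure matters here.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cfb8_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    ciphertext = bytearray()
    shift_reg = bytearray(iv)  # register initialized with the IV
    for pt_byte in plaintext:
        keystream = toy_block_encrypt(key, bytes(shift_reg))  # step 1
        ct_byte = keystream[0] ^ pt_byte                      # step 2
        shift_reg.append(ct_byte)                             # step 3
        shift_reg.pop(0)                                      # step 4
        ciphertext.append(ct_byte)                            # repeat (step 5)
    return bytes(ciphertext)

ct = cfb8_encrypt(b"k" * 16, b"\x00" * 16, b"hello netlogon")
print(ct.hex())
```

Each ciphertext byte feeds back into the register, so the keystream for every byte depends on all the ciphertext bytes before it.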

While the use of a non-standard encryption algorithm is bad enough, the netlogon protocol made another critical error.  The initialization vector is hard-coded to zero, not random like it should be.

This creates a vulnerability if the first byte of encryption produces a ciphertext byte of zero (which occurs with 1/256 probability).  If this is the case, then the result of encryption will always be all zeros.  This is because the input to the encryption algorithm will always be the same (since the IV is all zeros and the value appended to the end during each round of encryption is a zero).

This makes it possible for an attacker to authenticate via netlogon with no knowledge of the domain password.  By trying to authenticate repeatedly to the system with an all-zero challenge, the attacker can trigger the 1/256 chance that the shared secret (that they don’t know) encrypts the first byte of the challenge to a zero.  They can then trivially generate the encrypted challenge (step 3 in the netlogon authentication process) since it is all zeros.
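This collapse to all zeros can be demonstrated with a small simulation. As above, a SHA-256-based stand-in replaces AES (the standard library has none), but the 1-in-256 behavior is the same: keep trying random session keys until one encrypts the first byte to zero, at which point the entire CFB8 ciphertext becomes zeros:

```python
import hashlib
import secrets

BLOCK = 16

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # SHA-256-based stand-in for AES; the 1/256 behavior only requires
    # that the first output byte be uniformly distributed.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cfb8_zero_iv(key: bytes, n: int) -> bytes:
    # CFB8 with the IV hard-coded to zero, encrypting n zero bytes,
    # mirroring the all-zero challenge a Zerologon attacker submits.
    reg = bytearray(BLOCK)  # all-zero IV
    out = bytearray()
    for _ in range(n):
        ct = toy_block_encrypt(key, bytes(reg))[0] ^ 0x00  # zero plaintext
        reg.append(ct)
        reg.pop(0)
        out.append(ct)
    return bytes(out)

# Retry with fresh random "session keys" until one encrypts the first
# byte to zero; from then on the register stays all zeros, so the whole
# ciphertext is zeros. Expected number of attempts: about 256.
found_key = None
for attempts in range(1, 100_000):
    key = secrets.token_bytes(16)
    if cfb8_zero_iv(key, 8) == b"\x00" * 8:
        found_key = key
        break

print(f"All-zero ciphertext after {attempts} attempts")
```

In the real attack the unknown key is the session key derived from the domain password; the attacker simply repeats the authentication handshake until this 1/256 case occurs.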

How Zerologon Can Be Exploited

The Zerologon vulnerability, by itself, only enables an attacker to successfully authenticate to the domain controller and encrypt all-zero plaintexts.  However, this is enough to successfully call the NetrServerPasswordSet2 function, which is designed to change the server password.  This function takes the following parameters:

  • Original client challenge plus the current time in POSIX notation
  • Random data
  • New password
  • New password length

Of these, the original client challenge, the random data, and the new password are easily set to all zeros.  In theory, the server should verify the current time and disallow a zero-length password.  However, it does neither, making it possible to set the domain controller’s password to empty.

While changing this password does not enable the attacker to log into the machine, it does enable them to access the Domain Replication Service Protocol.  This enables them to extract the password hashes of domain administrator accounts and generate Kerberos golden tickets.  Additionally, these hashes could be used in a pass-the-hash attack to log into the Domain Controller as Domain Administrator and reset the password manually.  This provides the attacker with full access and control over the network.

However, this is not even the only way to exploit the Netlogon vulnerability.  A writeup by Dirk-jan Mollema describes another method that takes advantage of the NTLM protocol to gain Domain Administrator access without changing a password (which can crash services).  However, this version of the exploit requires two vulnerable domain controllers, an available domain user account, and a print spooler service running on a DC and accessible from the network (default domain controller configuration).

Zerologon Exploitation in the Wild

The patch for Zerologon was released in August 2020, and the details of the vulnerability weren’t publicly announced until September 2020.  In theory, this provided organizations with ample opportunity to apply the patch and eliminate the vulnerability.

In practice, many organizations have not applied the patch, leaving them vulnerable to exploitation.  Microsoft publicly announced that they have detected active exploitation of the vulnerability, and the Department of Homeland Security (DHS) issued a directive on September 18th requiring federal agencies to patch the issue by September 21st (i.e. the following Monday).

This urgency is due to the fact that the vulnerability was expected to be actively exploited by cybercriminals.  This belief is backed up by a report from Tenable that multiple different exploit executables had been uploaded to VirusTotal.

Protecting Against Zerologon

The Zerologon vulnerability is patched in the August 2020 set of Windows updates, and is blocked by some endpoint security solutions.  Microsoft recommends taking the following steps to fix the issue:

  1. Update Domain Controllers with the patch released in August 2020.
  2. Monitor patched Domain Controller logs for event IDs 5827, 5828, and 5829.  These events indicate a client that is using a vulnerable netlogon secure channel connection and requires either a Windows or manufacturer update.
  3. Enable Domain Controller Enforcement Mode for additional visibility and protection.

After patching known domain controllers and other known affected systems, it might be wise to undergo a penetration test to discover other potentially vulnerable devices.  The vulnerability affects most versions of Windows Server, which can be deployed in a number of different environments and contexts.

What You Need to Know About Penetration Testing Liability

Penetration tests are designed to identify potential gaps in an organization’s cybersecurity.  However, even an effective penetration test carries a variety of risks.  Before engaging a penetration test provider, it is essential to understand the risks of penetration tests, how to minimize them, and why a good penetration testing firm will not be able to accept liability for actions performed in good faith.


A Good Penetration Test Carries the Potential for Damages and/or Outages

No reputable penetration testing firm will guarantee that its services are entirely safe.  Any provider that does so is likely either being deceptive or using testing tools so ineffective as to be essentially worthless.

The reason why safety cannot be guaranteed is that many computing systems and programs are fairly unstable during normal operations.  How many times have you had Microsoft Office or Excel crash and cause data loss during normal use?  If these and other programs were completely reliable, Microsoft wouldn’t have bothered developing Autosave.

If these systems are so unstable during “normal” use, consider the expected impacts of the very unusual conditions that they will be subjected to during a pentesting engagement.  Penetration testing is designed to identify the bugs in software that an attacker would exploit as part of their attacks.  The best way to locate and determine the potential impact of these vulnerabilities is to use the same tools and techniques that a real attacker would.  This poking and prodding, while carried out with the best of intentions, falls outside the definition of “normal” use for this software.

While the probability that a penetration test will cause a significant failure or damage is less than 1%, it is still possible.  For this reason, when undergoing a penetration test designed to provide an accurate assessment of an organization’s systems and cyber risk, it is impossible for the testing provider to accept liability for outages and other damages caused by reasonable testing activities performed in good faith.

The Level of Risk Depends on the Services Provided

All penetration tests carry some risk of outages or other damages to the systems under test.  However, not all penetration tests are created equal, and different types of tests carry varying levels of risk.  Depending on the type of test performed, an organization may need to accept a higher level of risk and different types of risk.

Automated Tests are Higher Risk

One of the main determinants of risk in a penetration test is the type of penetration testing services provided.  Basic tests, which rely heavily upon automation, are much riskier than realistic threat penetration testing.

Some penetration test providers rely heavily upon automated tools such as scanners and exploitation frameworks to reduce the manual work required during a test.  While this may improve the speed, cost, and scalability of the test, it does so at the cost of significantly increased risk.  The scripted tests performed by these tools launch an attack if a system “appears” to be vulnerable without checking for strange or risky conditions.  This dramatically increases the probability that a system will experience a memory error or other issue that will cause the program to crash.

Realistic threat penetration testing, on the other hand, carries a lower level of risk because it realistically emulates what a skilled attacker would do when attempting to exploit the system.  Cybercriminals attacking a system try to avoid detection and use tools and techniques designed to minimize the probability of being discovered before they achieve their objectives.  Testing driven by human talent, experience, and expertise is more likely to avoid potential damage and outages, and to minimize risk, than a penetration test that relies more heavily on automation.

Basic Tests Carry Long-Term Risk

The risks associated with a penetration test are not limited to potential damages and outages.  A poorly-performed penetration test also carries the potential for long-term risks to an organization.

Companies commonly undergo penetration testing to fulfill regulatory requirements and often select the most basic test available in an attempt to “check the box” for compliance.  While these basic tests may earn a compliant rating, they do little to measure the organization’s true cybersecurity risk.

In the long term, this reliance upon basic tests carries significant cost to an organization.  Companies like Equifax, Target, Sony, Hannaford, and the Home Depot all were tested as compliant with applicable regulations yet suffered damaging data breaches.  In fact, the CEO of Target said, “Target was certified as meeting the standard for the payment card industry in September 2013. Nonetheless, we suffered a data breach.”

The ROI of good security is equal to the cost in damages of a single successful compromise.  You can add the cost of all of the security technologies and testing to the cost in damages as well because those things did not prevent the breach.

Is Your Vendor Delivering Genuine Penetration Testing Services?

Determining whether or not a penetration testing vendor is offering automated or manual testing services may seem difficult.  However, looking at how the provider calculates the cost of an assessment can be an easy way to accomplish this.

Any penetration testing provider will need to scope the size of the project before providing a quote.  The questions that they ask and actions performed during the scoping phase can help to determine the types of services that they provide.

Vendors that provide services dependent upon automation will ask questions like “how many IP addresses do you have?” or “how many pages is your application?”.  These questions are important to them since they determine the number of scans that they expect to perform.

However, these kinds of questions do not provide a realistic assessment of the complexity of the penetration testing engagement.  If a customer tells a vendor that they have 10 IP addresses and a web application with 10 pages, the vendor might bill them $500.00 per IP and $1,000.00 per page, totaling $15,000.00.  While that price sounds reasonable at face value, what happens if none of the 10 IP addresses hosts any connectable services?  In workload terms, that is 0 man-hours.  Despite this, the customer in our example would still pay $5,000.00 for 0 hours of work.  Does that sound reasonable?

The inverse is also true and is the exact reason why these types of vendors rely on automated scanning (rather than manual testing) for service delivery.  What happens if each of the 10 IP addresses requires 40 hours of work, totaling 400 man-hours?  The cost would still be the $5,000.00 quoted, which means that the vendor would need to work at an effective rate of $12.50 an hour!  Of course, no vendor will work for that rate, so they compensate for the overage with automation and don’t tell their customer.
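The arithmetic above can be checked with a quick sketch (the $500/IP and $1,000/page figures are the hypothetical rates from this example, not real market pricing):

```python
# Hypothetical per-target rates from the example above (not real prices).
ips, pages = 10, 10
ip_rate, page_rate = 500, 1_000

quote = ips * ip_rate + pages * page_rate   # total quoted price

# Scenario 1: none of the 10 IPs hosts a connectable service.
# The customer still pays the network portion of the quote for zero work.
network_fee = ips * ip_rate                 # $5,000 for 0 man-hours

# Scenario 2: every IP requires 40 hours of manual testing.
network_hours = ips * 40                    # 400 man-hours
effective_rate = network_fee / network_hours

print(f"Total quote: ${quote:,}")
print(f"Network fee with zero connectable services: ${network_fee:,}")
print(f"Effective hourly rate at {network_hours} hours: ${effective_rate:.2f}/hr")
```

Either way the per-target price is fixed, so the quote bears no relationship to the actual effort required, which is exactly the mismatch automation is used to paper over.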

The same is true for web pages.  It is entirely possible to have 10 static web pages that cannot be attacked because they take no input.  It is also possible to have 10 web pages each of which takes significant input and could require even more than 40 hours to test per page.  So, as a general rule of thumb, companies looking for penetration testing services should avoid vendors that price based on the count of targets (IP addresses, pages, etc).  These vendors are generally the ones that deliver basic tests packaged as advanced tests.  They also face higher liability as was demonstrated when Trustwave was sued by banks for certifying Target.

Why a Penetration Testing Firm Can’t Accept Liability

A good penetration testing firm cannot accept liability for damages and outages caused by their services.  While a good, manual penetration test does everything that it can to minimize the potential for something to go wrong, unforeseen circumstances can cause unanticipated issues.

Some classic examples of penetration tests that went wrong in unexpected ways are the recent cases of Coalfire and Nissan.

In Coalfire’s case, penetration testers were hired to perform an assessment of the physical security of an Iowa courthouse building.  Due to a misunderstanding in the terms of the engagement, the penetration testers were arrested and faced felony charges when found testing the security of the courthouse after hours.

The Nissan case, on the other hand, is an example of a penetration test that was commissioned under false pretenses.  The VP of the company had the test performed without the knowledge of the CEO or key IT personnel.  The results of the test were used to gain access to the email account of the company chairman and access data used to bring charges against him for financial misconduct.

A penetration testing vendor cannot accept liability for potential damages caused by the test because some aspects are completely outside of their control.  This is why you shouldn’t be alarmed to see a release of liability statement in a penetration testing agreement and might have cause for concern if a vendor provides a contract that lacks such a clause.

How To Scope a Penetration Test (The Right Way)

How to Define the Scope of Your Next Pentest Engagement

One of the most important factors in the success of a penetration test is its scope.  Scope limitations are an understandable and even common desire.  However, they can make the results of a pentest worse than useless by providing a false sense of security.  Read on to learn why it is important to work with and trust your pentest provider when scoping your next pentest engagement.


Scope Limitation

One of the most common challenges when defining the terms of a penetration test is scope limitation.  When deciding which techniques and systems are “fair game” and which are “off limits”, organizations often narrow the scope of the engagement too far.

In some cases, a scope limitation makes sense.  For example, forbidding destructive techniques on production systems is an understandable limitation, since those techniques can negatively impact business operations.

However, other scope limitations are more harmful than helpful.  A common example is forbidding the use of social engineering during a penetration test to avoid hurting the feelings of employees who might fall for the attack.  Since social engineering is one of the tactics most commonly used by cybercriminals, ignoring it as a potential attack vector places the company at risk.

Scope limitations enable an organization to decrease the cost of a penetration test and avoid uncomfortable attack vectors.  However, this comes at the cost of the effectiveness of the exercise and the accuracy of the results.

The Crown Jewels

Often, customers will focus on their “crown jewels” when defining the scope of their pentest engagement.  This focus makes perfect sense, since these databases and systems are essential to the company’s ability to do business.  Additionally, like the layers of an onion, the further you move from the core (the crown jewels), the larger the scope can get and the more expensive the pentest engagement can become.

The problem is that, while limiting the scope of an engagement to a few security-hardened servers may be sufficient for compliance purposes, it could also give a false sense of security.

Inside the Head of an Attacker

Attackers are lazy. They will almost always choose the path of least resistance.

Compromise rarely begins by directly attacking a fully patched Active Directory server. Developing a zero-day exploit is time-consuming, expensive, and not worth the effort for attacking most organizations.

Instead, an attacker will likely start their search for an entry point by looking at an organization’s legacy systems.  Even if these systems contain little of value, they are more likely to contain vulnerabilities that make it easier for an attacker to compromise them.  Once they have established a foothold inside the network, it is much easier for an attacker to move laterally within the network and pivot to better-secured systems.

We take the same approach when planning and performing a penetration test.  Any vulnerability scanner can check to see if your “crown jewel” systems have an exploitable vulnerability, and you’ve probably already performed the scan and fixed the issues before calling us in.  As penetration testers, we look for the easy but often-overlooked attack vectors that a real attacker is likely to take advantage of.  In the past, we’ve fully compromised networks via printers, IoT devices, legacy systems ready to be decommissioned, etc.

Your Network is Only as Secure as Your Least Secure Asset

If you don’t believe this, take a look at these two real-world examples from previous penetration testing engagements.

From a Public Printer to Full Control

One pentest engagement started with an audit of the security of an organization’s open guest network and ended with full access to the domain.

From this guest network, we connected to a multi-function printer (MFP) that was configured with an email account so it could send scanned documents via email.  It turned out that the email account was also a valid domain user account, and that this account had been added to the “Remote Access” domain group.

After discovering a few VPN configuration files in open network shares, we used the compromised account to connect to the corporate VPN (which had no 2FA).  From there, we were able to pivot, escalate privileges, and gain full administrative access to the domain.

All this from an insecure printer connected to the guest WiFi network…
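One defensive takeaway from this engagement: plaintext credentials sitting in open network shares are a gift to an attacker who has only a foothold.  Below is a minimal sketch of a sweep a defender could run against a mounted share to flag config files containing credential-like lines.  The file extensions and regex patterns are illustrative assumptions, not an exhaustive list.

```python
import re
from pathlib import Path

# Illustrative patterns that commonly indicate hard-coded credentials.
CRED_PATTERNS = [
    re.compile(r'(?i)\bpassword\s*[=:]\s*\S+'),
    re.compile(r'(?i)\bsecret\s*[=:]\s*\S+'),
    re.compile(r'(?i)\bapi[_-]?key\s*[=:]\s*\S+'),
]

# Illustrative set of config-file extensions worth inspecting.
CONFIG_SUFFIXES = {'.conf', '.cfg', '.ini', '.xml', '.txt', '.ovpn'}

def find_exposed_credentials(root):
    """Walk a share (mounted locally) and flag config files with
    credential-like lines.  Returns (path, line number, line) tuples."""
    findings = []
    for path in Path(root).rglob('*'):
        if not path.is_file() or path.suffix.lower() not in CONFIG_SUFFIXES:
            continue
        try:
            text = path.read_text(errors='ignore')
        except OSError:
            continue  # unreadable file; skip rather than abort the sweep
        for lineno, line in enumerate(text.splitlines(), 1):
            if any(p.search(line) for p in CRED_PATTERNS):
                findings.append((str(path), lineno, line.strip()))
    return findings
```

Run periodically against each mounted share (e.g. `find_exposed_credentials('/mnt/shares/public')`) and treat every hit as a credential to rotate and a file to lock down.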

Taking Advantage of Legacy Applications

In another engagement, we compromised the corporate network of a client (a SaaS provider) through a legacy application.

The corporate network was used only for office applications, while all of the client’s SaaS applications were hosted by a separate cloud provider. They had put a lot of thought and effort into segmenting the (less critical) corporate network from their SaaS network, which hosted all of their customers’ information.

However, with access to the corporate environment, we also had access to their email. We then triggered a password reset for one of the users with access to the cloud provider hosting the SaaS environment. By timing the attack, we were able to recover the password reset link from that user’s mailbox and reset his password on the cloud service provider.

This granted us access to the support ticket system for that cloud provider, and, browsing through the old tickets, we found several VPN configurations (including credentials and seeds to generate OTP for 2FA).

In the end, from a legacy application, we were able to pivot through the corporate network, gain access to the email server, and then pivot again to their cloud service provider.

Properly Scoping a Pentest Engagement

Engaging with a service provider for a penetration test requires trust in that provider.  No network is 100% secure, which means that, with high probability, a good pentester will be able to find a vulnerability that gives them access to and control over your network, systems, and data.

If you trust your pentest provider with that level of access, it makes sense to trust them to help you define the scope of your engagement.  Sit down with your provider, share your vision for the engagement, and ask for their opinion.  If there are attack vectors you want to place “out of scope”, a good pentester should be able to explain the associated risks and help you make an informed decision.

A pentest engagement should be a partnership, and this includes the planning stages of the engagement.  If, when talking to a potential provider, you don’t feel comfortable with letting them help you define the scope of the engagement (based upon their knowledge and experience), then don’t let them anywhere near your networks.
