Artificial Intelligence - Part 2

AI Series Part 2: Social Media and the Rise of “Echo Chambers”

AI Series Part 2 of 6

This is the second post in a series discussing AI and its impacts on modern life. In this article, we’ll explore how AI is used in social media and the ramifications of training AI systems with “success” defined by the wrong metrics.

Social Media Is Not Free

Social media platforms that offer “free” services aren’t actually free. These companies need to make a profit and pay their staff, so all of them must have some form of revenue stream.

In most cases, this revenue comes from “selling” or otherwise monetizing data about their users. For advertisers, knowing about their consumer population and being able to target their advertisements to particular individuals and groups is extremely valuable.

If an organization has limited advertising dollars, they want to put their advertisements and products in front of the people that are most likely to buy them. While some products may have “universal” appeal, others are intended for niche markets (think video games, hiking gear, maternity clothes, etc.).

Social media platforms give advertisers access to their desired target markets. By observing their users and how they interact with the advertisements and other content on the site, these platforms can make good and “educated” guesses about the products that a particular user could or would be interested in and is likely to purchase. By selling access to this data to advertisers, social media both makes a profit and acts as a matchmaker for advertisers and their desired target markets.

Defining “Success” for Social Media Platforms

Most social media platforms are paid based on the number of advertisements that they are able to present to their users. The more advertisements that a particular user views, the more profitable they are to these platforms.

Maximizing the time that a user spends on a social media platform requires the ability to measure the user’s “engagement” with the content. The more interested the user is, the more likely that they’ll spend time on the platform and make it more money.

The ways that social media platforms measure engagement have evolved over the years. Early on, the focus was on the amount of content that a particular user clicked on. This success metric resulted in the creation of “clickbait” designed to lure users into continually clicking on new content and links and spending time on the platform.

However, over time users have grown increasingly tired of clicking on things that look anything like clickbait. While they may be willing to spend hours on a particular platform, they want their interactions to have some level of substance. This prompted an evolution in how these platforms defined “successful” and “engaging” content.

Giving the User What They Want

The modern goal of social media platforms is to provide users with content that they find “valuable”. The belief is that continually showing users high-value content incentivizes them to spend time on the site (making the platform more advertising money), react, comment, share, and draw in the attention of their connections.

However, measuring “value” is difficult without clear metrics. To make the system work, these platforms measure the value of content based upon the amount that a user engages with a post.

This is where AI comes into the picture. The social media platform’s content management engine observes user behavior and updates its ranking system accordingly. The posts that receive the most likes, comments, etc. are ranked as more “valuable” and have a higher probability of being shown to users. In contrast, the posts that receive negative feedback (“don’t show me this again”, etc.) are shown less often.

Social Echo Chambers

In theory, this approach should make truly valuable content bubble to the top. In practice, people tend to respond most strongly (i.e. posting comments, likes, complaints, etc.) to content that they feel strongly about. As a result, polarizing content tends to score well under these schemes as people show their support for adorable cats and the political party of their choice and complain about “fake news” (whether or not it is actually fake).
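The feedback loop described above can be sketched in a few lines of Python. This is a purely illustrative toy – the signal names and weights are assumptions, not any platform’s actual algorithm – but it shows how weighting strong reactions causes polarizing content to outrank everything else:

```python
# Toy engagement-based ranker. All weights and field names are
# illustrative assumptions, not any real platform's algorithm.

def engagement_score(post):
    """Weight stronger reactions (comments, shares) more heavily."""
    return (1.0 * post["likes"]
            + 3.0 * post["comments"]
            + 5.0 * post["shares"]
            - 10.0 * post["hide_requests"])  # "don't show me this again"

def rank_feed(posts):
    """Order the feed by descending engagement score."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "cat-video",     "likes": 120, "comments": 4,  "shares": 2,  "hide_requests": 0},
    {"id": "polarizing",    "likes": 80,  "comments": 60, "shares": 30, "hide_requests": 5},
    {"id": "nuanced-essay", "likes": 40,  "comments": 8,  "shares": 1,  "hide_requests": 0},
]

for post in rank_feed(posts):
    print(post["id"], engagement_score(post))
```

Even though the cat video collects the most likes, the polarizing post wins because it provokes comments and shares – exactly the dynamic that surfaces divisive content.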

In order to keep users engaged, an AI-based system using user behavior as a metric will naturally create “echo chambers”, where users will only see posts that align with what they already believe. The primary goal of social media platforms is to keep their users happy and engaged, and “echo chambers” are an effective way of achieving this.

The Bottom Line on AI and Social Media

AI is a crucial component of modern social media, but it is important to consider who this AI is really designed to benefit. Social media platforms, like any other business, are driven by the need to make a profit and keep shareholders happy. AI in social media is designed to accomplish this goal by feeding as many ads as possible to their users.

Artificial Intelligence - Part 1

AI Series Part 1: Introduction to the Modern Threats of AI

AI Series Part 1 of 6

This is the first post in a series discussing AI and its impacts on modern life. Artificial Intelligence is useful, powerful, and dangerous when used irresponsibly. It’s being leveraged by a wide variety of industries including, but not limited to, social media, defense contractors, and information security companies. Some of the dangers created by the use of AI are overt while others are very subtle. For example, the ongoing rapid development of autonomous weapons is overt, while the use of AI in social media is subtle and possibly more damaging. The AI used in various social media platforms is in part responsible for the current divide here in the United States.

Introduction to AI

Artificial intelligence (AI) occupies an unusual position in the public consciousness. On the one hand, nearly every cybersecurity solution – and a number of products in other tech industries as well – claims to contain “AI”, a claim that carries varying levels of truth. On the other hand, many of the things that most people think of as AI – such as general intelligence, conscious robots, etc. – have not yet been created.

While “traditional” popular conceptions of AI are still in the future, AI is a very real part of our daily lives today. AI already shapes how people think and behave – often without their knowledge – and other parts of daily life.

This article launches a series on the modern use of AI. This series discusses some of the ways in which AI is commonly used today and the cybersecurity considerations of AI-based systems.

The Modern Threats of AI

AI has effects on many different aspects of daily life. Some of the biggest areas where AI plays a critical role in society include:

  • Social Media: Social media is a core part of many peoples’ lives. These “free” platforms make massive profits by monetizing their users’ attention and their personal data. AI is a core part of how these platforms optimize their content to maximize the time that their users spend on them and the amount of targeted advertising that they are able to sell.
  • Facial Recognition: Facial recognition systems are a contentious topic as these systems are used by law enforcement and other organizations to automatically identify and track individuals. However, the AI behind these systems is extremely – and potentially unintentionally – biased by how they are made and trained.
  • Automated Content Creation: Trust in the digital world is driven by consensus. Articles supporting certain viewpoints and reviews or comments on pages can have a significant impact on peoples’ worldviews and how they respond to the content. Modern AI is increasingly capable of generating “plausible” text and human pictures at scale, making it possible to rapidly generate fake content or reviews that look like they came from a real human being.

These three topics will be the focus of the next few articles in this series. In each, we will dive into the details of how AI is used in that particular scenario, what it does right, and where it goes wrong.

AI and Cybersecurity

As we become more reliant on AI as part of our daily lives, it is also important to consider the security of AI systems. What makes an infosec AI system effective or ineffective? Is it possible to “hack” AI in ways beyond standard IT security?

The final two articles in this series will deal with the security of AI systems:

  • Hacking AI: AI systems are designed to learn and create their own decision-making models. This self-learning process, while essential to the growth and development of AI, also makes these systems vulnerable to exploitation.
  • Fixing AI: Implicit biases, underspecification, and deliberate exploitation can cause AI systems to learn to make the wrong decisions or to make decisions in the “wrong” way. Fixing and securing AI requires an understanding both of how it can be broken and the steps that can be taken to improve and secure it.

Protecting the AI-Driven Enterprise

Being “data driven” is a goal of most organizations, and AI systems are a crucial part of accomplishing this. As organizations continue to develop and deploy AI solutions, it is essential to understand the capabilities of AI and where things can go wrong.

This series dives into the modern use of AI. It explores how AI is used today, the risks and benefits to its creators and other parties, and the security considerations of AI-based systems.

SolarWinds, SOX, and Corporate Responsibility for Cybersecurity

By now, most everyone has heard of the SolarWinds breach. Cybercriminals took advantage of SolarWinds’ poor cybersecurity practices to gain access to their network and implant malicious code within updates to their Orion network monitoring solution.

This Orion solution is widely used, and its compromise led to the attackers gaining access to the networks of many large enterprises and a significant percentage of US government agencies. As a result, intellectual property and sensitive government data have been compromised, and much of this data is being sold online. Investigations into the incident are still ongoing.

SolarWinds and SOX Disclosures

The SolarWinds breach has likely caused significant reputational and financial damage to the organization. The damage caused by SolarWinds’ negligence is widespread, and the company will likely be the defendant in numerous lawsuits regarding the breach.

A recent class action lawsuit filed against the company’s leadership by SolarWinds shareholders demonstrates the potentially far-reaching impacts of such a breach. As a publicly-traded company, SolarWinds is subject to the Sarbanes-Oxley Act (SOX), which was passed in response to the Enron scandal to protect investors. Under SOX, a company’s CEO and CFO must sign an attestation that publicly-released statements regarding the company’s financial status are correct.

The lawsuit against SolarWinds focuses on a statement in SolarWinds’ 2019 10-K filing that acknowledges the risk of cyberattacks to the company. Based on this statement, the company acknowledges that this risk exists, that steps should be taken to mitigate this risk, and that any breach should be reported to shareholders.

SolarWinds was initially breached on September 4th, 2019, but the breach was not reported until December of the following year. Since the company has filed multiple 10-Q statements in the interim with no reference to the breach, the plaintiffs in the SOX case allege that SolarWinds was negligent in managing its cybersecurity risk. Additionally, investigation into the incident revealed other instances of cybersecurity negligence, such as the use of the password solarwinds123 on the SolarWinds update server.

SolarWinds attack timeline
Source: SolarWinds

SOX Disclosures and the Cost of Poor Cybersecurity Due Diligence

Obviously, SolarWinds’ CEO and CFO are not directly responsible for detecting and remediating security incidents within their organization. However, they do hold overall responsibility, and SOX allows them to be held personally liable for misleading or false statements within SOX disclosures.

Any organization can suffer a security breach, but it is the responsibility of a company’s leadership to ensure that due diligence is performed to prevent incidents like the SolarWinds breach. SolarWinds failed to do their due diligence in two crucial ways:

  1. Internal Cybersecurity Failures: As SolarWinds mentions in their 10-K, it is impossible to fully protect against cybersecurity threats. However, the company failed to follow even the most basic cybersecurity best practices as demonstrated by the use of a blatantly insecure password (solarwinds123) on its update server.
  2. Failure to Perform Proper Security Testing: Passing a Penetration Test is not proof of strong cybersecurity, as demonstrated by Trustwave’s certification of Target before the 2013 breach. However, a Penetration Test should have detected the use of such a weak password on the update server. This oversight demonstrates a failure to perform proper due diligence on behalf of both SolarWinds and any organization that performed a Penetration Test for the company.

Taking Responsibility for Corporate Cybersecurity

The class action lawsuit against SolarWinds – if successful – creates a strong precedent for holding corporate executives personally responsible for their companies’ security failures. Under SOX, executives can face 10 years in prison and a $1 million fine for signing off on misleading statements, and 20 years and $5 million if the deception was willful.

In cybersecurity, as in any field, mistakes can be made, and companies can be breached despite their best efforts. However, making a “good faith” effort toward strong corporate cybersecurity – including contracting regular Penetration Tests by a competent testing firm – is essential to earning forgiveness for cybersecurity failures. The appearance of good security isn’t the same as the real thing.

Inside the 2020 Ping of Death Vulnerability

What is the 2020 Ping of Death?

Ping of Death vulnerabilities are nothing new. These vulnerabilities arise from issues in memory allocation in the TCP/IP stack. If memory is improperly allocated and managed, a buffer overflow vulnerability can be created that leaves the application vulnerable to exploitation.

The original Ping of Death was discovered in 1997 and was the result of an implementation error in how operating systems handled IPv4 ICMP packets. ICMP ECHO_REQUEST packets (aka ping) are normally 64 bytes, but this length was not enforced. Any ping packet with a total length greater than 65,535 bytes (the maximum value that the IPv4 length field can represent) would cause a vulnerable system to crash.

In August 2011, Microsoft fixed another Denial of Service vulnerability in its TCP/IP stack that occurred when processing a sequence of specially crafted Internet Control Message Protocol (ICMP) messages.

In August 2013, a third ping of death vulnerability was announced and patched in the Windows operating system. This time it was specific to the IPv6 protocol.

In October 2020, Microsoft revealed its second IPv6 Ping of Death vulnerability as part of its October Patch Tuesday release. Exploitation of this vulnerability could allow an attacker to perform a Denial of Service attack against an application and potentially achieve remote code execution.

2020 Ping of Death Technical Details

The Ping of Death vulnerability arises from an issue in how Microsoft’s tcpip.sys implements the Recursive DNS Server (RDNSS) option in IPv6 router advertisement packets. This option is intended to provide a list of available recursive DNS servers.

The issue that creates the Ping of Death vulnerability is that tcpip.sys does not properly handle the possibility that the router advertisement packet contains more data than it should. Microsoft’s implementation trusts the length field in the packet and allocates memory accordingly on the stack.

An unsafe copy of data into this allocated buffer creates the potential for a buffer overflow attack. This enables the attacker to overwrite other variables on the stack, including control flow information such as the program’s return address.
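The flaw is a classic unchecked-length copy. The Python sketch below is illustrative only – tcpip.sys is closed-source C, and the two-byte length field here is invented for the example – but it contrasts the trusting parse with the bounds check that would prevent the overflow:

```python
import struct

# Illustrative TLV-style parse: a 2-byte big-endian length field
# followed by the payload. The field layout is invented for this
# example and is not the actual RDNSS option format.

def parse_option_unsafe(packet: bytes) -> bytes:
    """Trusts the declared length -- the pattern behind the bug."""
    declared_len = struct.unpack_from("!H", packet, 0)[0]
    # In C, copying 'declared_len' bytes into a fixed-size stack
    # buffer with no check is where the overflow would occur.
    # (Python slicing silently truncates, so this stays memory-safe.)
    return packet[2:2 + declared_len]

def parse_option_safe(packet: bytes) -> bytes:
    """Validates the declared length against the data actually present."""
    declared_len = struct.unpack_from("!H", packet, 0)[0]
    payload = packet[2:]
    if declared_len > len(payload):
        raise ValueError("declared length exceeds packet size")
    return payload[:declared_len]
```

A packet advertising a 100-byte payload while carrying only four bytes sails through the unsafe parser but is rejected by the safe one.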

How the Vulnerability Can Be Exploited

In theory, the buffer overflow vulnerability can be exploited to achieve a couple of different goals:

  1. Denial of Service: Exploitation of the buffer overflow vulnerability enables “stack smashing” that can crash the application.
  2. Remote Code Execution: Using return-oriented programming, a buffer overflow exploit could cause a function to return to and execute attacker-provided shellcode.

In practice, a Denial of Service attack is the most likely use for this exploit. In order to perform a successful Denial of Service attack, all an attacker needs to do is attempt to write outside of the memory accessible to it (triggering a segmentation fault) or to overwrite a critical value within the program stack.

One of these key values is the stack canary, which is also one of the reasons why exploitation of this vulnerability is unlikely to allow RCE. A stack canary is a random value placed on the stack that is designed to detect attempts to overwrite the function return address via a buffer overflow attack. Before attempting to return from a function (by going to the location indicated by the return address), a protected program checks to see if the value of the stack canary is correct. If so, execution continues. If not, the program is terminated.
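A toy model makes the check concrete. The sketch below simulates a frame layout in Python purely for illustration – real canaries are inserted by the compiler (e.g. GCC's `-fstack-protector`), not by application code:

```python
import os

# Toy model of a stack frame: [ buffer | canary | saved return address ].
# Everything here is a simplified stand-in for compiler-generated code.

CANARY = os.urandom(8)  # random secret chosen once at program start

def make_frame(buf_size: int) -> bytearray:
    return bytearray(buf_size) + bytearray(CANARY) + bytearray(b"RETADDR!")

def unsafe_copy(frame: bytearray, data: bytes) -> None:
    # No bounds check, like strcpy() in C: long input spills past the buffer.
    frame[0:len(data)] = data

def canary_intact(frame: bytearray, buf_size: int) -> bool:
    # The check a protected program performs before returning.
    return bytes(frame[buf_size:buf_size + 8]) == CANARY

frame = make_frame(16)
unsafe_copy(frame, b"A" * 12)   # fits within the 16-byte buffer
print("after normal write:", canary_intact(frame, 16))

unsafe_copy(frame, b"A" * 32)   # overflows into the canary and return address
if not canary_intact(frame, 16):
    print("stack smashing detected -- terminating")
```

The overflowing write clobbers the canary on its way to the return address, so the pre-return check catches the attack and the program aborts instead of jumping to attacker-controlled code.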

The existence of a stack canary makes it more difficult to exploit the vulnerability for RCE, and the use of Address Space Layout Randomization (ASLR), which makes functions useful to attackers harder to locate in memory, raises the bar further. However, it is possible to bypass both of these protections in certain cases, so an exploit may yet be developed that enables the 2020 version of the Ping of Death to be used for RCE. If this happens, the repercussions could be severe, as tcpip.sys is a kernel-level module within the Windows operating system.

Ping of Death in the Wild

A patch for this vulnerability was included in the October 2020 Patch Tuesday release of updates. At the time, the vulnerability had not been publicly disclosed, meaning that (theoretically) no one outside Microsoft knew about it or could have developed an exploit.

Based on the Microsoft description of the vulnerability, a Proof of Concept for using it for a DoS attack has already been created. Additionally, the vulnerability has been given an exploitability value of 1, meaning that it is very likely to be exploited but has not yet been observed in the wild.

This means that we can expect to see DoS attacks using this vulnerability shortly, and the potential exists that an attacker will successfully create an RCE exploit using it as well. If this happens, the wormability of the exploit makes it likely to be used to spread ransomware and similar malware (much as EternalBlue was used to spread WannaCry).

Protecting Against the 2020 Ping of Death

The vulnerability in tcpip.sys was patched in an update included in the October 2020 Patch Tuesday release. Installing this update will fix the vulnerability and protect a system from exploitation.

Beyond installing the update, it is a good idea to minimize your attack surface by disabling unnecessary functionality. If you currently do not use the functionality, then disabling IPv6 in general or RDNSS in particular can eliminate the potential exploitability of this and any other vulnerabilities within the Microsoft implementation of this functionality. Instructions for doing so are included in Microsoft’s description of the vulnerability.

Inside Zerologon

What is the Zerologon Vulnerability?

Zerologon is a vulnerability in the Windows netlogon protocol (on Windows Server version 2008 and later) discovered by Tom Tervoort of Secura during a security review of the protocol (which had not previously undergone such a review).  Due to cryptographic and implementation errors in the protocol, an attacker can falsely authenticate and elevate their privileges to Domain Admin.  This has a number of potential impacts including:

  • Full Network Control: With Domain Administrator access, the attacker has full control over the network.
  • Credential Compromise: Netlogon enables an attacker to extract user account credentials for offline password cracking.
  • Credential Stuffing: Passwords compromised via netlogon are likely to be used on other accounts, enabling an attacker to access bank accounts, social media, etc.
  • Initial Access: With the access provided by netlogon, an attacker could steal sensitive data, deploy ransomware, etc.
  • Denial of Service Attack: Zerologon enables an attacker to change a password in Active Directory but not in the registry or LSASS.  This means that services on a rebooted machine may no longer function.

Technical Details of Zerologon

Zerologon exploits a vulnerability in the netlogon authentication process, which is performed as follows:

  1. Server and client each generate and exchange a random 8-byte challenge
  2. The shared session key is generated as the first sixteen bytes of an HMAC-SHA256 computed from the MD4 hash of the machine account password and the two challenges
  3. Client verifies the session key by encrypting and sending the original challenge that they generated encrypted with the new session key.

The zerologon vulnerability arises due to the fact that Windows netlogon uses an insecure variant of the cipher feedback (CFB) block cipher mode of operation with AES.

Normally, CFB mode is designed to encrypt 16-byte chunks of the plaintext (as shown in the diagram below).  This enables the encryption or decryption of data longer than the standard 16-byte AES block size.  It starts by taking a random initialization vector, encrypting it, and XORing it with the plaintext of a block to create the ciphertext for that block.  This ciphertext is used as the input to encryption for the next block, and so on.

CFB mode diagram (via Secura)

The vulnerable version of CFB mode used in Windows netlogon (called CFB8) performs encryption one byte at a time.  To do so, it takes the following steps:

  1. The initialization vector (yellow) is encrypted using AES
  2. The first byte of the result (pink) is XORed with the first byte of plaintext (blue)
  3. The resulting byte of ciphertext is appended to the end of the IV
  4. The first byte of the IV is dropped
    1. The result is the same length as the original IV (yellow and pink above)
  5. Steps 1-4 are repeated until the entire plaintext is encrypted

While the use of a non-standard encryption algorithm is bad enough, the netlogon protocol made another critical error.  The initialization vector is hard-coded to zero, not random like it should be.

This creates a vulnerability if the first byte of encryption produces a ciphertext byte of zero (which occurs with 1/256 probability).  If this is the case, then the result of encryption will always be all zeros.  This is because the input to the encryption algorithm will always be the same (since the IV is all zeros and the value appended to the end during each round of encryption is a zero).

This makes it possible for an attacker to authenticate via netlogon with no knowledge of the domain password.  By trying to authenticate repeatedly to the system with an all-zero challenge, the attacker can trigger the 1/256 chance that the shared secret (that they don’t know) encrypts the first byte of the challenge to a zero.  They can then trivially generate the encrypted challenge (step 3 in the netlogon authentication process) since it is all zeros.
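This failure mode can be demonstrated in a few lines. To keep the sketch dependency-free, it substitutes a keyed SHA-256 construction for AES as the block cipher – an assumption made purely so the code needs no crypto library – but the structural argument is identical: with a zero IV, a single zero keystream byte locks every round into the same all-zero input.

```python
import hashlib

def block_encrypt(key: bytes, block: bytes) -> bytes:
    """Stand-in 16-byte block cipher (SHA-256 of key||block, truncated).
    Used instead of AES so the sketch needs no third-party library."""
    return hashlib.sha256(key + block).digest()[:16]

def cfb8_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    """CFB8: encrypt one byte at a time, sliding the shift register."""
    ciphertext = bytearray()
    shift_reg = bytearray(iv)
    for p in plaintext:
        keystream_byte = block_encrypt(key, bytes(shift_reg))[0]
        c = keystream_byte ^ p
        ciphertext.append(c)
        shift_reg = shift_reg[1:] + bytes([c])  # drop first byte, append c
    return bytes(ciphertext)

# Netlogon's mistake: the IV is hard-coded to zero. For ~1/256 of keys
# the first keystream byte is zero, so an all-zero plaintext encrypts
# to an all-zero ciphertext -- exactly what Zerologon exploits.
zero_iv = bytes(16)
hits = sum(
    cfb8_encrypt(k.to_bytes(2, "big"), zero_iv, bytes(8)) == bytes(8)
    for k in range(4096)
)
print(f"{hits} of 4096 keys encrypt all-zero plaintext to all-zero ciphertext")
```

The hit count should land near 4096/256 = 16, matching the 1/256 probability that the attacker's repeated authentication attempts rely on.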

How Zerologon Can Be Exploited

The Zerologon vulnerability, by itself, only enables an attacker to successfully authenticate to the domain controller and encrypt all-zero plaintexts.  However, this is enough to successfully call the NetrServerPasswordSet2 function, which is designed to change the server password.  This function takes the following parameters:

  • Original client challenge plus the current time in POSIX notation
  • Random data
  • New password
  • New password length

Of these, the original client challenge, the random data, and the new password are easily set to all zeros.  In theory, the server should verify the current time and disallow a zero-length password.  However, it does neither, making it possible to set the domain controller’s password to empty.

While changing this password does not enable the attacker to log into the machine, it does enable them to access the Domain Replication Service Protocol.  This lets them extract the password hashes of domain administrator accounts, which can be used to generate Kerberos golden tickets.  Additionally, these hashes could be used in a pass-the-hash attack to log into the Domain Controller as Domain Administrator and reset the password manually.  This provides the attacker with full access and control over the network.

However, this is not even the only way to exploit the Netlogon vulnerability.  A writeup by Dirk-jan Mollema describes another method that takes advantage of the NTLM protocol to gain Domain Administrator access without changing a password (which can crash services).  However, this version of the exploit requires two vulnerable domain controllers, an available domain user account, and a print spooler service running on a DC and accessible from the network (default domain controller configuration).

Zerologon Exploitation in the Wild

The patch for Zerologon was released in August 2020, and the details of the vulnerability weren’t publicly announced until September 2020.  In theory, this provided organizations with ample opportunity to apply the patch and eliminate the vulnerability.

In practice, many organizations have not applied the patch, leaving them vulnerable to exploitation.  Microsoft publicly announced that they have detected active exploitation of the vulnerability, and the Department of Homeland Security (DHS) issued a directive on September 18th requiring federal agencies to patch the issue by September 21st (i.e. the following Monday).

This urgency is due to the expectation that the vulnerability would be actively exploited by cybercriminals.  This belief is backed up by a report by Tenable that multiple different exploit executables were uploaded to VirusTotal.

Protecting Against Zerologon

The Zerologon vulnerability is patched in the August 2020 set of Windows updates, and is blocked by some endpoint security solutions.  Microsoft recommends taking the following steps to fix the issue:

  1. Update Domain Controllers with the patch released in August 2020.
  2. Monitor patched Domain Controller logs for event IDs 5827, 5828, and 5829.  These events indicate a client that is using a vulnerable netlogon secure channel connection and require either a Windows or manufacturer update.
  3. Enable Domain Controller Enforcement Mode for additional visibility and protection.

After patching known domain controllers and other known affected systems, it might be wise to undergo a penetration test to discover other potentially vulnerable devices.  The vulnerability affects most versions of Windows Server, which can be deployed in a number of different environments and contexts.

Hacking Casinos with Zero Day Exploits

Most popular email programs like Microsoft Outlook, Apple Mail, Thunderbird, etc. have a convenient feature that enables them to remember the email addresses of people that have been emailed.  Without this feature people would need to recall email addresses from memory or copy and paste from an address book. This same feature enables hackers to secretly breach networks using a technique that we created back in 2006 and named Email Seeding.

This article explains how we used Email Seeding to breach the network of a well-known and otherwise well protected casino.  As is always the case, this article has been augmented to protect the identity of our customer.

Let’s begin…

Our initial objective was to gather intelligence about the casino’s employees.  To accomplish this, we developed a proprietary LinkedIn tool that uses the name or domain of a company and extracts employee information.  The information is compiled into a dossier of sorts that contains the name, title, employment history and contact information for each targeted individual.  Email address structure is automatically determined by our tool.

It is to our advantage if our customers use Google Apps, as was the case with the casino.  This is because Google suffers from a username enumeration vulnerability that allows hackers to extract valid email addresses.  For example, if we enter [email protected] and the address does not exist then we get an error.  If we enter the same address and it does exist, we don’t get an error.  Our LinkedIn tool has native functionality that leverages this vulnerability, which allows us to compile a targeted list of email addresses for Spear Phishing and/or Social Engineering.
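The address-structure step is straightforward to illustrate. The tool itself is proprietary, so the sketch below shows only the general idea; the names, domain, and pattern list are invented for the example:

```python
# Derive candidate addresses from common corporate naming patterns.
# Names, domain, and patterns here are hypothetical examples.

def candidate_addresses(first: str, last: str, domain: str) -> list:
    first, last = first.lower(), last.lower()
    patterns = [
        f"{first}.{last}",    # john.smith
        f"{first}{last}",     # johnsmith
        f"{first[0]}{last}",  # jsmith
        f"{first}_{last}",    # john_smith
    ]
    return [f"{local}@{domain}" for local in patterns]

print(candidate_addresses("John", "Smith", "acmecorporation.com"))
```

Each candidate can then be validated against an enumeration oracle like the Google error behavior described above, leaving only the addresses that actually exist.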

We used this tool to compile a target list for the casino.  Then we assembled an offensive micro-infrastructure to support a chameleon domain and its associated services.  The first step in this process is to register a chameleon domain, which is a domain designed to impersonate a legitimate domain (with SSL certificates and all).  Historically this was accomplished by using a now obsolete IDN homoglyph attack.  Today we rely on psychological trickery and exploit the tendency of the human brain to autocorrect incorrectly spelled names while perceiving them as correct.

For example, let’s pretend that our casino’s name is Acme Corporation and that their domain is acmecorporation.com.  A good chameleon domain would be acmecorporatlon.com or acmceorporation.com, which are both different than acmecorporation.com (read them carefully).  This technique works well for longer domains and obscure domains but is less ideal for shorter domains like fedex.com or ups.com for example.  We have tactics for domains like that but won’t discuss them here.
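Candidates like these can be generated mechanically. The sketch below is a minimal illustration of the two tricks in the Acme example – adjacent-letter transposition and look-alike substitution – similar in spirit to open-source typo-squatting tools such as dnstwist:

```python
# Generate chameleon candidates for a domain label.

def transpositions(name: str) -> list:
    """Swap each pair of differing adjacent characters once."""
    out = []
    for i in range(len(name) - 1):
        if name[i] != name[i + 1]:
            out.append(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    return out

def lookalikes(name: str) -> list:
    """Substitute characters that read the same at a glance."""
    subs = {"i": "l", "l": "1", "o": "0", "m": "rn"}
    out = []
    for i, ch in enumerate(name):
        if ch in subs:
            out.append(name[:i] + subs[ch] + name[i + 1:])
    return out

label = "acmecorporation"
candidates = sorted(set(transpositions(label) + lookalikes(label)))
print(len(candidates), "candidates, including", candidates[:3])
```

Among the output are acmceorporation (a transposition) and acmecorporatlon (i swapped for l) – the two variants used in the Acme example above.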

There are a multitude of advantages to using a chameleon domain over traditional email spoofing techniques.  One example is that chameleon domains are highly interactive.  Not only can we send emails from chameleon domains but we can also receive emails.  This high-interaction capability helps to facilitate high-threat Social Engineering attacks.  Additionally, because chameleon domains are actually real domains they can be configured with SPF records, DKIM, etc.  In fact, in many cases we will even purchase SSL certificates for our chameleon domains.  All of these things help to create a credible infrastructure.  Finally, we always configure our chameleon domains with a catchall email address.  This ensures that any emails sent to our domain will be received.

Netragard maintains active contracts with various Virtual Private Server (VPS) providers.  These providers enable us to spin up and spin down chameleon infrastructures in short time.  They also enable us to spin up and spin down distributed platforms for more advanced things like distributed attacking, IDS/IPS saturation, etc. When we use our email seeding methodology we spin up a micro-infrastructure that offers DNS, Email, Web, and a Command & Control server for RADON.

For the casino, we deployed an augmented version of BIND combined with something similar to honeytokens so that we could geographically locate our human targets.  Geolocation is important for impersonation as it helps to avoid accidental face-to-face meetings.  For example, if we’re impersonating John to attack Sally and they bump into each other at the office, then there’s a high risk of operational exposure.

With the micro-infrastructure configured, we began geolocating employees. This was accomplished in part with social media platforms like Twitter, Facebook, etc. The employees who could not be located with social media were located using a secondary email campaign. The campaign used a unique embedded tracker URL and tracker image. Any time the host associated with the URL was resolved, our DNS server would tell us what IP address the resolution came from. If the image was loaded (most were), then we’d receive the IP address as well as additional details about the target’s browser, operating system, etc. We used the IP addressing information to plot rough geographic locations.
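The bookkeeping behind a campaign like that can be as simple as issuing each recipient a unique tracker hostname, so that a later DNS query for that host identifies exactly who opened the email. A minimal sketch, assuming made-up addresses and our example chameleon domain:

```python
# Hypothetical sketch of per-target tracker tokens: each recipient gets a
# unique hostname under a domain whose authoritative DNS server we control.
import secrets

def assign_trackers(targets: list[str], domain: str) -> dict[str, str]:
    """Map a unique random tracker hostname to each target's email address."""
    return {f"{secrets.token_hex(4)}.{domain}": t for t in targets}

trackers = assign_trackers(["alice@example.com", "bob@example.com"],
                           "acmceorporation.com")

# When the authoritative DNS server later sees a query for one of these
# hosts, the query's source IP roughly geolocates that one target.
for host, target in trackers.items():
    print(host, "->", target)
```

Embedding the hostname in both a link and an image URL gives two chances at a resolution: mail clients that block remote images may still resolve the link’s host if the target clicks.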

When we evaluated the data that we collected, we found that the casino’s employees (and contractors) worked from a variety of different locations. One employee, Jack Smith, was particularly appealing because of his title, “Security Manager,” and because his LinkedIn profile discussed incident response and related topics. It also appeared that Jack worked in a location geographically distant from many potential targets. Jack became our primary choice for impersonation.
With Jack selected, we emailed 15 employees from [email protected]. That email address is a chameleon address; note that the “ec” is inverted to “ce”. Jack’s real email address would be [email protected]. While we can’t disclose the content of the email that we used, it was something along the lines of:
“Hi <name>, did you get my last email?” 
Almost immediately after sending the email we received 3 out-of-office auto-responses. By the end of the next day we had received 12 human responses, meaning that all 15 targets had responded in some form: a 100% response rate. The 12 human responses were exciting because chances were high that we had successfully seeded our targets with Jack’s fake chameleon address.

After 4 days we received an email from an employee named Brian, whose title was “Director of IT Security”. Brian emailed us rather than the real Jack because his email client auto-completed Jack’s address with our seeded chameleon address rather than Jack’s real one. Attached to the email was a Microsoft Word document. When we opened the document, we realized that we were looking at an incident report that Jack had originally emailed to Brian for comment.

While the report provided a treasure trove of information that would have been useful for carrying out a multitude of different attacks, the document and the trust relationship between Jack and Brian were far more interesting. For most customers we’d simply embed malware (RADON) into a document and use macros or some other low-tech method of execution. For this customer, given that they were a high-profile casino with high-value targets, we decided to use a zero-day exploit for Microsoft Word rather than something noisy like a macro.

While the exploit was functional, it was not flawless. Despite this, we were confident that exploitation would be successful. The payload for the exploit was RADON, our home-grown zero-day malware, configured to connect back to our command and control server using one of three different techniques. Each technique uses common network protocols and communicates in ways that appear normal so as to evade detection. The exact details of these techniques aren’t something that we share because we use them regularly.

We delivered our now-weaponized Microsoft Word document back to Brian with an email suggesting that more updates had been made. Within 10 minutes of delivery, RADON called home and we took covert control of Brian’s corporate desktop.
The next step was to move laterally and infect a few more targets to ensure that we maintained access to the casino’s LAN. The normal process for doing this would be to scan or probe the network and identify new targets. We wanted to proceed with caution because we didn’t know whether the casino had any solutions in place to detect lateral movement. So, to maintain stealth, rather than scanning the internal network we sniffed and monitored all network connections.

In addition to sniffing, our team also searched Brian’s computer for intelligence that would help to facilitate lateral movement. Searching was carried out with extreme care so as to avoid accessing potential bait files. Bait files, when accessed, trigger an alarm that alerts administrators, and we could not afford to get caught at such an early stage. Aside from collecting network and filesystem information, we also used RADON to take screenshots every minute, activate Brian’s microphone, take frequent web-cam photographs, and record his keystrokes.
After a few hours of automated reconnaissance, we began to analyze our findings. One of the first things that caught our attention was a screenshot of Brian using TeamViewer. This prompted us to search our keylogger recordings for Brian’s TeamViewer credentials, which we found in short order. We used his captured credentials to log in to TeamViewer and were presented with a long list of servers belonging to the casino. Even more convenient, credentials for those servers were stored in each server profile, so all we had to do was click and pwn. It was like Christmas for hackers!

Our method from that point forward was simple. We’d connect to a server, deploy RADON, and use RADON to gather files, credentials, screenshots, etc. Within 30 minutes we went from having a single point of access to having more control over the casino’s network than their own IT department. This was in large part because our control was completely centralized thanks to RADON, and we weren’t limited by corporate policies, rules, etc. (We are the hackers, after all.)

This was the first casino we encountered with such a wide-scale deployment of TeamViewer. When we asked our customer why they were using TeamViewer in this manner, their answer was surprising. The casino’s third-party IT support company had recommended that they use TeamViewer in place of RDP, suggesting that it was more secure. We of course demonstrated that this was not the case. With our direction, the casino removed TeamViewer and now requires all remote access to be handled over VPN with two-factor authentication and RDP.

For the sake of clarity, much more work was done for the casino than what is discussed here. We don’t simply hack our clients, say thank you, and leave them hanging. We provide our customers with detailed custom reports and, if required, assistance with hardening. With that explained, this article was written with a specific focus on email seeding. We felt that, given the current threat landscape, this was a good thing to be aware of because it makes for an easy breach.

Hackers - Vulnerability Disclosures

What hackers know about vulnerability disclosures and what this means to you

Before we begin, let us preface this by saying that this is not an opinion piece. This article is the product of our own experience combined with breach-related data from various sources collected over the past decade. While we too like the idea of detailed vulnerability disclosure from a “feel good” perspective, the reality of it is anything but good. Evidence suggests that the only form of responsible disclosure is one that results in the silent fixing of critical vulnerabilities. Anything else arms the enemy.

Want to know the damage a single exposed vulnerability can cause? Just look at what’s come out of MS17-010. This is a vulnerability in Microsoft Windows that is the basis for many of the current cyberattacks that have hit the news like WannaCry, Petya, and NotPetya.
However, it didn’t become a problem until the vulnerability was exposed to the public. Our intelligence agencies knew about the vulnerability, kept it a secret, and covertly exploited it with a tool called EternalBlue. Only after that tool was leaked and the vulnerability that it exploited was revealed to the public did it become a problem. In fact, the first attacks happened 59 days after March 14, 2017, when Microsoft published the patch fixing the MS17-010 vulnerability. 100% of the WannaCry, Petya and NotPetya infections occurred no less than two months after a patch was available.
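That 59-day figure checks out against the commonly reported dates, with the patch published on March 14, 2017 and the WannaCry outbreak beginning on May 12, 2017:

```python
# Sanity-check the patch-to-outbreak gap cited above.
from datetime import date

patch_released = date(2017, 3, 14)   # MS17-010 patch published by Microsoft
wannacry_began = date(2017, 5, 12)   # first widespread WannaCry infections

print((wannacry_began - patch_released).days)  # 59
```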
Why? The key word in the opening paragraph is not vulnerability. It’s exposed. Many security experts and members of the public believe that exposing vulnerabilities to the public is the best way to fix a problem. However, it is not. It’s actually one of the best ways to put the public at risk.
Here’s an analogy that can help the reader understand the dangers of exposing security vulnerabilities. Let’s say everyone on earth has decided to wear some kind of body armor sold by a particular vendor. The armor is touted as an impenetrable barrier against all weapons. People feel safe while wearing the armor.
Let’s say a very smart person has discovered a vulnerability that allows the impenetrable defense to be subverted completely, rendering the armor useless. Our very smart individual has a choice to make. What do they do?

Choice One: Sell it to intelligence agencies or law enforcement

Intelligence agencies and law enforcement are normally extremely judicious about using any sort of zero-day exploit.  Because zero-day exploits target unknown vulnerabilities using unknown methods they are covert by nature. If an intelligence agency stupidly started exploiting computers left and right with their zero-day knowledge, they’d lose their covert advantage and their mission would be compromised. It is for this reason that the argument of using zero-day exploits for mass compromise at the hands of intelligence or law enforcement agencies is nonsensical. This argument is often perpetuated by people who have no understanding of or experience in the zero-day industry.
For many hackers this is the best and most ethical option. Selling to the “good guys” also pays very well. The use cases for sold exploits includes things like combating child pornography and terrorism. Despite this, public perception of the zero-day exploit market is quite negative. The truth is that if agencies are targeting you with zero-day exploits then they think that you’ve done something sufficiently bad to be worth the spend.

Choice Two: Sit on it

Our very smart individual could just forget they found the problem. This is security through obscurity. It’s quite hard for others to find vulnerabilities when they have no knowledge of them. This is the principle that intelligence agencies use to protect their own hacking methods: they simply don’t acknowledge that they exist. The fewer people that know about a vulnerability, the lower the risk to the public. Additionally, it is highly unlikely that low-skilled hackers (who make up the majority) would be able to build their own zero-day exploit anyway. Few hackers are truly fluent in vulnerability research and quality exploit development.
Some think that this is an irresponsible act. They think that vulnerabilities must be exposed because then they can be fixed and to fail to do so puts everyone at increased risk. This thinking is unfortunately flawed and the opposite is true. Today’s reports show that over 99% of all breaches are attributable to the exploitation of known vulnerabilities for which patches already exist. This percentage has been consistent for nearly a decade.

Choice Three: Vendor notification and silent patching

Responsible disclosure means that you tell the vendor what you found and, if possible, help them find a way to fix it. It also means that you don’t publicize what you found which helps to prevent arming the bad guys with your knowledge. The vendor can then take that information, create and push a silent patch. No one is the wiser other than the vendor and our very smart individual.
Unfortunately, there have been cases where vendors have pursued legal action against security researchers who come to them with vulnerabilities. Organizations like the Electronic Frontier Foundation have published guides to help researchers disclose responsibly, but there are still legal issues that could arise.
This fear of legal action can also prompt security researchers to disclose vulnerabilities publicly under the theory that if they receive retaliation it will be bad PR for the company. While this helps protect the researcher it also leads to the same problems we discussed before.

Choice Four: Vendor notification and publishing after patch release

Some researchers try to strike a compromise with vendors by saying they won’t publicly release the information they discovered until a patch is available. But given the slow speed of patching (or complete lack of patching) all vulnerable systems, this is still highly irresponsible. Not every system can or will be patched as soon as a patch is released (as was the case with MS17-010). Patches can cause downtime, bring down critical systems, or cause other pieces of software to stop functioning.
Critical infrastructure and large companies cannot afford an interruption. This is one reason why major companies can take so long to patch vulnerabilities, even long after they were published.

Choice Five: Exploit the vulnerability on their own for fun and profit.

The media would have you believe that every discoverer of a zero-day vulnerability is a malicious hacker bent on infecting the world. True, it is theoretically possible for a malicious hacker to find and exploit a zero-day vulnerability. However, most malicious hackers are not subtle about their use of any exploit. They are financially motivated and generally focused on wide-scale, high-volume infection or compromise. They know that once they exploit a vulnerability in the wild it will be discovered and a patch will be released. Thus, they go for short-term gain and hope they don’t get caught.

Choice Six: Expose it to the public

This is a common practice and it is the most damaging from a public risk perspective. The thinking goes that if the public is notified then vendors will be pressured to act fast and fix the problem. The assumption is also that the public will act quickly to patch before a hacker can exploit their systems. While this thinking seems rational it is and has always been entirely wrong.
In 2015 the Verizon Data Breach Investigation Report showed that half of the vulnerabilities that were disclosed in 2014 were being actively exploited within one month of disclosure. The trend of rapid exploitation of published vulnerabilities hasn’t changed. In 2017 the number of breaches is up 29 percent from 2016 according to the Identity Theft Resource Center. A large portion of the breaches in 2017 are attributable to public disclosure and a failure to patch.
So what is the motivator behind public disclosure? There are three primary motivators. The first is that the revealer believes that disclosure of vulnerability data is an effective method for combating risk and exposure. The second is that the revealer feels the need to defend or protect themselves from the vulnerable vendor. The third is that the revealer wants their ego stroked. Unfortunately, there is no way to tell the public without also telling every bad guy out there how to subvert the armor. It is much easier to build a new weapon from a vulnerability and use it than it is to create a solution and enforce its implementation.
Exposing vulnerability details to the public when the public is still vulnerable is the height of irresponsible disclosure.  It may feel good and be done with good intention but the end result is always increased public risk (the numbers don’t lie).
It is almost certainly fact that if EternalBlue had never been leaked by the ShadowBrokers, then WannaCry, Petya and NotPetya would never have come into existence. This is just one of countless such examples. Malicious hackers know that businesses don’t patch their vulnerabilities properly or in a timely manner. They know that they don’t need zero-day exploits to breach networks and steal your data. The only things they need are public vulnerability disclosure and a viable target to exploit. The defense is logically simple but can be challenging for some to implement: patch your systems.

Cybersecurity Human Performance

The Human Vulnerability

It seems to us that one of the biggest threats that businesses face today is socially augmented malware attacks. These attacks have an extremely high degree of success because they target and exploit the human element. Specifically, it doesn’t matter how many protective technology layers you have in place if the people that you’ve hired are putting you at risk, and they are.

Case in point: the “here you have” worm, which propagates predominantly via email and promises the recipient access to PDF documents or even pornographic material. This specific worm compromised major organizations such as NASA, ABC/Disney, Comcast, Google, Coca-Cola, etc. How much money do you think those companies spend on security technology over a one-year period? How much good did it do at protecting them from the risks introduced by the human element? (Hint: none.)

Here at Netragard we have a unique perspective on the issue of malware attacks because we offer pseudo-malware testing services. Our pseudo-malware module, when activated, authorizes us to test our clients with highly customized, safe, controlled, and homegrown pseudo-malware variants. To the best of our knowledge we are the only penetration testing company to offer such a service (and no, we’re not talking about the meterpreter).

Attack delivery usually involves attaching our pseudo-malware to emails or binding the pseudo-malware to PDF documents or other similar file types. In all cases we make it a point to pack (or crypt) our pseudo-malware so that it doesn’t get detected by antivirus technology (see this blog entry on bypassing antivirus). Once the malware is activated, it establishes an encrypted connection back to our offices and provides us with full control over the victim computer. Full control means access to the software and hardware including but not limited to keyboard, mouse, microphone and even the camera. (Sometimes we even deliver our attacks via websites like this one by embedding attacks into links).

So how easy is it to penetrate a business using pseudo-malware? In truth, it’s really easy. Just last month we finished delivering an advanced external penetration test for one of our more secure customers. We began crafting an email that contained our pseudo-malware attachment and accidentally hit the send button without any message content. Within 45 seconds of sending our otherwise blank email, we had 15 inbound connections from 15 newly infected client computer systems. That means that at least 15 employees tried to open our pseudo-malware attachment despite the fact that the email was blank! Imagine the degree of success that is possible with a well-crafted email.

One of the computer systems that we were able to compromise was running a service with domain admin privileges. We were able to use that computer system (an impersonation attack was involved) to create an account for ourselves on the domain (which happened to be the root domain). From there we were able to compromise the client’s core infrastructure (switches, firewalls, etc.) thanks to a password file that we found sitting on someone’s desktop (thank you for that). Once that was done, there really wasn’t much left to do; it was game over.

The fact of the matter is that there’s nothing new about taking advantage of people who are willing to do stupid things. But is it really stupidity, or is it just that employees don’t have a sense of accountability? Our experience tells us that in most cases it’s a lack of accountability that’s the culprit.

When we compromise a customer using pseudo-malware, one of the recommendations that we make to them is that they enforce policies by holding employees accountable for violations. We think that the best way to do that is to require employees to read a well-crafted policy and then to take a quiz based on that policy. When they pass the quiz they should be required to sign a simple agreement that states that they have read the policy, understood the policy, and agree to be held accountable for any violations that they make against the policy.

In our experience there is no better security technology than a paranoid human that is afraid of being held accountable for doing anything irresponsible (aka: violating the policy). When people are held accountable for something like security they tend to change their overall attitude towards anything that might negatively affect it. The result is a significantly reduced attack surface. If all organizations took this strict approach to policy enforcement then worms like the “here you have” worm wouldn’t be such a big success.

Compare the cost and benefit of enforcing a strict and carefully designed security policy to the cost and benefit of expensive (and largely ineffective) security technologies. Which do you think will do a better job of protecting your business from real threats? It’s much more difficult to hack a network that is managed by people who are held accountable for its security than one that is protected by technology alone.

So in the end there’s really nothing special about the “here you have” worm. It’s just another example of how malicious hackers are exploiting the same human vulnerability using an ever so slightly different malware variant. Antivirus technology certainly won’t save you and neither will other expensive technology solutions, but a well-crafted, cost-effective security policy just might do the trick.

It’s important to remember that well-written security policies don’t only affect human behavior; they generally result in better management of systems, which translates to better technological security. The benefits are significant, and the overall cost is minimal in comparison.

Core Image Fun House – Advisory

Netragard’s SNOsoft Research Team discovered an exploitable buffer overflow vulnerability in Apple’s Core Image Fun House version <= 2.0 on OS X. Netragard notified Apple and released a formal advisory that can be found here. Proof of concept is included in the advisory.

ZDNet Australia

Netragard’s CTO was quoted in the following article titled “2007: How was it for Apple”. Here’s the article and here’s the quote:

Adriel Desautels, chief technology officer for security company Netragard and founder of the SNOSoft research team, said: “If OS X had the same installed base as Windows, Linux and other systems, it would be less secure or at the very most, as secure as the other systems … It’s just a matter of what