Bug Bounties

The dark side of bug bounties

Bug bounty companies (whose offerings are often called crowdsourced penetration tests) are all the rage. The primary argument for using their services is that they provide access to a large crowd of testers, which purportedly means that customers will always have a fresh set of eyes looking for bugs. They also argue that traditional penetration testing teams are finite and, as a result, tend to go stale in terms of creativity, depth, and coverage. While these arguments seem to make sense at face value, are they accurate?


The first thing to understand is that the quality of any penetration test isn't determined by the volume of potential testers, but by their experience, talent, and overall capabilities. A large group of testers with average talent will never outperform a small group of highly talented testers in terms of depth and quality. A good parallel is the world's largest orchestra playing the ninth symphonies of Dvořák and Beethoven. While that orchestra was made up of 7,500 members, the quality of its performance was nothing compared to that produced by the Boston Symphony Orchestra (which is made up of 91 musicians).

Interestingly, bug hunters appear to be incentivized to spend as little time as possible per bounty. This is because bug hunters need to maintain a profitable hourly rate or their work won't be worth their time. For example, a bug hunter might spend 15 minutes to find a bug and collect a $4,000.00 bounty, which is an effective rate of $16,000.00 per hour! In other cases, a bug hunter might spend 40 hours to find a bug and collect a $500.00 bounty, which is a measly $12.50 per hour in comparison. Even worse, they might spend copious time finding a complex bug only to learn that it is a duplicate and collect no bounty (wasted time).
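To make the incentive math concrete, here is a minimal sketch. The first two figures come from the examples above; the 25-hour duplicate is a hypothetical added purely for illustration:

```python
def effective_hourly_rate(bounty_usd, hours_spent):
    """Effective pay rate for a single finding."""
    return bounty_usd / hours_spent

# The two cases described above.
print(effective_hourly_rate(4000, 0.25))  # 16000.0 -> $16,000.00 per hour
print(effective_hourly_rate(500, 40))     # 12.5    -> $12.50 per hour

# A duplicate pays nothing, so the time spent on it dilutes the blended rate.
findings = [(4000, 0.25), (500, 40), (0, 25)]  # (bounty, hours); the last is a duplicate
blended = sum(b for b, _ in findings) / sum(h for _, h in findings)
print(round(blended, 2))  # ~68.97 per hour -- far below the quick-win rate
```

The rational move under this model is to chase quick, shallow wins rather than slow, deep ones.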

This argument is further supported when we appraise the quality of bugs disclosed by most bug bounty programs. We find that most of the bugs are rudimentary in terms of ease of discovery, general complexity, and exploitability. The bugs regularly include cross-site scripting vulnerabilities, SQL injection vulnerabilities, easily spotted configuration mistakes, and other common problems. On average they appear to be somewhat more complex than what might be discovered using industry-standard automated vulnerability scanners and less complex than what we've seen exploited in historical breaches. To be clear, this doesn't suggest that all bug hunters are low-talent individuals, but rather that they are not incentivized to go deep.

In contrast to bug bounty programs, genuine penetration testing firms are incentivized to bolster their brand by delivering depth, quality, and maximal coverage to their customers.  Most operate under a fixed cost agreement and are not rewarded based on volume of findings, but instead by the repeat business that is earned through the delivery of high-quality services.  They also provide substantially more technical and legal safety to their customers than bug bounty programs do.

For example, we evaluated the terms and conditions of several bug bounty companies, and what we learned was surprising. Unlike traditional penetration testing companies, bug bounty companies do not accept any responsibility for the damages or losses that might result from the use of their services. They explicitly state that the bug hunters are independent third parties and that any remedy a customer seeks for loss or damages is limited to a claim against that bug hunter. What's more, the vetting process for bug hunters is lax at best. In nearly all cases, background checks are not run, and even when they are, a bug hunter could provide a false identity. Identity validation is also lacking: to sign up for most programs you simply need to verify your email address. In simple terms, organizations that use bug bounty programs accept all of the risk and have no realistic legal recourse, even if a bug hunter acts maliciously.

To put this into context, bug bounty programs effectively provide anyone on the internet with a legitimate excuse to attack your infrastructure. Because these attacks are expected as part of the bug bounty program, they can impair your ability to differentiate between an actual attack and an attack from a legitimate bug hunter. This creates an ideal opportunity for bona fide malicious actors to hide behind bug bounty programs while working to steal your data. When you combine this with the fact that it takes most organizations an average of roughly 200 days to detect a breach, the risk becomes even more apparent.

There's also the issue of GDPR. GDPR increases the value of personal data on the black market and to organizations alike. Under GDPR, if the personal data of a European citizen is breached, the organization that suffered the breach can face heavy fines, penalties, and more. Article 4 of the GDPR defines a personal data breach as "a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of, or access to, personal data transmitted, stored or otherwise processed". While bug bounty programs target configurations, systems, and implementations, they do not incentivize bug hunters to go after personal data. Because of GDPR, however, a malicious bug hunter who exploits a vulnerability that discloses personal data (accidentally or not) may be incentivized to ransom the finding for a higher dollar value. Likewise, organizations might be incentivized to pay the ransom and report it as a bounty to avoid having to notify their Data Protection Authority ("DPA") as GDPR requires.

On a positive note, many of our customers use bug bounty programs in tandem with our Realistic Threat Penetration Testing services. Customers who use bug bounty programs have far fewer low-hanging-fruit vulnerabilities than those who don't. In fact, we are confident that bug bounty programs are markedly more effective at finding bugs than automated vulnerability scanning could ever be. They are also more effective than penetration testing vendors who deliver services based on the product of automated vulnerability scans. When compared to a research-driven penetration test, however, bug bounty programs pale in comparison.


How to Price a Penetration Test

How Much Should You Spend On Penetration Testing Services?

The most common question we're asked is "how much will it cost for you to deliver a penetration test to us?" Rather than responding to that question with the same answer every time, we thought it might be best to write a detailed yet simple blog entry on the subject.

This video provides an overview of the two most common methodologies for determining penetration testing cost.

Workload and Penetration Testing Cost

The price of a genuine penetration test is based on the amount of human work required to deliver the test successfully. The amount of human work depends on the complexity of the infrastructure to be tested, and that complexity depends on the configuration of each individual network-connected device. A network-connected device is anything from a server, switch, or firewall to a telephone. Each unique network-connected device provides different services that serve different purposes, and because each service is different, each requires a different amount of time to test correctly. It is for this exact reason that a genuine penetration test cannot be priced based on the number of IP addresses or the number of devices. It does not make sense to charge $X per IP address when each IP address requires a different amount of work to test properly. Instead, the only correct way to price a genuine penetration test is to assess the time requirements and derive the workload from there.

At Netragard the workload for an engagement is based on science, not an arbitrary price per IP. Our pricing is based on something that we call Time Per Parameter (TPP). The TPP is the amount of time that a Netragard researcher will spend testing each parameter. A parameter is either a service provided by a network-connected device or a testable variable within a web application. Higher-threat penetration tests have a higher TPP while more basic penetration tests have a lower TPP. This makes sense because the more time we spend trying to hack something, the higher the chances of success. Netragard's base LEVEL 1 penetration test is our most simple offering and allows for a TPP of 5 minutes. Our LEVEL 2 penetration test is far more advanced than LEVEL 1 and allows for a TPP of up to 35 minutes. Our LEVEL 3 penetration test is possibly the most advanced threat penetration test offered in the industry and is designed to produce a true nation-state level threat (not that APT junk). Our LEVEL 3 penetration test has no limit on TPP or on offensive capabilities.
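As a rough illustration of how a TPP figure translates into workload and price, consider the sketch below. Only the LEVEL 1 and LEVEL 2 TPP values come from the description above; the parameter counts and hourly rate are hypothetical, and this is not our actual scoping tooling:

```python
# Hypothetical Time-Per-Parameter (TPP) workload estimate.
TPP_MINUTES = {"LEVEL 1": 5, "LEVEL 2": 35}  # per the service levels described above

def estimate_engagement(parameters_per_service, level, hourly_rate_usd):
    """parameters_per_service: testable parameters exposed by each service or app."""
    total_parameters = sum(parameters_per_service)
    hours = total_parameters * TPP_MINUTES[level] / 60.0
    return hours, hours * hourly_rate_usd

# Example: three services exposing 12, 4, and 30 testable parameters, tested at LEVEL 2.
hours, price = estimate_engagement([12, 4, 30], "LEVEL 2", hourly_rate_usd=250)
print(f"{hours:.1f} hours of testing, roughly ${price:,.2f}")  # 26.8 hours, ~$6,708.33
```

The point of the exercise is that the price tracks the number of things that actually need testing, not the number of IP addresses on a scoping sheet.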

The details of the methodology that we use to calculate TPP are something that we share with our customers but not our competitors (sorry guys). What we will tell you is that the count-based pricing methodology used by our competition is a far cry from our TPP-based pricing. Here's one example of how our pricing methodology saved one of our customers $49,000.00.

We were recently competing for a penetration testing engagement for a foreign government department. This department had received a quote for a penetration test from another penetration testing vendor that also creates software used by penetration testers. When we asked the department how much the competing quote came in at, they told us roughly $70,000.00. When we asked whether that price was within their budget, they said yes. Our last question was about the competitor's pricing methodology. We asked the department, "did the competitor price based on how many IP addresses you have, or did they do a detailed workload assessment?" The department told us that the competitor had priced based on the number of IP addresses, and that the number was 64.

At that moment we understood that we were competing against a vendor that was offering a Vetted Vulnerability Scan and not a Genuine Penetration Test. If a vendor prices an engagement based on the number of IP addresses involved, then that vendor is not taking actual workload into consideration. For example, a vendor that charges $500.00 per IP address for 10 IP addresses would price the engagement at $5,000.00. What happens if those 10 IP addresses require 1,000 man-hours of work to test because they are exceedingly complex? Will the vendor really find a penetration tester to work for $5.00 an hour? Of course not. The vendor will instead deliver a Vetted Vulnerability Scan and call it a Penetration Test. They will scan the 10 IP addresses, vet the results produced by the scanner, exploit things where possible, and then produce a report. Moreover, they will call the process of vetting "manual testing," which is a blatant lie. Any vendor that does not properly evaluate workload requirements must use a Vetted Vulnerability Scan methodology to avoid running financially negative on the project.
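A quick sanity check makes the problem obvious, using only the figures from the example above:

```python
# Per-IP pricing ignores workload entirely.
quoted_price = 500.00 * 10          # $5,000.00 for 10 IP addresses
hours_actually_required = 1000      # exceedingly complex hosts, per the example above
implied_rate = quoted_price / hours_actually_required
print(f"Implied rate: ${implied_rate:.2f} per hour")  # $5.00/hour, before the vendor's own margin
```

No vendor can staff real testers at that rate, so something other than real testing gets delivered.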

The inverse of this (which is far more common) is what happened with the foreign government department. While our competitor priced the engagement at $1,093.75 per IP for 64 IPs, which equates to $70,000.00, we priced it at $21,000.00 for 11 IPs (each of which offered between 2 and 6 moderately complex Internet-connectable services). Put more plainly, our competitor wanted to charge the department $57,968.75 for testing 53 IP addresses that were not even in use, which equates to charging for absolutely nothing. When we presented our pricing to the department, we broke our costs down to show the exact price we were charging per Internet-connectable service. Needless to say, the customer was impressed by our pricing, shocked by our competitor, and we won the deal.
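The arithmetic behind that comparison, using only the figures quoted above:

```python
price_per_ip = 1093.75
quoted_ips, in_use_ips = 64, 11

competitor_total = price_per_ip * quoted_ips                    # $70,000.00
charged_for_nothing = price_per_ip * (quoted_ips - in_use_ips)  # $57,968.75
print(f"${competitor_total:,.2f} total, of which ${charged_for_nothing:,.2f} "
      f"covers {quoted_ips - in_use_ips} IPs with no services at all")
```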

While we wish we could tell you that being charged for nothing is a rare occurrence, it isn't. If you've received a penetration test, then you've probably been charged for nothing. Another recent example involves a small company that needed penetration testing for PCI. They approached us saying that they had already received quotes from other vendors and that the quotes were all in the thousands of dollars. We explained that we would evaluate their network and determine the workload requirements. When we did, we found that they had zero responding IP addresses and zero Internet-connectable services, which equates to zero seconds of work. Instead of charging them for anything, we simply issued them a certificate stating that, as of the date of testing, no attack surface was present. They were so surprised by our honesty that they wrote us an awesome testimonial about their experience with us.

Finally, our TPP-based pricing doesn't need to be expensive. In fact, we can deliver a penetration test to any customer with any budget, because we will adjust the engagement's TPP to match that budget. If your budget only allows for a $10,000.00 spend, then we will reduce the TPP to bring the project cost in line with your budgetary requirements. Just remember that reducing the TPP means our penetration testers will spend less time testing each parameter, and increasing the TPP means they will spend more time. The more time, the higher the quality. If we set your TPP at 10 minutes but encounter services that require only a few seconds to test, we will allocate the leftover time to other services that require more time. Doing this ensures that complex services are tested very thoroughly.
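Conceptually, fitting the TPP to a budget and then reallocating leftover minutes looks something like the sketch below. This is a simplified illustration under assumed inputs (the hourly rate, parameter counts, and per-parameter time needs are all hypothetical), not our actual scoping methodology:

```python
def tpp_for_budget(budget_usd, hourly_rate_usd, parameter_count):
    """Minutes of testing per parameter that a given budget buys."""
    total_minutes = (budget_usd / hourly_rate_usd) * 60.0
    return total_minutes / parameter_count

def allocate_minutes(needed_minutes, tpp_minutes):
    """needed_minutes: dict of parameter -> minutes it actually needs.
    Quick parameters give unused allotment back; complex ones absorb it."""
    allocated = {p: min(need, tpp_minutes) for p, need in needed_minutes.items()}
    leftover = sum(tpp_minutes - m for m in allocated.values())
    shortfalls = {p: need - tpp_minutes for p, need in needed_minutes.items() if need > tpp_minutes}
    total_shortfall = sum(shortfalls.values())
    for p, shortfall in shortfalls.items():
        # Hand the reclaimed minutes to the parameters that need them most.
        allocated[p] += leftover * shortfall / total_shortfall
    return allocated

tpp = tpp_for_budget(budget_usd=10_000, hourly_rate_usd=250, parameter_count=120)
print(f"A $10,000 budget supports a TPP of about {tpp:.0f} minutes across 120 parameters")
print(allocate_minutes({"login form": 45, "static page": 2, "file upload": 90}, tpp_minutes=10))
```

The total time spent never exceeds what the budget bought; it simply shifts from trivial parameters to complex ones.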


Netragard’s Hacker Interface Device (HID). You can get hacked by your mouse.

We (Netragard) recently completed an engagement for a client with a rather restricted scope. The scope included a single IP address bound to a firewall that offered no services whatsoever. It also excluded the use of social attack vectors based on social networks, telephone, or email, and disallowed any physical access to the campus and surrounding areas. With all of these limitations in place, we were tasked with penetrating the network from the perspective of a remote threat, and we succeeded.

The first method of attack that people might think of when faced with a challenge like this is the traditional autorun malware on a USB stick: just mail a bunch of sticks to different people within the target company and wait for someone to plug one in; when they do, it's game over, they're infected. That trick worked great back in the day, but not so much anymore. The first issue is that most people are well aware of the USB stick threat thanks to the many published articles on the subject. The second is that more and more companies are pushing out group policies that disable the autorun feature on Windows systems. Those two things don't eliminate the USB stick threat, but they certainly have a significant impact on its level of success, and we wanted something more reliable.

Enter PRION, the evil HID.


A prion is an infectious agent composed of a protein in a misfolded form. In our case the prion isn't composed of proteins but of electronics, which include a Teensy microcontroller, a micro USB hub (a small one from RadioShack), a mini USB cable (we needed the ends), a micro flash drive (made from one of our Netragard USB streamers), some home-grown malware (certainly not designed to be destructive), and a USB device like a mouse, missile turret, dancing stripper, chameleon, or whatever else someone might be tempted to plug in. When they do plug it in, they will be infected by our custom malware and we will use that point of infection to compromise the rest of the network.

For the purposes of this engagement we chose to use a fancy Logitech USB mouse as our Hacker Interface Device / attack platform. To turn our Logitech Human Interface Device into a Hacker Interface Device, we had to make some modifications. The first step, of course, was to remove the screw from the bottom of the mouse and pop it open. Once we did that, we disconnected the USB cable from the circuit board in the mouse and set it to the side. Then we used a Dremel tool to shave away the extra plastic on the inside cover of the mouse (there were all sorts of tabs that we could sacrifice). The removal of the plastic tabs was to make room for the new hardware.

Once the top of the mouse was gutted and all the unnecessary parts removed, we began to focus on the USB hub. The first thing we had to do was extract the board from the hub. Doing that is a lot harder than it sounds because the hub that we chose was glued together and we didn't want to risk breaking the internals by being too rough. After about 15 minutes of prying with a small screwdriver (and repeated accidental hand stabbing) we were able to pull the board out of the plastic housing. We then stripped the female USB connectors off of the board by heating their respective pins to melt the solder (careful not to burn the board). Once those were extracted we were left with a naked USB hub circuit board that measured about half an inch long and was no wider than a small Bic lighter.

With the mouse and the USB board prepared, we began the process of soldering. The first thing we did was take the mini USB cable and cut one of the ends off, leaving about 1 inch of wire near the connector. Then we stripped all the plastic off of the connector and stripped a small amount of insulation from the 4 internal wires. We soldered those four wires to the USB board, making sure to follow the right pinout pattern. This is the cable that will plug into the Teensy's mini USB port when we insert the Teensy microcontroller.

Once that was finished, we took the USB cable that came with the mouse and cut the circuit board connector off of the end, leaving 2 inches of wire attached. We stripped the tips of the 4 wires still attached to the connector and soldered those to the USB hub, making sure to follow the right pinout patterns mentioned above. This is an important cable, as it's the one that connects the USB hub to the mouse. If this cable is not soldered properly and the connections fail, then the mouse will not work. We then took the other piece of the mouse cable (the longer part) and soldered that to the USB board. This is the cable that will connect the mouse to the USB port on the computer.

At this point we had three cables soldered to the USB hub. Just to recap, those cables are the mouse connector cable, the cable that goes from the mouse to the computer, and the mini USB adapter cable for the Teensy device. The next and most challenging part was to solder the USB flash drive to the USB hub. This is important because the USB flash drive is where we store our malware. If the drive isn't soldered on properly, then we won't be able to store our malware on it and the attack would be mostly moot. (We say mostly because we could still instruct the mouse to fetch the malware from a website, but that's not covert.)

To solder the flash drive to the USB hub, we cut about 2 inches of cable from the mini USB connector that we stole the end from previously. We stripped the ends of the wires in the cable and carefully soldered them to the correct points on the flash drive. Once that was done, we soldered the other ends of the cable to the USB hub. At that point we had everything soldered together and had to fit it all back into the mouse. Assembly was pretty easy because we were careful to use as little material as possible while still giving us the flexibility that we needed. We wrapped the boards and wires in single layers of electrical tape to avoid any shorts. Once everything was plugged in, we tested the devices. The USB drive mounted, the Teensy card was programmable, and the mouse worked.

Time to give prion the ability to infect…

We learned that the client was using McAfee as their antivirus solution because one of their employees was complaining about it on Facebook. Remember, we weren't allowed to use social networks for social engineering, but we certainly were allowed to do reconnaissance against social networks. With McAfee in our sights, we set out to create custom malware for the client (as we do for any client and their respective antivirus solution when needed). We wanted our malware to connect back to Metasploit because we love the functionality, and we also wanted the capabilities provided by Meterpreter, but we needed more than that. We needed our malware to be fully undetectable and to subvert the "Do you want to allow this connection" dialogue box entirely. You can't do that with encoding…

To make this happen we created a Meterpreter C array with the windows/meterpreter/reverse_tcp_dns payload. We then took that C array, chopped it up, and injected it into our own wrapper of sorts. The wrapper used an undocumented (0-day) technique to completely subvert the dialogue box and to evade detection by McAfee. When we ran our tests on a machine running McAfee, the malware ran without a hitch. We should point out that our ability to evade McAfee isn't an indication of its quality; we can evade any antivirus solution using similar custom attack methodologies. After all, it's impossible to detect something if you don't know what you are looking for. (It also helps to have a team of researchers at our disposal.)

Once we had our malware built, we loaded it onto the flash drive that we had soldered into our mouse. Then we wrote some code for the Teensy microcontroller to launch the malware 60 seconds after the start of user activity. Much of the code was taken from Adrian Crenshaw's website; he deserves credit for giving us this idea in the first place. After a little bit of debugging, our evil mouse named prion was working flawlessly.

Usage: Plug mouse into computer, get pwned.

The last and final step was to ship the mouse to our customer. One of the most important aspects of this was repackaging the mouse in its original packaging so that it appeared unopened. Then we used Jigsaw to purchase a list of our client's employees. We did a bit of reconnaissance on each employee and found a target that looked ideal. We packaged the mouse to look like a promotional gadget, added fake marketing flyers, etc., and shipped it. Sure enough, three days later the mouse called home.

Inside The Brains Of A Professional Bank Hacking Team

Originally posted on Forbes.com. Read the original article here.
We were recently hired to perform an interesting Advanced Stealth Penetration test for a mid-sized bank. The goal of the penetration test was to penetrate the bank's IT infrastructure and see how far we could get without detection. This is a bit different from most penetration tests, as we weren't tasked with identifying risks so much as with demonstrating vulnerability.

The first step of any penetration test is reconnaissance. Reconnaissance is the military term for the passive collection of intelligence about an enemy prior to attacking that enemy. It is nearly impossible to attack an enemy effectively without first obtaining actionable intelligence about them. Failure to collect good intelligence can result in significant casualties, unnecessary collateral damage, and a completely failed attack. In penetration testing, the equivalent damages are downed systems and lost revenue.

Because this engagement required stealth, we focused on social attack vectors and social reconnaissance. We first targeted Facebook with our "Facebook from the hacker's perspective" methodology. That enabled us to map relationships between employees, vendors, friends, family, etc. It also enabled us to identify key people in Accounts Receivable / Accounts Payable ("AR/AP").

In addition to Facebook, we focused on websites like Monster, Dice, HotJobs, LinkedIn, etc.

We identified a few IT-related job openings that disclosed interesting and useful technical information about the bank. That information included, but was not limited to, which Intrusion Detection technologies had been deployed, what the primary operating systems were for desktops and servers, and the fact that they were a Cisco shop.

Naturally, we thought it was also a good idea to apply for the job to see what else we could learn. To do that, we created a fake resume designed to be the "perfect fit" for a "Sr. IT Security" position (one of the opportunities available). Within one day of submitting our fake resume, we had a telephone screening call scheduled.

We started the screening call with the standard meet and greet and an explanation of why we were interested in the opportunity. Once we felt that the conversation was flowing smoothly, we began to dig in a bit and ask various technology questions. In doing so, we learned which antivirus technologies were in use and what the policies were for controlling outbound network traffic.

That’s all that we needed!

Upon completion of our screening call, we had sufficient information to attempt stealth penetration with a high probability of success. The beauty is that we collected all of this information without sending a single packet to our customer’s network.

In summary we learned:

  • That the bank uses Windows XP for most desktops
  • Who some of the bank's vendors were (IT services)
  • The names and email addresses of people in AR/AP
  • What antivirus technology the bank uses
  • Information about the bank's traffic control policies

Based on the intelligence we collected, we decided that the ideal scenario for stealth penetration would be to embed an exploit in a PDF document and to send that PDF document to the bank's AR/AP department from the bank's trusted IT services provider. This attack was designed to exploit the trust that our customer had in their existing IT services provider.

When we created the PDF, we used the new reverse HTTPS payload that had recently been released by the Metasploit Project. (Previously we had been using similar but more complex techniques for encapsulating our reverse connections in HTTPS.)

We like reverse HTTPS connections for two reasons:

  • First, Intrusion Detection technologies cannot inspect encrypted network traffic. Using an encrypted reverse connection ensures that we are protected from the prying eyes of Intrusion Detection Systems and less likely to trip alarms.
  • Second, most companies allow outbound HTTPS (port 443) because it's required to view many websites. The reverse HTTPS payload that we used mimics normal web browsing behavior and so is much less likely to set off any Intrusion Detection events.

Before we sent the PDF to our customer, we checked it against the same antivirus technology that they were using to ensure that it was not detected as malware or a virus. To evade the scanners we had to "pack" our pseudo-malware in such a way that it would not be detected. Once that was done and tested, we were ready to launch our attack.

When we sent the PDF to our customer, it didn't take long for the victim in AP/AR to open it; after all, it appeared to be a trusted invoice. Once it was opened, the victim's computer was compromised. That resulted in it establishing a reverse connection to our lab, which we then tunneled into to take control of the victim's computer (all via HTTPS).

Once we had control, our first order of business was to maintain access. To do this we installed our own backdoor technology onto the victim's computer. Our technology also used outbound HTTPS connections, but for authenticated command retrieval. That way, if our control connection to the victim's computer was lost, we could simply tell our backdoor to re-establish it.

The next order of business was to deploy our suite of tools on the compromised system and begin scoping out the internal network. We used selective ARP poisoning as the first method of internal reconnaissance. That proved to be very useful, as we were able to quickly identify VNC connections and capture VNC authentication packets. As it turned out, the VNC connections that we captured were being made to the Active Directory ("AD") server.

We were able to crack the VNC password using a VNC cracking tool. Once that happened, we were able to access the AD server and extract the server's SAM file. We then successfully cracked all of the passwords in that file, including the historical user passwords. Once the passwords were cracked, we found that the same credentials were used across multiple systems. As such, we were able to access not only desktops and servers, but also Cisco devices and more.

In summary, we were able to penetrate our customer's IT infrastructure and effectively take control of the entire infrastructure without being detected. We accomplished that by avoiding conventional methods of penetration and by using our own unorthodox yet obviously effective penetration methodologies.

This particular engagement was interesting in that our customer's goal was not to identify all points of risk, but to identify how deeply we could penetrate. Since the engagement, we've worked with that customer to help them create barriers for isolation in the event of a penetration. Since those barriers have been implemented, we haven't been able to penetrate as deeply.

As usual, if you have any questions or comments, please leave them on our blog. If there's anything you'd like us to write about, please email me the suggestion. And if I've made a grammatical mistake in here, well, I'm a hacker, not an English major.

Netragard, Inc. — “We protect you from people like us”.