Netragard’s thoughts on Pentesting IPv6 vs IPv4

We’ve heard a bit of “noise” about how IPv6 may impact network penetration testing and how networks may or may not be more secure because of IPv6.  Let’s be clear: anyone telling you that IPv6 makes penetration testing harder doesn’t understand the first thing about real penetration testing.
What’s the point of IPv6?
IPv6 was designed by the Internet Engineering Task Force (“IETF”) to address the issue of IPv4 address space exhaustion.  IPv6 uses a 128-bit address space while IPv4 uses only 32 bits.  This means that there are 2^128 possible addresses with IPv6, far more than the 2^32 addresses available with IPv4, and that there will be many more potential targets for a penetration tester to focus on when IPv6 becomes the norm.
What about increased security with IPv6?
The IPv6 specification mandates support for the Internet Protocol Security (“IPSec”) protocol suite, which is designed to secure IP communications by authenticating and encrypting each IP Packet. IPSec operates at the Internet Layer of the Internet Protocol suite and so differs from other security systems like the Secure Socket Layer, which operates at the application layer. This is the only significant security enhancement that IPv6 brings to the table and even this has little to no impact on penetration testing.
What some penetration testers are saying about IPv6.
Some penetration testers argue that IPv6 will make the job of a penetration tester more difficult because of the massive increase in potential targets, claiming that the process of discovering live targets will become impossibly time consuming. They argue that scanning each port/host in an entire IPv6 range could take as long as 13,800,523,054,961,500,000 years.  But why the hell would anyone waste their time testing potential targets when they could be testing actual live targets?
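For the curious, the arithmetic behind figures like that is easy to reproduce. Here’s a minimal back-of-the-envelope sketch in Python; the probe rate is purely an assumption for illustration, and different rate or per-port assumptions yield different (but similarly absurd) totals:

ADDRESSES_V6 = 2 ** 128          # total IPv6 addresses
ADDRESSES_V4 = 2 ** 32           # total IPv4 addresses
PROBES_PER_SECOND = 1_000_000    # assumed scanner throughput (hypothetical)

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

print(f"IPv4 sweep: {ADDRESSES_V4 / PROBES_PER_SECOND / 3600:.1f} hours")
print(f"IPv6 sweep: {ADDRESSES_V6 / PROBES_PER_SECOND / SECONDS_PER_YEAR:.2e} years")

At the assumed rate, all of IPv4 falls in about an hour while IPv6 comes out to roughly 10^25 years, which is the whole point: exhaustive sweeps stop being a meaningful discovery technique.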
The very first step in any penetration test is effective and efficient reconnaissance. Reconnaissance is the military term for the passive gathering of intelligence about an enemy prior to attacking.  There are countless ways to perform reconnaissance, all of which must be adapted to the particular engagement.  Failure to adapt will result in bad intelligence, as no two targets are exactly identical.
A small component of reconnaissance is target identification.  Target identification may or may not be done with scanning depending on the nature of the penetration test.  Specifically, it is impossible to deliver a true stealth / covert penetration test with automated scanners.  Likewise, it is very difficult to use a scanner to accurately identify targets in a network that is protected by reactive security systems (like a well configured IPS that supports black-listing).  So in many cases doing discovery by scanning an entire block of addresses is ineffective.
A few common methods for target identification include Social Engineering, DNS enumeration, or something as simple as asking the client to provide a list of targets.  Less common methods involve more aggressive social reconnaissance, continued reconnaissance after initial penetration, etc.  Either way, it will not take 13,800,523,054,961,500,000 years to identify all of the live and accessible targets in an IPv6 network if you know what you are doing.
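To make the DNS enumeration example concrete, here’s a minimal sketch using only Python’s standard library; the wordlist and example.com are placeholders, and on a real engagement the candidate names would come out of the reconnaissance phase rather than a canned list:

import socket

DOMAIN = "example.com"                      # placeholder target domain
CANDIDATES = ["www", "mail", "vpn", "dev"]  # hypothetical wordlist

for name in CANDIDATES:
    host = f"{name}.{DOMAIN}"
    try:
        # AF_INET6 restricts the lookup to AAAA (IPv6) records.
        infos = socket.getaddrinfo(host, None, socket.AF_INET6)
    except socket.gaierror:
        continue  # name doesn't resolve; move on
    for addr in {info[4][0] for info in infos}:
        print(host, "->", addr)

Every address this prints is a live, routable IPv6 target, no 2^128 sweep required.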
Additionally, penetration testing against 12 targets in an IPv6 network will take the same amount of time as testing 12 targets in an IPv4 network.  The number of real targets is what matters, not the number of potential targets.  It would be a ridiculous waste of time to test 2^128 IPv6 addresses when only 12 IP addresses are live, not to mention that the increase in time would likely translate to an increase in project cost.
So in reality, for those who are interested, hacking an IPv6 network won’t be any more or less difficult than hacking an IPv4 network.  Anyone that argues otherwise either doesn’t know what they are doing or they are looking to charge you more money for roughly the same amount of work.

Hacking your car for fun and profit.

Our CEO (Adriel Desautels) recently spoke at the Green Hills Software Elite Users Technology Summit regarding automotive hacking. During his presentation a number of reporters were taking photographs, recording audio, etc.   Of all of the articles that came out, one in particular caught our eye.  We made the front page of “Elektronik i Norden,” a Swedish technology magazine that focuses on hardware and embedded systems.
You can see the full article, “hacking a car,” here (it’s in Swedish, so you’ll probably want to translate it): http://www.webbkampanj.com/ein/1011/?page=1&mode=50&noConflict=1

What really surprised us during the presentation was how many people were in disbelief about the level of risk associated with cars built after 2007.

For example, it really isn’t all that hard to program a car to kill the driver.  In fact, it’s far too easy due to the overall lack of security in cars today.

Think of a car as an IT infrastructure.  All of the servers in the infrastructure are critical systems that control things like brakes, seat belts, door locks, engine timing, airbags, lights, the radio, the dashboard display, etc.  Instead of these systems being plugged into a switched network, they are plugged into a hub network lacking any segmentation, with no security to speak of.  The only real difference between the car network and your business network is that the car doesn’t have an Internet connection.
Enter the Chevrolet Volt, the first car to have its own IP address. Granted, we don’t yet know how the Volt’s IP address will be protected.  We don’t know if each car will have a public IP address or if the cars will be connected to a private network controlled by Chevy (or someone else).  What we do know is that the car will be able to reach out to the Internet, and so it will be vulnerable to client side attacks.

So what happens if someone is able to attack the car?

Realistically, if someone is able to hack into the car then they will be able to take full control over almost any component of it.  They can apply the brakes, accelerate the car, prevent the brakes from applying, kill (literally destroy) the engine, apply the brakes on one side of the car, lock the doors, pretension the seat belts, etc. For those of you who think this is Science Fiction, it isn’t.  Here’s one of many research papers that demonstrates the risks.

Why is this possible?

This is possible because people adopt technology too quickly and don’t stop to think about the risks; instead they are blinded by the convenience that it introduces.  We see this in all industries, not just automotive. IT managers, CIOs, CSOs, CEOs, etc. are always purchasing and deploying new technologies without really evaluating the risks.  In fact, just recently we had a client purchase a “secure email gateway” technology… it wasn’t too secure.  We were able to hack it and access every email on the system because it relied on outdated third party software.
Certainly another contributing factor is that most software developers write vulnerable and buggy code (sorry guys, but it’s true).  Their code isn’t written to be secure; it’s written to do a specific thing like handle network traffic, beep your horn, send emails, whatever.  Poor code + a lack of security awareness == high risks.
So what can you do?
Before you decide to adopt new technology make sure that you understand the benefits and the risks associated with the adoption.  If you’re not technical enough (most people aren’t) to do a low-level security evaluation then hire someone (a security researcher) to do it for you.  If you don’t, then you could very well be putting yourself and your customers at serious risk.


What If?

I recently participated in a panel at the BASC conference that was held at the Microsoft New England Research & Development (NERD) building at One Memorial Drive in Cambridge. One of the questions that surfaced inspired me to write this article.
While there are more security solutions available today than ever before, are we actually becoming more secure, or is the gap growing? The short answer is both.  The security industry is reactive in that it can only respond to threats; it cannot predict them.  This is because threats are defined by malicious hackers and technology savvy criminals, not by the security industry.  Antivirus technology, for example, was created as a response to viruses that were being written by hackers. So yes, security is getting better, technologies are advancing, and the gap is still growing rapidly.  One major part of the problem is that people adopt new technologies too quickly.  They don’t stop to question those technologies from the perspective of a hacker…


A prime example of this problem is clearly demonstrated within the automotive industry. The computer systems in automobiles were not designed to withstand any sort of real hacker threat.  This wasn’t much of a problem at first because automotive computer systems weren’t Internet connected, and at first they didn’t have direct control over things like the brakes and the accelerator.  That all changed as the automotive industry advanced and as people wanted the convenience that computer technology could bring to the table.  Now automotive computer systems directly control critical automotive functions, and a hacker can interface with the computer system and cause potentially catastrophic failures.  Despite this, the problem wasn’t perceived as particularly high risk because accessing the computer system required physical access to the car (or close proximity for TPMS-like hacks). That is all going to change when the Chevy Volt hits the streets, since the Chevy Volt actually has its own IP address and is network connected.  Is the risk really worth the convenience?
Another good example of how we adopt technology too quickly is demonstrated in critical infrastructure (power, water, communications, etc.).  Just like in the automotive industry, critical systems were not initially designed to be plugged into the Internet. These are the systems that control the water coolant levels in our nuclear power plants or the mixtures of chemicals in water treatment plants, etc.  Some of these critical systems were designed in the 1960s, when the concept of the “hacker threat” didn’t exist.  Other systems are very modern, but even those aren’t designed to be secure as much as they are designed to be functional.  Back in the day, power plants, water treatment plants, etc. were air-gapped to isolate them from potentially harmful environments.  But as the Internet offered more and more convenience, the air-gaps that once existed have become almost extinct.  Now our critical systems are connected to the Internet and exposed to real hacker threats; and do they get hacked?  Yes.  Again, is the risk really worth the convenience?
Of course an example that everyone can relate to is business networks.  Business networks are constantly evolving and new technologies are continually being adopted without proper vetting.  These technologies often include web applications, security technologies, backup technologies, content management systems, etc.  These technologies usually promise to make things easier and thus save time which equates to saving money.  For example, the other week we were delivering a penetration test for a pharmaceutical company.  This company had a video conference system setup so that they could speak with remote offices and have “face to face” conversations.  They loved the technology because it made for more productive meetings and we loved the technology because it was easy to hack.
Despite the fact that the security industry is evolving at a rapid pace, it can’t keep up with the volume of people that are prematurely adopting new and untested technologies. This adoption causes the gap between good security and security risks to grow. To help close the gap, consumers need to start challenging their vendors.  They need to ask their vendors to demonstrate the security of their technology and maybe even to make some sort of a guarantee about it. There are some solid companies out there that offer services designed to enhance the security of technology products.  One such company is Veracode (no affiliation with Netragard).


Penetration Testing – What’s that?

It amazes me that most of the “security companies” that offer penetration testing services don’t know what penetration testing is. Specifically, they don’t deliver penetration tests even though they call their services penetration testing services. In most cases their customers think that they’re receiving penetration tests but instead they’re receiving the lesser quality vulnerability assessment service.
When customers are looking to purchase penetration testing services they should receive penetration testing services. Likewise, when they’re looking to purchase vulnerability assessment services they should receive vulnerability assessment services. Unfortunately, customers won’t know what they’re receiving unless they clearly understand what those services are and how they are defined. The services are not interchangeable; they are entirely different.
The English dictionary defines a Penetration Test as a method for determining the presence of points where something can make its way through or into something else. Penetration testing is not unique to Information Security and is used by a wide variety of other industries.  For example, penetration testing is used to test armor by exposing the armor to a level of threat that is usually slightly higher in intensity than what it will face in the real world. If the armor is defeated by the threat then it is improved upon until it can withstand the threat.
The standard product of penetration testing is a report that identifies the points where penetration is possible.  If the service that was delivered was a real penetration test then the report cannot contain any false positives. You either penetrate or you don’t; there is no grey zone. If the report contains false positives then the service that was delivered was not a true penetration test and was likely a vulnerability assessment, which is an entirely different and lower quality service.
A Vulnerability Assessment as defined by the English dictionary is a best estimate as to how susceptible something is to harm or attack. Vulnerability assessments are often used where penetration testing is too risky. Specifically, a vulnerability assessment might be used to assess the Eiffel Tower, the Statue of Liberty, the strength of a bridge, etc.   The important difference between Penetration Tests and Vulnerability Assessments is that Vulnerability Assessments do not prove that vulnerabilities exist but instead provide a best guess as denoted by the word “assessment”.
With regards to IT Security, Vulnerability Assessments test at a lower than real world threat level.  This is because Vulnerability Assessments do not exploit the vulnerabilities that they identify yet malicious hackers do.  Vulnerability Assessments alone are inadequate when it comes to providing deep and effective testing services but are useful for performing quarterly maintenance and checkups.
Lastly, don’t allow your vendor to confuse methodology with service definition.  Methodology defines how a service is delivered, not what a service is or from what perspective.  With regards to security testing there are only two core services: Vulnerability Assessments and Penetration Tests.  You can apply those services to Web Applications, Networks, People, Physical Locations, WiFi, etc.  For example, you can receive a Web Application Penetration Test, or a Network Vulnerability Assessment.  You wouldn’t need to receive both a Vulnerability Assessment and a Penetration Test against the same target, as that would be redundant.  A Penetration Test covers the same ground as a Vulnerability Assessment, only with more depth and accuracy.

Define Perimeter

It’s surprising to us that people still define their network perimeter by their firewall, which is often the perceived demarcation point between the Internet and the Local Area Network (LAN).  The fact of the matter is that the real demarcation point has nothing to do with the firewall at all.   In fact, these days the real demarcation point has more to do with the human element (you) than with technology in general.
I bring this up because the issue surfaces frequently during penetration testing engagements.  Specifically, customers want penetration testing services against their perimeter, but they don’t actually know what their perimeter is.  Once we explain it to them, their perspective on what a penetration test is changes significantly and forever.  Their perimeter is defined by any point that is accessible to an Internet based attacker, but what does that really mean?
Clearly firewalls, web servers, email servers, ftp servers, etc. are accessible to an Internet based attacker.  But what about all of those services that businesses use on a daily basis that reach out to the Internet to collect data?  What about what you are doing right now?  You are likely reading this post in your web browser, which means that you’ve reached out from the safety of your LAN to our web server.  What if I told you that this blog entry was specifically designed to exploit a vulnerability in your web browser and compromise your system?  Yes, by reading this blog entry your computer just got hacked.  (Not really, but imagine.)
Truth be told, your web browser is not the only technology that is vulnerable to this sort of attack.  In fact, this is what defines a client side attack.  In this case the client is your web browser, but in some cases it might be your MP3 player, your email client, your smart phone, your PDF reader, or maybe even the update functionality in your anti-virus software.  Anything and everything that reaches out to third party networks from your network is a component of your network perimeter and each of those things helps to define your total attack surface. If you’re not including those types of tests when you receive penetration tests then you’re really only testing a very small fraction of your total attack surface.  Considering the number of businesses that are compromised on a daily basis with client side attacks, is that really something that you can afford to overlook?  Just an idea…


The Human Vulnerability

It seems to us that one of the biggest threats that businesses face today is socially augmented malware attacks. These attacks have an extremely high degree of success because they target and exploit the human element. Specifically, it doesn’t matter how many protective technology layers you have in place if the people that you’ve hired are putting you at risk, and they are.

Case in point: the “here you have” worm, which propagates predominantly via e-mail and promises the recipient access to PDF documents or even pornographic material. This specific worm compromised major organizations such as NASA, ABC/Disney, Comcast, Google, Coca-Cola, etc. How much money do you think those companies spend on security technology over a one-year period? How much good did it do at protecting them from the risks introduced by the human element? (Hint: none)

Here at Netragard we have a unique perspective on the issue of malware attacks because we offer pseudo-malware testing services. Our pseudo-malware module, when activated, authorizes us to test our clients with highly customized, safe, controlled, and homegrown pseudo-malware variants. To the best of our knowledge we are the only penetration testing company to offer such a service (and no, we’re not talking about the meterpreter).

Attack delivery usually involves attaching our pseudo-malware to emails or binding the pseudo-malware to PDF documents or other similar file types. In all cases we make it a point to pack (or crypt) our pseudo-malware so that it doesn’t get detected by antivirus technology (see this blog entry on bypassing antivirus). Once the malware is activated, it establishes an encrypted connection back to our offices and provides us with full control over the victim computer. Full control means access to the software and hardware including but not limited to keyboard, mouse, microphone and even the camera. (Sometimes we even deliver our attacks via websites like this one by embedding attacks into links).

So how easy is it to penetrate a business using pseudo-malware? Well, in truth, it’s really easy. Just last month we finished delivering an advanced external penetration test for one of our more secure customers. We began crafting an email that contained our pseudo-malware attachment and accidentally hit the send button without any message content. Within 45 seconds of sending that otherwise blank email, we had 15 inbound connections from 15 newly infected client computer systems. That means that at least 15 employees tried to open our pseudo-malware attachment despite the fact that the email was blank! Imagine the degree of success that is possible with a well-crafted email.

One of the computer systems that we were able to compromise was running a service with domain admin privileges. We were able to use that computer system (impersonation attack involved) to create an account for ourselves on the domain (which happened to be the root domain). From there we were able to compromise the client’s core infrastructure (switches, firewalls, etc) due to a password file that we found sitting on someone’s desktop (thank you for that). Once that was done, there really wasn’t much more that we had left to do, it was game over.

The fact of the matter is that there’s nothing new about taking advantage of people that are willing to do stupid things. But is it really stupidity, or is it just that employees don’t have a sense of accountability? Our experience tells us that in most cases it’s a lack of accountability that’s the culprit.

When we compromise a customer using pseudo-malware, one of the recommendations that we make to them is that they enforce policies by holding employees accountable for violations. We think that the best way to do that is to require employees to read a well-crafted policy and then to take a quiz based on that policy. When they pass the quiz they should be required to sign a simple agreement that states that they have read the policy, understood the policy, and agree to be held accountable for any violations that they make against the policy.

In our experience there is no better security technology than a paranoid human that is afraid of being held accountable for doing anything irresponsible (aka: violating the policy). When people are held accountable for something like security they tend to change their overall attitude towards anything that might negatively affect it. The result is a significantly reduced attack surface. If all organizations took this strict approach to policy enforcement then worms like the “here you have” worm wouldn’t be such a big success.

Compare the cost and benefit of enforcing a strict and carefully designed security policy to the cost and benefit of expensive (and largely ineffective) security technologies. Which do you think will do a better job at protecting your business from real threats? It’s much more difficult to hack a network that is managed by people who are held accountable for its security than it is to hack a network that is protected by technology alone.

So in the end there’s really nothing special about the “here you have” worm. It’s just another example of how malicious hackers are exploiting the same human vulnerability using an ever so slightly different malware variant. Antivirus technology certainly won’t save you and neither will other expensive technology solutions, but a well-crafted, cost-effective security policy just might do the trick.

It’s important to remember that well written security policies don’t only impact human behavior; they generally result in better management of systems, which translates to better technological security. The benefits are significant, and the overall cost is small in comparison.


That nice, new computerized car you just bought could be hackable

Link: https://www.cnet.com/tech/services-and-software/cars-the-next-hacking-frontier/

Of course, your car is probably not a high-priority target for most malicious hackers. But security experts tell CNET that car hacking is starting to move from the realm of the theoretical to reality, thanks to new wireless technologies and ever more dependence on computers to make cars safer, more energy efficient, and modern.

“Now there are computerized systems and they have control over critical components of cars like gas, brakes, etc.,” said Adriel Desautels, chief technology officer and president of Netragard, which does vulnerability assessments and penetration testing on all kinds of systems. “There is a premature reliance on technology.”

Often the innovations are designed to improve the safety of the cars. For instance, after a recall of Firestone tires that were failing in Fords in 2000, Congress passed the TREAD (Transportation Recall Enhancement, Accountability and Documentation) Act that required that tire pressure monitoring systems (TPMS) be installed in new cars to alert drivers if a tire is under-inflated.

Wireless tire pressure monitoring systems, which also were touted as a way to increase fuel economy, communicate via a radio frequency transmitter to a tire pressure control unit that sends commands to the central car computer over the Controller-Area Network (CAN). The CAN bus, which allows electronics to communicate with each other via the On-Board Diagnostics systems (OBD-II), is then able to trigger a warning message on the vehicle dashboard.
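For readers who want to see what “talking on the CAN bus” actually looks like, here is a minimal sketch using the third-party python-can library, assuming a Linux SocketCAN interface named can0. The frame shown is a standard OBD-II query (mode 0x01, PID 0x0D, vehicle speed) sent to the broadcast diagnostic ID 0x7DF; the key observation is that nothing on the bus authenticates who sent it:

import can  # third-party python-can library

# Open a Linux SocketCAN interface; "can0" is a placeholder channel name.
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Standard OBD-II request: 2 data bytes follow, mode 0x01, PID 0x0D (speed).
msg = can.Message(arbitration_id=0x7DF,
                  data=[0x02, 0x01, 0x0D, 0x00, 0x00, 0x00, 0x00, 0x00],
                  is_extended_id=False)
bus.send(msg)

# Any node on the bus can observe any frame, including the ECU's reply.
print(bus.recv(timeout=1.0))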

Researchers at the University of South Carolina and Rutgers University tested two tire pressure monitoring systems and found the security to be lacking. They were able to turn the low-tire-pressure warning lights on and off from another car traveling at highway speeds 40 meters (about 130 feet) away, using low-cost equipment.

“While spoofing low-tire-pressure readings does not appear to be critical at first, it will lead to a dashboard warning and will likely cause the driver to pull over and inspect the tire,” said the report. “This presents ample opportunities for mischief and criminal activities, if past experience is any indication.”

“TPMS is a major safety system on cars. It’s required by law, but it’s insecure,” said Travis Taylor, one of the researchers who worked on the report.

“This can be a problem when considering other wireless systems added to cars. What does that mean about future systems?”

The researchers do not intend to be alarmist; they’re merely trying to figure out what the security holes are and to alert the industry to them so they can be fixed, said Wenyuan Xu, another researcher on the project. “We are trying to raise awareness before things get really serious,” she said.

Another report in May highlighted other risks with the increased use of computers coordinated via internal car networks. Researchers from the University of Washington and University of California, San Diego, tested how easy it would be to compromise a system by connecting a laptop to the onboard diagnostics port that they then wirelessly controlled via a second laptop in another car. Thus, they were able to remotely lock the brakes and the engine, change the speedometer display, as well as turn on the radio and the heat and honk the horn.

Granted, the researchers needed to have physical access to the inside of the car to accomplish the attack. Although that minimizes the likelihood of an attack, it’s not unthinkable to imagine someone getting access to a car dropped off at the mechanic or parking valet.

“The attack surface for modern automobiles is growing swiftly as more sophisticated services and communications features are incorporated into vehicles,” that report (PDF) said. “In the United States, the federally-mandated On-Board Diagnostics port, under the dash in virtually all modern vehicles, provides direct and standard access to internal automotive networks. User-upgradable subsystems such as audio players are routinely attached to these same internal networks, as are a variety of short-range wireless devices (Bluetooth, wireless tire pressure sensors, etc.).”

Engine Control Units
The ubiquitous Engine Control Units themselves started arriving in cars in the late 1970s as a result of the California Clean Air Act and initially were designed to boost fuel efficiency and reduce pollution by adjusting the fuel and oxygen mixture before combustion, the paper said. “Since then, such systems have been integrated into virtually every aspect of a car’s functioning and diagnostics, including the throttle, transmission, brakes, passenger climate and lighting controls, external lights, entertainment, and so on,” the report said.

It’s not just that there are so many embedded computers, it’s that safety critical systems are not isolated from non-safety critical systems, such as entertainment systems, but are “bridged” together to enable “subtle” interactions, according to the report. In addition, automakers are linking Engine Control Units with outside networks like global positioning systems. GM’s OnStar system, for example, can detect problems with systems in the car and warn drivers, place emergency calls, and even allow OnStar personnel to remotely unlock cars or stop them, the report said.

In an article entitled “Smart Phone + Car = Stupid?” on the EETimes site in late July, Dave Kleidermacher noted that GM is adding smartphone connectivity to most of its 2011 cars via OnStar. “For the first time, engines can now be started and doors locked by ordinary consumers, from anywhere on the planet with a cell signal,” he wrote.

Car manufacturers need to design the systems with security in mind, said Kleidermacher, who is chief technology officer at Green Hills Software, which builds operating system software that goes into cars and other embedded systems.

“You can not retrofit high-level security to a system that wasn’t designed for it,” he told CNET. “People are building this sophisticated software into cars and not designing security in it from the ground up, and that’s a recipe for disaster.”

Representatives from GM OnStar were not available for comment late last week or this week, a spokesman said.

“Technology in cars is not designed to be secure because there’s no perceived threat. They don’t think someone is going to hack a car like they’re going to hack a bank,” said Desautels of Netragard. “For the interim, network security in cars won’t be a primary concern for manufacturers. But once they get connected to the Internet and have IP addresses, I think they’ll be targeted just for fun.”

The threat is primarily theoretical at this point for a number of reasons. First, there isn’t the same financial incentive to hacking cars as there is to hacking online bank accounts. Secondly, there isn’t one dominant platform used in cars that can give attackers the same bang for their buck to target as there is on personal computers.

“The risks are certainly increasing because there are more and more computers in the car, but it will be much tougher to (attack) than with the PC,” said Egil Juliussen, a principal analyst at market researcher firm iSuppli. “There is no equivalent to Windows in the car, at least not yet, so (a hacker) will be dealing with a lot of different systems and have to have some knowledge about each one. It doesn’t mean a determined hacker couldn’t do it.”

But Juliussen said drivers don’t need to worry about anything right now. “This is not a problem this year or next year,” he said. “It’s five years down the road, but the way to solve it is to build security into the systems now.”

Infotainment systems
In the meantime, the innovations in mobile communications and entertainment aren’t limited to smartphones and iPads. People want to use their devices easily in their cars and take advantage of technology that lets them make calls and listen to music without having to push any buttons or touch any track wheels. Hands-free telephony laws in many states are driving this.

Millions of drivers are using the SYNC system, which has shipped in more than 2 million Ford cars and allows people to connect digital media players and Bluetooth-enabled mobile phones to their car entertainment system and use voice commands to operate them. The system uses Microsoft Auto as the operating system. Other cars offer less-sophisticated mobile device connectivity.

“A lot of cars have Bluetooth car kits built into them so you can bring the cell phone into your car and use your phone through microphones and speakers built into the car,” said Kevin Finisterre, lead researcher at Netragard. “But vendors often leave default passwords.”

Ford uses a variety of security measures in SYNC, including only allowing Ford-approved software to be installed at the factory and default security set to Wi-Fi Protected Access 2 (WPA2), which requires users to enter a randomly chosen password to connect to the Internet. To protect customers when the car is on the road and the Mobile Wi-Fi Hot Spot feature is enabled, Ford also uses two firewalls on SYNC: a network firewall similar to a home Wi-Fi router and a separate central processing unit that prevents unauthorized messages from being sent to other modules within the car.

“We use the security models that normal IT folks use to protect an enterprise network,” said Jim Buczkowski, global director of electrical and electronics systems engineering for Ford SYNC.

Not surprisingly, there is a competing vehicle “infotainment” platform being developed that is based on open-source technology. About 80 companies have formed the Genivi Alliance to create open standards and middleware for information and entertainment solutions in cars.

Asked if Genivi is incorporating security into its platform from the get-go, Sebastian Zimmermann, chair of the consortium’s product definition and planning group, said it is up to the manufacturers that are creating the branded devices and custom apps to build security in and to take advantage of security mechanisms provided in Linux, the open-source operating system the platform is based on.

“Automakers are aware of security and have taken it seriously…It’s increasingly important as the vehicle opens up new interfaces to the outside world,” Zimmermann said. “They are trying to find a balance between openness and security.”

Another can of security worms being opened is the fact that cars may follow the example of smart phones and Web services by getting their own customized third-party apps. Hughes Telematics reportedly is working with automakers on app stores for drivers.

This is already happening to some extent, for instance, with video cameras becoming standard in police cars and school buses, bringing up a host of security and privacy issues.

“We did a penetration test where we had a police agency that has some in-car cameras,” Finisterre of Netragard said, “and we were able to access the cameras remotely and have live audio and video streams from the police car due to vulnerabilities in the manufacturing systems.”

“I’m sure (eventually) there is going to be smart pavement and smart lighting and other dumb stuff that has the capability of interacting with the car in the future,” he said. “Technology is getting pushed out the door with bells and whistles and security gets left behind.”


Bypassing Antivirus to Hack You

Many people assume that running antivirus software will protect them from malware (viruses, worms, trojans, etc.), but in reality the software is only partially effective. This is true because antivirus software can only detect malware that it knows to look for. Anything that doesn’t match a known malware pattern will pass as a clean and trusted file.
Antivirus technologies use virus definition files to define known malware patterns. Those patterns are derived from real world malware variants that are captured in the wild. It is relatively easy to bypass most antivirus technologies by creating new malware or modifying existing malware so that it does not contain any identifiable patterns.
One of the modules that our customers can activate when purchasing Penetration Testing services from us is the Pseudo Malware module. As far as we know, we are one of the few Penetration Testing companies to actually use Pseudo Malware during testing. This module enables our customers to test how effective their defenses are against real world malware threats in a safe and controllable way.
Our choice of Pseudo Malware depends on the target that we intend to penetrate and the number of systems that we intend to compromise. Sometimes we’ll use Pseudo Malware that doesn’t automatically propagate and other times we’ll use auto-propagation. We should mention that this Pseudo Malware is only “Pseudo” because we don’t do anything harmful with it and we use it ethically. The fact of the matter is that this Pseudo Malware is very real and very capable technology.
Once we’ve determined which Pseudo Malware variant to go with, we need to augment the Pseudo Malware so that it is not detectable by antivirus scanners. We do this by encrypting the Pseudo Malware binary with a special binary encryption tool. This tool ensures that the binary no longer contains patterns that are detectable by antivirus technologies.

Before Encryption:


After Encryption: (Still Infected)

As you can see from the scan results above, the Pseudo Malware was detected by most antivirus scanners before it was encrypted. We expected this because we chose a variant of Pseudo Malware that contained several known detectable patterns. The second image (after encryption) shows the same Pseudo Malware being scanned after encryption. As you can see, the Pseudo Malware passed all antivirus scanners as clean.
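The underlying idea is easy to demonstrate with a toy model. The Python sketch below is not a real antivirus engine or a real crypter; the “signature” is a made-up byte string and the transform is a trivial single-byte XOR, but it shows why a matcher that flags the original bytes finds nothing in the encoded copy:

# Hypothetical known-bad pattern standing in for a virus definition.
SIGNATURE = b"\xde\xad\xbe\xef"
binary = b"header" + SIGNATURE + b"payload"

def naive_scan(data: bytes) -> bool:
    """Flag the data if it contains the known-bad pattern."""
    return SIGNATURE in data

print(naive_scan(binary))   # True: the pattern is present, file is flagged

# A reversible transform changes every byte, so the stored pattern
# no longer appears anywhere in the result.
encoded = bytes(b ^ 0x5A for b in binary)
print(naive_scan(encoded))  # False: same content, zero matches

Real engines and real packers are far more sophisticated than this, but the cat-and-mouse dynamic is the same.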

Now that we’ve prevented antivirus software from being able to detect our Pseudo Malware, we need to distribute it to our victims. Distribution can happen in many ways that include but are not limited to infected USB drives, infected CD-ROMs, phishing emails augmented by IDN homograph attacks with the Pseudo Malware attached, Facebook, LinkedIn, MySpace, binding it to PDF-like files, etc.

Our preferred method for infection is email (or maybe not). This is because it is usually very easy to gather email addresses using various existing email harvesting technologies and we can hit a large number of people at the same time. When using email, we may embed a link that points directly to our Pseudo Malware, or we might just insert the malware directly into the email. Infection simply requires that the user click our link or run the attached executable. In either case, the Pseudo Malware is fast and quiet and the user doesn’t notice anything strange.

Once a computer is infected with our Pseudo Malware it connects back to our Command and Control server and grants us access to the system unbeknownst to the user. Once we have access we can do anything that the user can do, including but not limited to seeing the user’s screen as if we were right there, running programs, installing software, uninstalling software, activating web cams and microphones, accessing and manipulating hardware, etc. More importantly, we can use that computer to compromise the rest of the network through a process called Distributed Metastasis.

Despite how easy it is to bypass antivirus technologies, we still very strongly recommend using them as they keep you protected from known malware variants.


Security Vulnerability Penetration Assessment Test?

Our philosophy here at Netragard is that security-testing services must produce a threat that is at least equal to the threat that our customers are likely to face in the real world. If we test our customers at a lesser threat level and a higher-level threat aligns with their risks, then they will likely suffer a compromise. If they do suffer a compromise, then the money that they spent on testing services might as well be added to the cost in damages that result from the breach.
This is akin to how armor is tested. Armor is designed to protect something from a specific threat. In order to be effective, the armor is exposed to a level of threat that is slightly higher than what it will likely face in the real world. If the armor is penetrated during testing, it is enhanced and hardened until the threat cannot defeat the armor. If armor is penetrated in battle then there are casualties. That class of testing is called Penetration Testing and the level of threat produced has a very significant impact on test quality and results.

What is particularly scary is that many of the security vendors who offer Penetration Testing services either don’t know what Penetration Testing is or don’t know the definitions for the terms. Many security vendors confuse Penetration Testing with Vulnerability Assessments and that confusion translates to the customer. The terms are not interchangeable and they do not define methodology, they only define testing class. So before we can explain service quality and threat, we must first properly define services.

Based on the English dictionary the word “Vulnerability” is best defined as susceptibility to harm or attack. Being vulnerable is the state of being exposed. The word “Assessment” is best defined as the means by which the value of something is estimated or determined usually through the process of testing. As such, a “Vulnerability Assessment” is a best estimate as to how susceptible something is to harm or attack.

Let’s do the same for “Penetration Test”. The word “Penetration” is best defined as the act of entering into or through something, or the ability to make way into or through something. The word “Test” is best defined as the means by which the presence, quality or genuineness of anything is determined. As such, the term “Penetration Test” means to determine the presence of points where something can make its way through or into something else.

Despite what many people think, neither term is specific to Information Technology. Penetration Tests and Vulnerability Assessments existed well before the advent of the microchip. In fact, the ancient Romans used a form of penetration testing to test their armor against various types of projectiles. Today, we perform Structural Vulnerability Assessments against things like the Eiffel Tower, and the Golden Gate Bridge. Vulnerability Assessments are chosen because Structural Penetration Tests would cause damage to, or possibly destroy the structure.

In the physical world Penetration Testing is almost always destructive (at least to a degree), but in the digital world it isn’t destructive when done properly. This is mostly because in the digital world we’re penetrating a virtual boundary and in the physical world we’re penetrating a physical boundary. When you penetrate a virtual boundary you’re not really creating a hole, you’re usually creating a process in memory that can be killed or otherwise removed.

When applied to IT Security, a Vulnerability Assessment isn’t as accurate as a Penetration Test. This is because Vulnerability Assessments are best estimates while Penetration Tests either penetrate or they don’t. As such, a quality Vulnerability Assessment report will contain few false positives (false findings), while a quality Penetration Testing report should contain absolutely no false positives (though it may sometimes contain theoretical findings).

The quality of service is determined by the talent of the team delivering services and by the methodology used for service delivery. A team of research capable ethical hackers that have a background in exploit development and system / network penetration will usually deliver higher quality services than a team of people who are not research capable. If a team claims to be research capable, ask them for example exploit code that they’ve written and ask them for advisories that they’ve published.

Service quality is also directly tied to threat capability. The threat in this case is defined by the capability of real world malicious hackers. If testing services do not produce a threat level that is at least equal to the real world threat, then the services are probably not worth buying. After all, the purpose for security testing is to identify risks so that they can be fixed / patched / eliminated before malicious hackers exploit them. But if the security testing services are less capable than the malicious hacker, then chances are the hacker will find something that the service missed.

We Are Politically Incorrect

Back in February of 2009 we released an article called FaceBook from the hackers perspective. As far as we know, we were the first to publish a detailed article about using Social Networking Websites to deliver surgical Social Engineering attacks. Since that time, we’ve noticed a significant increase in marketing hype around Social Engineering from various other security companies. The problem is that they’re not telling you the whole truth.

The whole truth is that Social Engineering is a necessary but potentially dangerous service. Social Engineering at its root is the act of exploiting the human vulnerability, and as such it is an offensive and politically incorrect service. If a customer’s business has any pre-existing social or political issues, then Social Engineering can be like putting a match to a powder keg. In some cases the damages can be serious and can result in legal action between employee and employer, or vice versa.

It’s for this reason that businesses need to make sure that their environments are ready to receive social attack testing, and that they are prepared to deal with the emotional consequences that might follow. If employees are trained properly and if security policies that cover the social vector are enforced, then things “should” be ok. If those policies don’t exist and if there’s any internal turmoil, high-risk employees, or potentially delicate political situations, then Social Engineering is probably not such a great idea, as it will likely identify and exploit one of those pre-existing issues.

For example, we recently delivered services to a customer that had pre-existing issues but assumed that their environment was safe for testing with Social Engineering. In this particular case the customer had an employee, who we’ll call Jane Doe, who was running her own business on the side. Jane Doe was advertising her real employer’s name on her business website, making it appear as if there was a relationship between her employer and her business. She was also advertising her employer’s address as her business address on her FaceBook fan page. From our perspective, Jane Doe was a perfect Social Engineering target.

With this social risk identified, we decided that we’d impersonate Jane Doe and hijack the existing relationships that she had with our customer (her employer). We accomplished this with a specially crafted phishing attack.

The first step in the phish was to collect content for the phishing email. In this case Jane Doe had posted images to her FaceBook fan page that included a photo of herself and a copy of her business’s logo. We used those images to create an email that looked like it originated from Jane Doe’s email address on our customer’s network and offered the recipient discounted pricing. (Her FaceBook privacy settings were set to allow everybody.)

Once we had the content for the phishing email set up, we used an IDN homograph attack to register a new domain that appeared to be identical to our customer’s domain. For example, if our customer was SNOsoft and their real domain was snosoft.com, the fake domain looked just like “snosoft.com”.
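For those wondering how that works mechanically, the short Python sketch below builds a lookalike of the hypothetical snosoft.com using a Cyrillic “о” and prints the punycode (ACE) form that a registrar actually sees; it relies only on Python’s built-in idna codec:

# "snosoft.com" with a Cyrillic "о" (U+043E) in place of the Latin "o".
real = "snosoft.com"
fake = "sn\u043esoft.com"

print(real == fake)         # False: the underlying code points differ
print(fake.encode("idna"))  # the ASCII/punycode form that gets registered

The two strings render nearly identically in most fonts, which is exactly what makes the attack effective against casual inspection.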

We embedded a link into the phishing email using the fake domain to give it a legitimate look and feel. The link was advertised as the place to click for information about specially discounted offerings that were specific to our customer’s employees. Of course, the link really pointed to our web server, where we were hosting a browser-based exploit.

Then we collected email addresses using an enumerator and loaded those into a distribution list. We sent a test email to ourselves first to make sure that everything would render ok. Once our testing was complete, we clicked send and the phish was on its way. Within 15 minutes of delivering the attack our customer called us and requested that all testing be stopped. But by that time, 38 people had already clicked on our embedded URL, and more clicks were on their way.

As it turns out, our customer wasn’t prepared to receive Social Engineering tests despite the fact that they requested them. At first they accused us of being unprofessional because we used Jane Doe’s picture in the phishing email, which was apparently embarrassing to Jane Doe. Then they accused us of being politically incorrect for the same reason.

So we asked our customer, “Do you think that a black-hat would refrain from doing this because it’s politically incorrect?” Then we said, “Imagine if a black-hat launched this attack, and received 38 clicks (and counting).” (Each click representing a potential compromise).

While we can’t go into much more detail for reasons of confidentiality, the phishing attack uncovered other more serious internal and political issues. Because of those issues, we had to discontinue testing and move to report delivery. There was no fault or error on our part as everything was requested and authorized by the customer, but this was certainly a case of the match and the powder keg.

Despite the unfortunate circumstances, the customer did benefit significantly from the services. Specifically, the customer became aware of some very serious social risks that would have been extremely damaging had they been identified and exploited by black-hat hackers. Even if it was a painful process for the customer, we’re happy that we were able to deliver the services as we did, because they enabled our customer to reduce their overall risk and exposure profile.

The moral of the story is that businesses should take care and caution when requesting Social Engineering services. They should be prepared for uncomfortable situations and discoveries, and if possible they should train and prepare their employees in advance. In the end it boils down to one of two things: is it more important for a company to understand its risks, or to avoid embarrassing or offending an employee?