ZERO DAYS: The Technology Back Story Behind The Movie

 


Getty Images

 

The first rule of zero-days is no one talks about zero-days (so we’ll explain)

Just as defenders find their feet, lawmakers move to outlaw security research entirely.

by Sebastian Anthony, Ars Technica

How do you defend yourself against the unknown? That is the crux of the zero-day vulnerability: a software vulnerability that, by definition, is unknown to the user of the software and often to its developer as well.

Everything about the zero-day market, from research and discovery through disclosure and active exploitation, is predicated upon this fear of the unknown—a fear that has been amplified and distorted by the media. Is the world really at threat of destabilisation due to lone-wolf hackers digging up vulnerabilities in popular software packages and selling them to whichever repressive government offers the most money? Or is it just a classic case of the media and megacorp lobbyists focusing on the sexy, scary, offensive side of things, and glossing over the less alluring aspects?

And then what about legislation and regulation of zero-days? In most countries, there are scant legal mechanisms for discouraging or punishing the discovery of new zero-days. There are even fewer laws and directives dictating how zero-days should be responsibly disclosed. It isn’t that lawmakers aren’t aware of these problems, it’s just that there isn’t an easy solution. How do you craft a law that allows some research groups to keep on digging for vulnerabilities while at the same time blocking the black hats? What if the government’s idea of “responsible disclosure” means disclosing all vulnerabilities to GCHQ or the NSA?

Recently, Europe began discussing how best to interpret the Wassenaar Arrangement—an agreement between 41 countries that was originally designed to limit the proliferation of physical, military weapons to non-desirables—as it applies to the proliferation of surveillance software, intrusion tools, and zero-day software vulnerabilities. In the US, the Senate is set to vote on the Cybersecurity Information Sharing Act as soon as today. The legislation would expand the Computer Fraud and Abuse Act to include security research. The US is also trying to decide how to interpret Wassenaar when it comes to exporting intrusion software and zero-days.

The outcome of these consultations and parliamentary processes will dictate whether security researchers, irrespective of the colour of their hat, can continue to operate in Europe and the US.

Who uses zero-days, and what are they used for?

A zero-day is a very specific thing, and it likewise has a very specific purpose: gaining access to something without someone else finding out. This specificity is what makes zero-days so powerful and at the same time so weak. The more boldly or broadly you use a zero-day, the more likely you are to be discovered—then the jig is up. If you’ve spent £200,000 on acquiring a zero-day and perhaps thousands of hours actually engineering and coordinating the attack, it’s unlikely that your first port of call is to infect millions of computers and immediately raise the ire of the security research community.

In short, then, the actual usage of zero-days is quite limited. “It’s not that zero-days aren’t being used,” explained Adriel Desautels, an experienced security researcher and CEO of Netragard. “It’s that they hold no real threat for the average business or citizen.”

Think of an exploit based on a zero-day vulnerability as a laser-targeted, bunker-busting bomb for solving a single problem rather than a panacea. “A prime example of zero-day usage: 2013, FBI, Firefox, child porn,” said Desautels. He’s referring to a high-profile case where the FBI is believed to have used a zero-day vulnerability in Firefox to catch a number of people who were browsing child porn. “That’s the kind of thing that a government does when it purchases a zero-day. Very specialised.”

Another, slightly more famous example of zero-day usage is Operation Aurora, in which a group of purportedly state-sponsored Chinese hackers used a zero-day vulnerability in Internet Explorer to penetrate and exfiltrate data from Google and a number of other US tech giants.

“On the black market, for stealing credit cards and things like that, you don’t use zero-days,” Desautels said. “So if you think ‘who actually needs a zero-day?’—well, it’s the people who are going after very hardened infrastructure, which is clearly not public or private businesses. They have to exfiltrate information without detection, and they’re willing to spend a very large amount of money to do it one time. Who is willing to do that other than governments?”

Furthermore, Desautels pointed out, “the government doesn’t need zero-days to spy on people.” A prime example is PRISM, the giant surveillance program revealed in 2013 by Edward Snowden. “How does PRISM work? It tied into Verizon, AT&T—everyone was just freely providing information. People send text messages from their mobiles. They post status updates on Facebook.”

Another thing to consider when it comes to zero-day discovery and exploitation is that the target is always moving. After years of assault, it's now quite hard to find a zero-day in Windows, and so the focus has shifted to applications. But as Java, Flash, and other regularly assaulted apps eventually get themselves into shape, the focus is beginning to shift again, ZDI chief Jewel Timpe told Ars in an interview. "The focus has turned to new areas in computing, like the Internet of Things and SCADA [Supervisory Control and Data Acquisition; industrial machines]." Just in recent months, we've reported on IoT attacks that run the gamut, from baby monitors to cars to light bulbs.

“Researchers go after what is interesting to them, and they especially go after things others haven’t tried yet. This is true of attackers as well, which is why SCADA systems are also an attractive target,” Timpe explained. “These systems are used to control different types of processes within large infrastructures, such as industrial power plants. While we may all be familiar with Stuxnet [which specifically targeted the centrifuges used by Iran to enrich uranium], SCADA vulnerabilities don’t end there, and the ZDI has seen a recent influx in remote code execution vulnerabilities in SCADA products.”

All this isn’t to say that private groups aren’t buying or using zero-days to separate you from your credit card details or to carry out corporate espionage, but it’s certainly not de rigueur.

How are zero-days disclosed, bought, and sold?

The main problem with writing about the zero-day market is that, by necessity, no one really talks about zero-days. Researchers don’t want to discuss them until they’ve been sold or disclosed, and the prospective buyer doesn’t want to talk about it because that would completely defeat the point. Fortunately, thanks to the work of intrepid security journalists, leaked e-mails, and reformed hackers who speak about their past experiences, it’s possible to build up a pretty solid picture of the current zero-day market.

For the most part, there are three outcomes for a fresh zero-day: it’s publicly disclosed, it’s privately disclosed to the software vendor (sometimes for a significant bounty), or it’s sold to a third party. That third party could be offensive (Hacking Team, Zerodium), defensive (ZDI), or both. Sometimes the zero-day will be used directly by the third party; in other cases, it might be acquired by a broker who is trying to shop around a bunch of useful zero-days.

Zero-days are often sold on the Dark Web, where the combination of Tor and Bitcoin allows buyers and sellers to operate anonymously. Such trades are far less common on the open Web than they were a few years ago, but some trading still occurs on ordinary cybercrime forums. (Security reporter Brian Krebs has written extensively about one of the most famous cybercrime forums: Darkode.)

The recent hack of Hacking Team, an offensive intrusion and surveillance group, showed that there's now another avenue for security researchers to sell their zero-days, too. In a frank discussion with Hacking Team's CEO David Vincenzetti in 2013, Russian security researcher Vitaliy Toropov managed to secure £30,000 for a single zero-day in Flash Player. Toropov even offered Hacking Team a discount on any further zero-days that it wanted to acquire.

The e-mail exchange also highlighted another facet of the zero-day market: for "3 times" the price, Toropov said, Hacking Team would get exclusive use of the zero-days. Some zero-day buyers, such as governments, might prefer exclusivity, but other buyers, such as defensive companies, might not care quite as much.

While £30,000 (or even three times that amount) might sound like a lot of money, it’s probably towards the lower end of the zero-day market. Just a couple of weeks ago, Zerodium, which bills itself as a broker of “premium zero-day vulnerabilities,” said it would pay up to $1 million (£650,000) for an iOS 9 zero-day. The program is open until October 31, and there’s a total of $3 million available for exploits that lead to an “exploitation/jailbreak process achievable remotely, reliably, silently, and without requiring any user interaction except visiting a webpage or reading a SMS/MMS.”

The motivation behind selling exploits

The most interesting thing about the zero-day market, though, isn't that zero-days are being sold or disclosed; it's why a security researcher opts to disclose or sell a zero-day, and to whom.

Say you're a security researcher, and you find a zero-day in a piece of software. Then you write a proof-of-concept (PoC) that demonstrates that, yes, there is actually a useful vulnerability there. What do you do next? Historically, before the market matured, it wasn't unusual for researchers to try to responsibly pass the vulnerability along to the software developer. Unfortunately, that method didn't work out so well. "It used to be very frequent that when you approached a vendor they would attempt to come after you legally with the DMCA and try to quash your research," explained Desautels, who has been digging up zero-day vulnerabilities since 1999.

“The very most you would get, back then, might be credit in a security advisory, if it gets published, which it might not. So all those hours of hard work that you did—you effectively provided an extremely high-value service—and you did it for free.”

“And what are the bad guys going to do? They’re going to go and figure out how to hack it—and tear networks apart.”

It is no surprise, then, that some security researchers started selling zero-days for money. "[A researcher] can sell their hard work to someone who they think will do something ethical or good with it, such as a defence contractor or directly to the government, or to a company like us. They get paid fair-dollar value for their hard work." Desautels' company, Netragard, used to buy zero-days from researchers but closed the program down when it learnt, from the Hacking Team breach, that its zero-days were being used by repressive governments with poor human rights records.

The third option is full disclosure, where the researcher publishes the zero-day and/or the proof-of-concept to the Web at large. Ethically, full disclosure is problematic, according to Desautels. “If you take a look at the Verizon Data Breach reports, you’ll find that the vast majority of vulnerabilities are exploited within days of them becoming known. So when you publish a vulnerability, when you publish information—even if it’s partial information—you are telling the world that a specific piece of software can be hacked. And what are the bad guys going to do? They’re going to go and figure out how to hack it—and they do. And the proof is there: they tear networks apart.”

Katie Moussouris helped create Microsoft’s security bounty programs and was pivotal in improving the company’s approach to security research and vulnerability response. She believes that the ethics of full disclosure are slightly more nuanced. “Under ideal circumstances, everything gets privately disclosed, and a fix is released before any technical details are made public,” said Moussouris. “But there are circumstances where that just isn’t possible, especially if an issue affects multiple parties.

“At Microsoft itself, where I founded Microsoft Vulnerability Research, we found vulnerabilities in libraries that we could fix on the Microsoft side—but everyone would still be vulnerable until they recompiled their software with the fixed library,” Moussouris continued. “There were only so many other software vendors we could notify before going public. Even in circumstances with the most resourced security team possible, you’re still going to have issues where perfect coordinated private disclosure is not going to be possible. I think understanding the nuances and the best ways to minimise risk is really what it comes down to, and that comes from experience.”

A furore of FUD

Of course, there’s always the nuclear option—sell to the highest bidder. Some security researchers pay little heed to who the ultimate buyer of the zero-day is or what they might do with it because they just want the money. This is the scenario that has most piqued the interest of the media even if there’s little evidence for the narrative—that there’s a huge, shadowy marketplace where lone-wolf hackers are peddling their zero-days to North Korea, Sudan, or some other country that has a history of abusing its citizens.

Neither Moussouris nor Desautels is too fazed by this fourth category of security researcher, though. Moussouris seems to fall on the side of "build it and they will come"—there will always be some researchers who sell their zero-days to the highest bidder, but if software vendors have the right systems in place for disclosure reporting and bounties, then the defence market will win out over the offence. "Back in the day when electronic versions of music were being traded and pirated, that was because there wasn't really an easy way to do the right thing. And then iTunes came along," explained Moussouris. "The model evolved to allow people who would've done the right thing, if they had an avenue to do it, to go ahead and do it. I think that's where we are with this defence market, which is just beginning."

Desautels was a bit more vitriolic. “The biggest problem with the zero-day market is that it’s being grossly misrepresented,” he began. “The idea, the threat of zero-days is largely driven by fear, uncertainty, and doubt. FUD. The citizens are so freaked out because of what the media has been writing about zero-days, because they only parade the bad side of it. It’s become so bad that the media have created this situation where regulatory bodies are trying to take action to solve this imaginary problem.

“When you look at the zero-day market, there’s actually nothing secretive about it,” Desautels continued. “The reason people say that it’s secretive is because they don’t know what the software being sold is,” which, as we mentioned earlier, is an intrinsic part of what makes a zero-day a zero-day, and not just any old vulnerability. “And the other part of it is that people don’t understand who the buyers are, of these zero-days, because the buyers require confidence. But why is that unusual? Even in the private sector, companies don’t tell other companies who their customers are. Frankly, this is all something that most people just don’t understand. And because they don’t understand it, it’s human nature to be afraid of what they don’t understand.”

We’re just waiting for the hammer to fall

Currently, the discovery and disclosure of zero-day vulnerabilities (along with the development of code that exploits them) is mostly free from legislation and regulation. There are two primary reasons for this: existing computer-misuse laws were generally drafted before zero-days became a big thing, and, perhaps more importantly, it isn't clear whether zero-days should be regulated or legislated against at all.

Moussouris, who describes herself as an ex-hacker, is now the chief policy officer at HackerOne, a service that makes it easier for software vendors to track vulnerabilities and to pay out bounties to researchers. She is advising both the EU and US as they investigate the possibility of legislating and controlling the flow of zero-day vulnerabilities. In the EU, she’s working with the Dutch MEP Marietje Schaake to try and stem any unintended consequences from the updated Wassenaar Arrangement that might impact the research and disclosure of zero-days. In the US, she’s part of the multi-stakeholder task force that, under the purview of the Department of Commerce’s NTIA, is looking into how security researchers and software/system vendors can work together in harmony to close more vulnerabilities.

More immediately, however, there’s the Cybersecurity Information Sharing Act (CISA), which could be voted upon by the US Senate as soon as next week. CISA includes a bunch of amendments, the most notable of which (as far as zero-days are concerned, anyway) is No. 2626, which seeks to expand the Computer Fraud and Abuse Act (CFAA) so that it increases the punishment for security research. “There are many people who are still looking at it from a very dated perspective,” Moussouris explained. “They are looking at the security research community as the source of the problem, where actually it’s the vendors who wrote the vulnerable code in the first place, and the vendors who really need to figure out a way of responding to zero-days gracefully, rather than shooting the messenger.”

“The threat of zero-days is largely driven by FUD. The media have created this situation where regulatory bodies are trying to solve this imaginary problem.”

The Wassenaar Arrangement is a slightly more nuanced beast. While it was originally designed to control the export of conventional weapons to non-Wassenaar members, in December 2013 the list of controlled exports was updated to include “intrusion software.” The ostensible purpose of its inclusion was to prevent the sale of spyware to governments that are known to abuse human rights. The Wassenaar Arrangement isn’t binding in itself; rather, the member nations have to interpret the Arrangement and implement their own export controls.

In May 2015, the US Department of Commerce published its proposed implementation of the updated Arrangement—and it didn't look good for security researchers. Following push-back from experts like Moussouris, the DoC is now revising its proposal. It may contain exemptions for security researchers and for legitimate, dual-use software (i.e., intrusion or surveillance tools that have legitimate uses as well as nefarious ones).

In Europe, a similar consultation on export control policy is currently ongoing with a closing date of October 15. Ahead of that deadline, there was an important meeting on September 30 at the European Parliament. A number of experts spoke about the need for security research exemptions and controls that will stop companies and countries from selling digital weapons to repressive governments.

At the heart of these negotiations are two opposing forces: the security researchers, who are against legislation or regulation that would make their work illegal, and the lobbying arms of big software companies, which perceive security researchers as a threat. “Look to the companies that have a hard time patching vulnerabilities in a timely manner,” explained Moussouris. “Look to the companies that typically issue a patch every quarter or so. Those are the companies that don’t do so well when security researchers are offering a lot of scrutiny and noise, because they’re locked into an older, enterprise patching cycle.”

At the moment, Wassenaar doesn't regulate zero-days or exploits; rather, it regulates the technology that would be used to create or interact with them. Lobbying from Oracle, a long-time enemy of security research, and some other big companies is trying to change that. "I think Oracle has fallen into the same trap that I've seen with other organisations that invest a lot, internally, into securing software," said Moussouris. "They look at the number of vulnerabilities that they find internally, and then they look at the relatively small percentage that are reported from the outside—and they mistake that big disparity for a reason why they should continue as they always have, and discourage reports from the outside.

“The truth of the matter, upon further analysis and drawing from my own experiences building vulnerability coordination and bug bounty programs at Microsoft, is that you’d see the same thing: Microsoft was incredibly good at finding its own bugs, and would find way more than outside researchers. But that was by design! That means Microsoft was doing its job by trying to secure its own software first, by building very specialised tools, hiring specialised people—and that’s good and correct.”

Even if Oracle and friends successfully lobby for the regulation of security research and disclosure, another thorny issue will present itself: How do you define a zero-day, anyway? A zero-day could be a vulnerability that’s unknown to the vendor. Or it could be known to various parties other than the vendor. Or it could simply be a vulnerability for which there is currently no patch. At the same time, there are plenty of issues out there where the vendor knows about it, but for myriad reasons it hasn’t provided a patch.

Moussouris is fairly confident that reasonable conclusions will be found both in the US and EU. “Luckily we’re at a point where a generation of security researchers and hackers like myself have matriculated into the ‘credible ruling class of security’ such that we’re called upon by legislators, and not just dismissed as being on the wrong side in the first place,” she explained. “I think it helps that a lot of us have worked at large companies, so that we understand the trade-offs—we understand what the large companies are doing, what works and what doesn’t. We’re able to come at the problem with empathy and credibility. We are working towards a solution with these legislators and regulators. I felt very welcome at the EU, and I felt very welcome talking to the departments of commerce, state, and defence in the US. I think we are coming together. It’s just a matter of taking the time to bridge those gaps of understanding.”

How can enterprises defend themselves against zero-days?

The final piece of the zero-day puzzle—after research, discovery, brokerage, and potentially regulation—is mitigation. Even though the defence market has finally erected a redoubt, a bastion against those scurrilous attackers, it is inevitable that zero-days will continue to find their way into the hands of foreign governments and other nefarious actors. If you’re an IT admin and you don’t want to be the next target of China’s cyberespionage efforts, what can you do?

Well, first there is overwhelming evidence that zero-days are not the main threat that you, as an IT admin or decision maker, should be looking out for. The latest Data Breach Investigations Report, produced by Verizon Enterprise Solutions, lays it out very clearly. “We found that 99.9 percent of the exploited vulnerabilities had been compromised more than a year after the associated CVE was published.” CVE, or Common Vulnerabilities and Exposures, refers to the official listing and description of a vulnerability after it has been made public.

A series of graphs from the 2015 Verizon Data Breach Investigations Report, showing the breakdown of which CVEs are used to actively exploit computers.
Verizon

Verizon, drilling down into the actively exploited vulnerabilities, found that 97 percent of exploits observed in 2014 were based on just 10 CVEs—mostly old (early-2000s) vulnerabilities in SNMP and Windows, with one RDP vulnerability from 2012 and the SSL/POODLE vulnerability from 2014 thrown in for good measure. The remaining 3 percent of exploits stem from 7 million other known vulnerabilities, with ages ranging from 2014 all the way back to 1999 (the year that CVEs were first introduced).

The last of the three graphs, shown above, tells a particularly damning story about the perils of full disclosure. About 50 percent of exploits occur within a month of the vulnerability being made public. “There are very few researchers who are brave enough to come out and say it, but full disclosure is a farce,” Desautels told Ars. “Yes, people need to be taught about vulnerabilities, and people need to fix vulnerabilities, but we should be able to introduce software patches without telling the public where the specific vulnerability exists, because frankly, people are too careless to fix things in a responsible and timely manner. As has been demonstrated time and time again.

“The idea that full disclosure is a good thing is an emotionally driven idea,” he continued. “People want to do good, and so they think, ‘hey, I’m going to tell the world about the vulnerability so that they can protect themselves if they want to.’ Unfortunately, it doesn’t work that way. People want to protect themselves, but they may not have the time; the business might not be nimble enough to do it quickly; and the offence always operates faster than the defence, because they have the desire to get in. The defenders are the IT guys—they’re not necessarily security experts, and they certainly don’t have the same wherewithal as the hackers. They’re purely focused on keeping the business up; their primary job isn’t to stop people from coming in.”

Step one, then, is to patch your machines and install the latest version of whatever software packages are on your system. That should protect you from the vast majority of attacks. If you want to protect your IT infrastructure from a zero-day attack, however, it's a little more complicated. Again, by definition, a zero-day exploit isn't known to the software vendor or to other defenders. A simple signature-based antivirus program isn't going to prevent a zero-day exploit if the signature of the zero-day isn't yet known.
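To see why signature-based scanning is structurally blind to zero-days, consider this deliberately simplified sketch (not a real antivirus engine; the sample payloads and hashes are invented for illustration). A signature database is, at its simplest, a set of fingerprints of known-bad files, and a zero-day payload has no entry in it yet by definition:

```python
import hashlib

# Toy signature database: hashes of previously catalogued malware.
# Real AV engines use far richer signatures, but the limitation
# illustrated here is the same.
KNOWN_BAD_SIGNATURES = {
    hashlib.sha256(b"old-worm-payload").hexdigest(),
    hashlib.sha256(b"known-trojan-dropper").hexdigest(),
}

def signature_scan(file_bytes: bytes) -> bool:
    """Return True if the file matches a known-malware signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SIGNATURES

# A previously catalogued sample is caught...
assert signature_scan(b"old-worm-payload") is True
# ...but a never-before-seen zero-day payload sails straight through,
# because no signature for it exists yet.
assert signature_scan(b"never-seen-zero-day-exploit") is False
```

The gap is inherent to the approach: the scanner can only recognise what has already been catalogued, which is exactly what a zero-day, by definition, has not.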

Desautels highly recommended the intrusion-detection suites Bit9 and Cylance. Rather than using signatures, Cylance uses machine intelligence in an attempt to build predictive models that can pick up malware-like behaviour in real time. Bit9 is similar, combining machine intelligence with policy-driven application control—that is, only software that is explicitly trusted can be executed. "Bit9 doesn't stop us when we're doing high-threat penetration testing, but it certainly makes our job a lot more difficult," said Desautels.
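The policy-driven application control the article attributes to Bit9 inverts the signature model: instead of blocklisting known-bad binaries, it allowlists known-good ones and denies everything else by default. A minimal sketch of that idea, with invented binary names and a hash-based allowlist standing in for a real policy engine:

```python
import hashlib

# Hypothetical allowlist: hashes of explicitly trusted binaries.
# In a real deployment the policy would be centrally managed and
# cover far more than a raw hash check.
TRUSTED_HASHES = {
    hashlib.sha256(b"corporate-approved-app-v1.2").hexdigest(),
}

def may_execute(binary: bytes) -> bool:
    """Default-deny: execution is allowed only for allowlisted binaries."""
    return hashlib.sha256(binary).hexdigest() in TRUSTED_HASHES

# The approved application runs...
assert may_execute(b"corporate-approved-app-v1.2") is True
# ...while even a brand-new zero-day dropper is denied by default,
# simply because it was never added to the allowlist.
assert may_execute(b"zero-day-dropper") is False
```

This default-deny posture is why such tools can blunt zero-days that signature scanners miss: the attacker's payload doesn't need to be recognised as malicious, only to be absent from the trusted set.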

On the other end of the spectrum, companies like HP and Cisco have hardware solutions that plug into your network infrastructure and scan gigabits of data per second, looking for potential threats both old and new. In many cases, enterprise IT companies now have their own security research and intelligence groups (ZDI, Talos, X-Force) that acquire zero-days from external sources or develop their own zero-days for defensive purposes.

The general state of enterprise security does seem to be improving. Desautels told us that, in 2015, his company’s average time for initial penetration of the infrastructure—without the use of zero-days—was “about an hour.” In 2014, however, it was just “four minutes.” The dramatic improvement is because of a “change in trends as a result of all the recent breaches,” he said.

But there’s still a long way to go—60 minutes might be better than four minutes, but the end result is much the same. “We’re still getting in with relative ease, and the vulnerabilities that we’re getting in with are very much the same,” said Desautels. “What’s changing is that people are a bit more paranoid. The technology isn’t necessarily getting better, but people are getting better at responding to things more quickly. After all the breaches, they’re more aware of the after effects.”

This post originated on Ars Technica UK

___
http://arstechnica.com/security/2015/10/the-rise-of-the-zero-day-market/