This article appears in the Spring 2015 issue of The American Prospect magazine. A follow-up to this article by Joshua A. Kroll will appear on June 4.
The devastating 2014 cyberattacks against Sony Pictures disabled equipment, exposed employees' sensitive information, disclosed company secrets and unreleased movies, and ultimately led to the departure of one of the studio's top executives. The FBI blamed the attacks on North Korea, and the attackers may have been operating in Sony's systems undetected for more than a year. Many Americans were left wondering why their government was unable to detect and stop this foreign threat, and whether it can prevent others that may be even more serious.
At the core of the cybersecurity problem lies the cyber conundrum: Should we undermine security systems or bolster them? During the Cold War era, the answer was simple: We did both. We undermined and exploited our adversaries' communication technologies, and we protected our own. We could easily pursue these missions simultaneously because our adversaries used different technologies and operated separate infrastructures; damaging one did not harm the other. The National Security Agency excelled at both missions.
"Both" is much harder to achieve now. The economic logic of the Internet, computers, smartphones, and our software-driven world pushed us toward a single set of technical standards and a shared global infrastructure. Now adversaries use the same technologies that we rely on. The systems we want to undermine often are the very same systems we want to protect-thus the conundrum.
Many believe we must trade off security against privacy, on the theory that enhancing security tends to undermine privacy. The cyber conundrum imposes a harsher trade-off: Steps to enhance security in one place will often weaken security elsewhere.
The United States government has dealt with this trade-off by sacrificing the security of the global communications infrastructure to maximize the capacity of intelligence and national-security agencies to track and undermine adversaries. But, with good reason, computer security experts have deep misgivings about the wisdom of current policy.
Undermine First or Protect First?
An undermine-first strategy allows the United States government to intrude into adversaries' systems, exploiting this covert access to gather intelligence and occasionally to deliver a destructive cyberattack. The undermine-first strategy tends, however, to leave domestic systems vulnerable and exploitable because their vulnerabilities are the same ones that permit undermining of adversaries. Instead of fixing these vulnerabilities, the government focuses on domestic protection through surveillance, detection, and analysis, hoping that hostile activity can be identified, tracked, and countered before it causes substantial damage.
A protect-first strategy starts by identifying the key technologies that protect our systems and seeks to improve them by developing, vetting, and deploying stronger defenses. This strengthening makes it more difficult to intrude into rivals' systems and to conduct domestic surveillance, but the upside is that our systems can better resist attacks from all sources, whether hostile nation-states, criminals, or hacktivists.
The National Security Agency was already wrestling with the cyber conundrum in the early 1970s. When the National Bureau of Standards (NBS) first proposed the creation of a standard for encrypting sensitive but unclassified government data, the NSA debated how to respond. According to a now-declassified internal NSA history, "From the SIGINT [Signals Intelligence, i.e., electronic eavesdropping] standpoint, a competent industry standard could spread into undesirable areas, like Third World government communications, narcotics traffickers, and international terrorism targets." That some within the NSA argued against a standard to secure the U.S. government's own communications, for fear that adversaries would also adopt it to stymie surveillance, conveys the severity of the cyber conundrum even in an era before ubiquitous electronic communication.
In 1977, the NBS, with NSA help, issued the Data Encryption Standard (DES), which was widely used inside and outside of government for at least two decades. Nonetheless, because the NSA vouched for the security of DES but insisted on keeping secret the technical rationale for its design, suspicion of an NSA back door in DES took 15 years to dispel.
Between the 1970s and the early 2000s, the NSA planted itself in the undermine-first camp. Its use of surveillance grew with the Internet, exploded after September 11, and today is the central feature of cybersecurity policy.
Through the Back Door
Documents leaked by Edward Snowden show that by 2011 the NSA was spending more than $250 million annually on its SIGINT Enabling Project, which included activities to "insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices used by targets" and to "influence policies, standards, and specifications for commercial public key [encryption] technologies."
This effort to build vulnerabilities into electronic systems also involved changes in the NSA's strategy toward U.S. government encryption standards. In the early 2000s, the National Institute of Standards and Technology (NIST, the new name for NBS) devised a technical standard, again with the NSA's help, for a component called a Deterministic Random Bit Generator (DRBG). Despite its unappealing name, the DRBG plays a pivotal role in security: encryption keys are generated from its output, so an adversary who can predict that output can recover the keys and decrypt all of your secret data. NIST issued its DRBG standard in 2006.
We now know that the NSA almost certainly inserted a back door into one of the standard's core components, known as DUAL_EC. The story of how the NSA did this is a case study in the new world of mathematical cloak and dagger.
At its core, DUAL_EC relies on a "magic number" called P. If P is chosen randomly, the underlying mathematics are believed to be secure, but independent cryptographers realized in 2005 that a party who gets to hand-pick P can do so in a way that allows that party to compromise DUAL_EC. The P value published in the standard was picked by the NSA; the agency was conspicuously uninterested in any suggestion to generate a new random P that would be more trustworthy. When NIST suggested generating a new P, an NSA consultant replied that "NSA kyboshed this idea, and I was not allowed to publicly discuss it." The final standard didn't mention the possibility of a back door, nor did it say who had chosen P. Private-sector cryptographers already knew that an NSA back door was a possibility, and many avoided DUAL_EC as a result, but NIST declined to change the standard because it trusted the NSA.
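To see how a hand-picked constant becomes a back door, consider the toy generator sketched below. It is a simplified Python analogy, using ordinary modular arithmetic instead of the elliptic-curve math of the real DUAL_EC, and every name and number in it is illustrative. But it captures the trick: whoever picks the magic number can secretly derive it from another public constant, and that hidden relationship lets them recover the generator's internal state, and hence all of its future output, from a single observed output.

```python
# Toy analogue of the DUAL_EC back door, using modular arithmetic
# instead of elliptic curves. All numbers here are illustrative.

p = 2**13 - 1   # a small prime modulus (real systems use enormous ones)
Q = 3           # a public base value

# The party who picks the "magic number" P secretly derives it from Q,
# remembering the trapdoor exponent d. An honestly random P would have
# no d known to anyone.
d = 1234
P = pow(Q, d, p)

def next_output(state):
    """One step of the toy generator: emit an output, advance the state."""
    output = pow(Q, state, p)     # the random-looking value users see
    new_state = pow(P, state, p)  # the hidden internal state
    return output, new_state

# The back door: output^d = Q^(state*d) = P^state, which is exactly the
# next internal state. Whoever knows d can predict everything that follows.
state = 999
output, new_state = next_output(state)
assert pow(output, d, p) == new_state
```

In the real standard, the published P was chosen by the NSA, and no one outside the agency can tell whether a trapdoor value like d exists.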
NIST changed its position after the emergence of Snowden documents that tended to confirm an NSA back door in DUAL_EC. The NSA had reportedly made a secret deal with RSA Data Security, a leading purveyor of encryption software, in which the company was paid $10 million to arrange for its customers to use DUAL_EC by default. After learning all of this in 2013, NIST withdrew DUAL_EC and launched a review of its existing standards and its relationship with the NSA, because it now knew that even U.S. government standards could be subject to NSA undermining.
The NSA has also been developing and distributing malicious software that can infiltrate computers and extract information. Kaspersky Lab, a leading Russian antivirus research group, announced in February 2015 that it had captured and analyzed such software developed by "the most advanced threat actor we have seen." Kaspersky designated the malware's author "Equation Group," but experts believe it to be the NSA's Tailored Access Operations group. Equation Group software implants were seen in at least 30 countries before they fell into the hands of Kaspersky and other analysts. Now the software will provide a master class in exploitation to programmers around the world.
Moving to Protect-First
Three months after NIST withdrew DUAL_EC, a review initiated by President Barack Obama called for a shift in policy. Regarding encryption, the President's Review Group on Intelligence and Communications Technologies recommended that "the U.S. Government should: (1) fully support and not undermine efforts to create encryption standards; (2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and (3) increase the use of encryption and urge U.S. companies to do so." But there were few visible signals that policy had changed.
"No foreign nation, no hacker," Obama said in his 2015 State of the Union speech, "should be able to shut down our networks, steal our trade secrets, or invade the privacy of American families." But the nearly $14 billion requested for cybersecurity in the president's fiscal year 2016 budget proposal effectively supports and reinforces current undermine-first policy, a policy that has failed to stop the flood of attacks on American businesses and the government itself by foreign intelligence services, weekend hacktivists, and common criminals.
A protect-first policy of bolstering security technologies would identify the most critical pieces of security infrastructure, invest in making those defenses secure, and support their universal deployment. Such a policy would emphasize support for universal end-to-end encryption tools such as secure web browsing. A web page is delivered securely when the site's address starts with "https" (the "s" stands for secure) and your browser puts a lock or key icon next to the address. When a browser loads a secure page, the underlying protocol guarantees that the page remains confidential and protected from tampering while in transit between server and user, and the browser verifies that the server is not an impostor.
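Those guarantees can be seen directly. The sketch below is a minimal example using only Python's standard library (the hostname is just a placeholder); it performs the same handshake a browser does, encrypting the connection and checking that the server's certificate was issued by a trusted authority for the name the user asked for.

```python
# A minimal sketch of what "https" provides, using Python's standard
# library. The hostname is an example, not a recommendation.
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()  # trusts the usual certificate authorities

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket performs the TLS handshake: it encrypts the channel and
    # rejects the connection if the certificate doesn't match the hostname,
    # which is what keeps an impostor from posing as the server.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                 # e.g., "TLSv1.2"
        print(tls.getpeercert()["subject"])  # the identity the certificate asserts
```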
At present, secure browsing is underused and underfunded, leading to troubling security lapses. A notorious example is the Heartbleed bug, disclosed in April 2014. Heartbleed allowed attackers to reach across the Internet and extract the contents of a computer's memory, including encryption keys, passwords, and private information. Two-thirds of the websites on the Internet were vulnerable, along with countless computers embedded in cars, wireless routers, home appliances, and other equipment. Because exploitation via Heartbleed usually left no record, its full consequences will almost certainly never be known.
All of this was due to a single programming error in a software package called OpenSSL, which is used by the majority of websites that serve secure pages. By any measure, OpenSSL is a core piece of our cyber infrastructure. Yet it has been maintained by a very small team of developers (in the words of one journalist, "two guys named Steve"), and the foundation supporting it never had an annual budget of even $1 million. Despite its central role in web security, OpenSSL had never undergone a careful security audit.
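The error belonged to a mundane class: trusting a length field supplied by the other side. The sketch below is a simplified Python model of that class, not OpenSSL's actual C code, and its function names and data are invented for illustration. A "heartbeat" handler copies back as many bytes as the requester claims to have sent, so a dishonest claim hands back whatever secrets happen to sit nearby in memory.

```python
# A simplified model of the Heartbleed class of bug. The real bug was a
# missing bounds check in OpenSSL's C code; this toy mimics its effect.

def handle_heartbeat(payload, claimed_length, process_memory):
    start = process_memory.index(payload)
    # BUG: echo back claimed_length bytes without checking that the
    # payload is actually that long. Adjacent memory leaks out.
    return process_memory[start:start + claimed_length]

# The request payload sits next to secrets in the server's memory.
memory = b"ping" + b"SECRET-PRIVATE-KEY-BYTES"

print(handle_heartbeat(b"ping", 4, memory))   # honest request: b'ping'
print(handle_heartbeat(b"ping", 28, memory))  # attack: the secret leaks too
```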
Matthew Green, a cryptographer at Johns Hopkins University and an outspoken critic of OpenSSL, said after Heartbleed that "the OpenSSL Foundation has some very devoted people, it just doesn't have enough of them, and it can't afford enough of them." Since Heartbleed's disclosure, a consortium of companies, including some of the biggest names in the Internet business, has pledged a few million dollars to start the Core Infrastructure Initiative (CII), a grant-making program that funds security audits of important infrastructure components like OpenSSL. That budget is nowhere near the quarter-billion dollars devoted annually to the NSA's SIGINT Enabling Project, but it is a start. A more proactive government policy would provide ample funding for security audits. By leaving OpenSSL to its own devices, government perpetuates the status quo and implicitly rejects a protect-first strategy.
A similar situation applies to encrypted email, the state of which is well conveyed by a recent ProPublica headline: "The World's Email Encryption Software Relies on One Guy, Who is Going Broke." Werner Koch, the author and maintainer of GNU Privacy Guard (GPG), the most popular tool for encrypted email and a piece of critical security infrastructure also used to verify the integrity of operating-system updates on the most popular operating system for web servers, had been getting by on donations of $25,000 per year since 2001, and a new online fund drive was bringing in only modest donations. The ProPublica piece brought attention to Koch's plight, and a few hundred thousand dollars in donations poured in, enabling Koch to keep maintaining GPG. It was a success, of a sort. But passing the digital hat is not a sustainable way to fund critical security infrastructure.
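What GPG actually does in that infrastructure role is simple to show. The sketch below, with invented file names, checks a detached signature over a downloaded update by invoking the gpg command from Python; a check like this is roughly what package managers on Linux servers perform before installing updates.

```python
# A minimal sketch of GPG's integrity check: verify that an update file
# matches its detached signature. File names here are invented.
import subprocess

result = subprocess.run(
    ["gpg", "--verify", "update.tar.gz.sig", "update.tar.gz"],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print("Good signature: the file is exactly what its author signed.")
else:
    print("BAD signature: the file may have been tampered with.")
    print(result.stderr)
```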
The Limitations of Surveillance
Meanwhile, although precise numbers are hard to come by, one estimate is that 0.64 percent of U.S. gross domestic product is lost to cyber crime, part of a global growth industry worth more than $400 billion. Despite the fact that a cyberattack can decimate a company's operations and pry loose its secrets, and despite billions of dollars in annual direct losses to foreign governments and criminals, the most popular systems for secure web-page delivery and encrypted email get only crumbs from the $14 billion U.S. government cybersecurity budget.
Instead, the government usually treats cybersecurity as a military or intelligence problem and therefore tends to look first to the military and the intelligence community for a solution. The result is massive surveillance that gathers situational awareness, hoping to connect the dots to find and stop attacks. Some surveillance happens quietly, coming into the public eye only through leaks and investigative journalism. Some happens more openly, under the guise of "information sharing" between companies and government.
Surveillance of adversaries, both overseas and domestically with an appropriate court order, is prudent and necessary to prevent attacks and inform diplomatic and military decisions. Universal domestic surveillance is harder to justify on the merits. Officials argue that they need all of the data if we want them to connect the dots. But the problem is not a lack of dots. More often, the problem is that the dots can be connected in too many ways.
There is no reliable way to tell in advance which pattern marks an impending attack and which simply reflects one of the endless permutations of human social behavior.
Surveillance data is more useful in hindsight. In the Sony Pictures hack, intelligence and investigation were critical in connecting the dots after the attack had happened, even though they did very little to prevent the attack or to discover it in the year or so that it was ongoing.
Aggressive surveillance has limited efficacy and imposes real costs on U.S. companies. Users who are suspicious of the U.S. government (a group that includes most foreign users and more than a few Americans) want to steer clear of products and companies that might be complicit in surveillance. Foreign companies market themselves as more trustworthy because, unlike American companies, they can defy information demands from U.S. authorities. Analysts estimate that U.S. companies will lose at least tens of billions of dollars of business to users' surveillance concerns.
At the same time, news of U.S. government demands for data emboldens demands for similar access by other governments, including countries with much weaker civil-liberties records. Anything that facilitates U.S. government access will facilitate access by other governments.
Industry worries, too, about direct government attacks on its infrastructure. That is exactly what happened when the NSA tapped into the private communications lines that Google, Yahoo, and other major Internet companies use to move data internally, enabling the NSA to capture information on those systems' users without any request or notification. Consequently, the Internet companies are seen as complicit, vulnerable, or both.
The rift between government and industry was visible at the White House Summit on Cybersecurity and Consumer Protection, held at Stanford University on February 13. Obama called for "new legislation to promote greater information sharing between government and private sector, including liability protections for companies that share information about cyber threats," and announced that "our new Cyber Threat Intelligence Integration Center [will be] a single entity that's analyzing and integrating and quickly sharing intelligence about cyber threats across government so we can act on all those threats even faster." After the speech, he signed an executive order implementing these proposals. To the president, cyber defense means collecting more information and using it more aggressively: a policy of undermining and surveillance.
An Alternative Strategy
Apple CEO Tim Cook, who spoke before the president at Stanford, called for a different approach, emphasizing the goal of strengthening systems against snooping. "People have entrusted us with their most personal and precious information. … History has shown us that sacrificing our right to privacy can have dire consequences. … If those of us in positions of responsibility fail to do everything in our power to protect the right of privacy, we risk something far more valuable than money. We risk our way of life. Fortunately, technology gives us the tools to avoid these risks, and it is my sincere hope that by using them, and by working together, we will." The CEOs of other companies such as Google and Yahoo turned down invitations to the public event, reportedly due to frustration with White House policy, opting instead to send deputies. To much of the U.S. technology industry, the best policy is to build stronger walls around users' data.
Obama, in an interview with Re/code's Kara Swisher after the Stanford event, recognized this tension, saying, "I lean probably further in the direction of strong encryption than some do inside of law enforcement. But I am sympathetic to law enforcement because I know the kind of pressure they're under to keep us safe. … Now, in fairness, I think the folks who are in favor of airtight encryption also want to be protected from terrorists."
It would be a great achievement if we could somehow provide strong encryption against every adversary, with a loophole usable only under a valid warrant. The same hope has been expressed by FBI Director James Comey, by British Prime Minister David Cameron, and by The Washington Post's editorial board, which famously asked for a "secure golden key" for law enforcement.
But if there is a virtual access port that a technology vendor can open upon seeing a warrant, as Comey has called for, then the same vendor can open the same port without a warrant. The technology cannot tell whether the employee requesting access has been compelled by a lawful court order, by a blackmailer, by an extortionist, or by a foreign government.
As far as the technology is concerned, access under a court order is the same as access to data by an insider. And misbehaving insiders often have privileged access that makes their attacks devastating. Consider Snowden's attack on the NSA or the electronic thefts revealed in February in which thieves impersonating insiders took hundreds of millions of dollars from banks around the world. If we want to lock out insiders, we will also have to lock out those with warrants. We cannot avoid the choice between access and security.
The largest Internet companies have been moving to adopt encryption for several years. Google switched its website over to secure access by default in 2010 and 2011. Microsoft's outlook.com email service followed suit in 2012, Facebook in 2013, and Yahoo Mail in 2014. Many of these products later went further, requiring secure access and disabling insecure access. The pace picked up after the Snowden revelations. Apple and Google beefed up encryption of data stored on iPhones and Android devices in 2014. In August 2014, Google announced that it would boost the position of secure pages in search results, treating encryption as an indicator that a site is serious about security.
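Mechanically, "requiring secure access" is straightforward, as the sketch below shows (the hostname and port are hypothetical): every plain-HTTP request gets a permanent redirect to the HTTPS site, and the HTTPS responses carry a Strict-Transport-Security header instructing returning browsers never to try plain HTTP again.

```python
# A minimal sketch of disabling insecure access: redirect all plain-HTTP
# traffic to HTTPS. Hostname and port are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    def do_GET(self):
        # Permanent redirect: send the visitor to the secure site.
        self.send_response(301)
        self.send_header("Location", "https://example.com" + self.path)
        self.end_headers()

# The HTTPS server would then add to each of its responses:
#   Strict-Transport-Security: max-age=31536000
# so that browsers remember to use HTTPS only, skipping this redirect.
HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()
```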
Meanwhile, citizens who went to Whitehouse.gov to read the text of the president's Cybersecurity Summit speech could not do so over a secure connection, because the White House website did not offer even the option of secure browsing. Visitors to "https://whitehouse.gov" received a sternly worded security warning from their browsers and had to go to another site, such as Google's YouTube, if they wanted to watch the president's speech on a secure page.
Closing the Gap
The divide between government and industry runs deep, and it is cultural as much as political. The intelligence community, which dominates cybersecurity policy in government, wants a strategy that favors intelligence gathering, which means undermining and surveillance. The technical community, which dominates in industry, wants to strengthen systems, using tools such as encryption to protect privacy. The two communities come down on different sides of the cyber conundrum.
Ideally, both communities would be represented in government and would be at the table when important cyber policy decisions were being made. But the parts of government that run cybersecurity policy, other than the intelligence agencies, have little technical expertise. Under these conditions, the cyber conundrum generates tense meetings and competing speeches, but no solutions.
Is there any escape from the cyber conundrum? The way out, if we can find it, will bolster the security of our common technical infrastructure while finding ways to target the exploitation of identified adversaries. It will protect globally and exploit locally. Strengthening security for everyone will make it harder to break into adversaries' systems, but the intelligence community has developed some impressive tools for local exploitation. Thus far, government has not been willing to make major investments in strengthening security infrastructure, for fear that doing so would make broad surveillance more difficult. Until our leaders recognize the full costs of their current strategy, that is unlikely to change. And the attacks will continue.