The NSA and other elite intelligence agencies devote millions of dollars to hunting for common software flaws that are critical to stealing data from secure computers. Open-source software like OpenSSL, where the flaw was found, is a primary target.
The Heartbleed flaw, introduced in early 2012 in a minor adjustment to OpenSSL's code, highlights one of the failings of open-source software development.
While many Internet companies rely on the free code, its integrity depends on a small number of underfunded researchers who devote their energies to the projects.
In contrast, the NSA has more than 1,000 experts devoted to ferreting out such flaws using sophisticated analysis techniques, many of them classified. The agency found Heartbleed shortly after its introduction, according to one of the people familiar with the matter, and it became a basic part of the agency’s toolkit for stealing account passwords and other common tasks.
The NSA has faced nine months of withering criticism for the breadth of its spying, documented in a rolling series of leaks from Edward Snowden, a former agency contractor.
The revelations have created a clearer picture of the two roles, sometimes contradictory, played by the U.S.'s largest spy agency. The NSA protects the computers of the government and critical industry from cyber-attacks, while gathering troves of intelligence by attacking the computers of others, including terrorist organizations, nuclear smugglers and other governments.
Ordinary Internet users are ill-served by the arrangement because serious flaws are not fixed, exposing their data to domestic and international spy organizations and criminals, said John Pescatore, director of emerging security trends at the SANS Institute, a Bethesda, Maryland-based cyber-security training organization.
“If you combine the two into one government agency, which mission wins?” asked Pescatore, who formerly worked in security for the NSA and the U.S. Secret Service. “Invariably when this has happened over time, the offensive mission wins.”
When researchers uncovered the Heartbleed bug hiding in plain sight and made it public on April 7, it underscored an uncomfortable truth: The public may be placing too much trust in software and hardware developers to ensure the security of our most sensitive transactions.
“We’ve never seen anything quite like this,” said Michael Sutton, vice president of security research at Zscaler, a San Jose, California-based security firm. “Not only is a huge portion of the Internet impacted, but the damage that can be done, and with relative ease, is immense.”
The potential stems from a flawed implementation of the protocol used to encrypt communications between users and websites protected by OpenSSL, making those supposedly secure sites an open book. The damage could be done with relatively simple scans, allowing a single attacker to hit millions of machines.
Questions remain about whether anyone other than the U.S. government might have exploited the flaw before the public disclosure. Sophisticated intelligence agencies in other countries are one possibility.
If criminals found the flaw before a fix was published this week, they could have scooped up troves of passwords for bank accounts, e-commerce sites and e-mail accounts worldwide.
Evidence of that is so far lacking, and it’s possible that cybercriminals missed the potential in the same way security professionals did, suggested Tal Klein, vice president of marketing at Adallom, in Menlo Park, California.
The fact that the vulnerability existed in the transmission of ordinary data — even if it’s the kind of data the vast majority of users are concerned about — may have been a factor in the decision by NSA officials to keep it a secret, said James Lewis, a cybersecurity senior fellow at the Center for Strategic and International Studies.
“They actually have a process when they find this stuff that goes all the way up to the director” of the agency, Lewis said. “They look at how likely it is that other guys have found it and might be using it, and they look at what’s the risk to the country.”
Lewis said the NSA has a range of options, including exploiting the vulnerability to gain intelligence for a short period of time and then discreetly contacting software makers or open source researchers to fix it.
The SSL protocol has a history of security problems, Lewis said, and is not the primary form of protection governments and others use to transmit highly sensitive information.
“I knew hackers who could break it nearly 15 years ago,” Lewis said of the SSL protocol.
That may not soothe the millions of users who were left vulnerable for so long.
Following the leaks about the NSA’s electronic spying, President Barack Obama convened a panel to review surveillance activities and suggest reforms. Among the dozens of changes put forward was a recommendation that the NSA quickly move to fix software flaws rather than exploit them, and that flaws be exploited only in “rare instances” and for short periods of time.
“If the NSA knows about a vulnerability, then often other nation states and even criminal organizations can exploit the same security vulnerability,” said Harley Geiger, senior counsel for the Center for Democracy & Technology in Washington. “What may be a good tool for the NSA may also turn out to be a tool for organizations that are less ethical or have no ethics at all.”
Currently, the NSA has a trove of thousands of such vulnerabilities that can be used to breach some of the world’s most sensitive computers, according to a person briefed on the matter. Intelligence chiefs have said the country’s ability to spot terrorist threats and understand the intent of hostile leaders would be vastly diminished if their use were prohibited.