Encourage FBI to hoard exploits? No thanks.

A misguided opinion piece in Wired by Matt Blaze and Susan Landau argues that law enforcement should be encouraged to exploit software vulnerabilities to wiretap suspects, instead of requiring a backdoor in communication software. I agree that mandated backdoors are a bad idea, but the solution Blaze and Landau propose will result in collateral damage and perverse incentives.

Again, I’m with them this far:

Whether we like them or not, wiretaps — legally authorized ones only, of course — are an important law enforcement tool. But mandatory wiretap backdoors in internet services would invite at least as much new crime as it could help solve.

But then they offer a poor solution:

…there’s already an alternative in place: buggy, vulnerable software. The same vulnerabilities that enable crime in the first place also give law enforcement a way to wiretap — when they have a narrowly targeted warrant and can’t get what they’re after some other way.

Sure, because what could possibly go wrong? Well, let’s see:

  • Authorities could end up creating new forms of malware or remote exploit tools that get co-opted for use by criminals, much as the authors anticipate would happen with mandated backdoors.
  • Attempts to break into or infect a system could lead to unintended damage to innocent systems.
  • The authorities could pressure software vendors not to patch a vulnerability until they finish gathering evidence for a big case.
  • The FBI could outbid a software vendor for information about a new vulnerability, leading to better investigative capabilities at the expense of everyone else’s security.

The authors do attempt to address some of these concerns:

And when the FBI finds a vulnerability in a major piece of software, shouldn’t they let the manufacturer know so innocent users can patch? Should the government buy exploit tools on the underground market or build them themselves? These are difficult questions, but they’re not fundamentally different from those we grapple with for dealing with informants, weapons, and other potentially dangerous law enforcement tools.

These are very difficult questions, and they are fundamentally different from the examples listed. They’re different because of the incentives for law enforcement to interfere with the security of the general public. They’re different because computer and network security are poorly understood by judges and the general public. And they’re different because of the inherent lack of accountability in behavior that takes place online.

But at least targeted exploit tools are harder to abuse on a large scale than globally mandated backdoors in every switch, every router, every application, every device.

Everything’s relative, I suppose, but criminals have shown repeatedly that exploits against specific software vulnerabilities (e.g., in Java or Flash Player) can be used individually or combined with others to wreak havoc on the general Internet-using public. What’s good for the goose with a badge is good for the gander with an illicit profit motive.

I’d argue that wiretapping is a product of its time: the telephone age. As technology marches on, law enforcement will need to turn to old strategies that still have value (e.g., bugging a person’s home or office) and to new ones that have yet to be devised (or disclosed). These may well include certain malicious hacking techniques, but I hope that exploitation of software vulnerabilities by the authorities will not become a mainstream law enforcement strategy.

Obscurity is a double-edged sword

In an article in The Atlantic (h/t Bruce Schneier), Woodrow Hartzog and Evan Selinger argue for using the concept of obscurity in place of privacy when discussing the degree to which data is easily accessible:

Obscurity is the idea that when information is hard to obtain or understand, it is, to some degree, safe. Safety, here, doesn’t mean inaccessible. Competent and determined data hunters armed with the right tools can always find a way to get it. Less committed folks, however, experience great effort as a deterrent.

Online, obscurity is created through a combination of factors. Being invisible to search engines increases obscurity. So does using privacy settings and pseudonyms. Disclosing information in coded ways that only a limited audience will grasp enhances obscurity, too. Since few online disclosures are truly confidential or highly publicized, the lion’s share of communication on the social web falls along the expansive continuum of obscurity: a range that runs from completely hidden to totally obvious.

This is great framing: it offers an important way of understanding the nuance that is lost when we discuss things in terms of privacy, which is often treated as a binary concept. However, the article doesn’t touch on the fact that there are both pros and cons to data falling at any given point along the obscurity continuum.

Consider, for example, whois records, which provide contact information for the registrants of domain names. These live somewhere in the middle of the obscurity spectrum. Registrars are supposed to publish the information via the whois service, so the records are not completely private, though some people do conceal their information behind a privacy proxy. (A privacy proxy completely obscures the who, though it does not obscure the means to contact the registrant, as the proxy service is supposed to provide a pass-through email address.) Those who don’t use proxies have their contact information published in plain text. Even so, automatically retrieving and parsing the information is non-trivial: whois servers are scattered across registries and registrars, there is no standard data format, and registrars impose rate limits.
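
To make that concrete, here is a minimal sketch of a raw whois lookup using nothing but Python’s standard socket module (the whois protocol, RFC 3912, amounts to “send the query over TCP port 43 and read the reply”). The server name and the field-matching heuristic are illustrative only; in practice you must also discover the right server for each TLD, follow referrals from thin registries like .com to the registrar’s own server, and stay under each operator’s rate limits:

    import socket

    def whois_query(domain, server="whois.verisign-grs.com"):
        """Raw whois lookup per RFC 3912: connect to TCP port 43, send the
        query terminated by CRLF, then read until the server closes."""
        with socket.create_connection((server, 43), timeout=10) as sock:
            sock.sendall(domain.encode("ascii") + b"\r\n")
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    # The response is free-form text, and every registry and registrar
    # formats it differently, so "parsing" tends to degenerate into
    # per-server heuristics like this simple line-prefix match:
    for line in whois_query("example.com").splitlines():
        if line.strip().lower().startswith(("registrar:", "registrant")):
            print(line.strip())

Every one of those extra steps adds cost for whoever is doing the collecting, which is exactly the partial obscurity at issue here.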

If you worry, as many people do, about the harvesting of whois records en masse for use by spammers or other criminals, this partial obscurity is a blessing. It makes it more difficult or “expensive” for criminals to do their work. For those of us working to identify malicious actors and correlate badware domains, or trying to automate the process of reporting compromised websites, though, the same obscurity is a curse. The same dichotomy will occur with most changes in data obscurity, including the introduction of Facebook Graph Search, which was used as an example in the article.

Hartzog and Selinger end their essay with the following call to action:

Obscurity is a protective state that can further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power. A major task ahead is for society to determine how much obscurity citizens need to thrive.

Taking into account the negative aspects of obscurity (or, put another way, the benefits of transparency), and the fact that there’s no one-size-fits-all solution, I’d amend their conclusion as follows:

Obscurity and transparency can each in its own way further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power. A major task ahead is for society to determine the balance of transparency and obscurity that citizens need in various aspects of their lives to thrive.

A no-win strategy

In the Jan. 11, 2013 issue of SANS NewsBites, editor Brian Honan writes:

It seems each time a zero day exploit is found in software, be that Java or otherwise, the industry pundits recommend that people stop using that software. New vulnerabilities will always be discovered in the software we use. If our best defence to a threat is to cause a denial-of-service on ourselves then this in the long term is a no-win strategy for us as an industry.

In this case, the advice was to disable the Java plugin in the browser, which, in fairness, is something many users could do without impact. Still, I couldn’t agree more with Honan’s comment. Here are a few other examples of common advice from security professionals and journalists that contradicts the way software is designed to be used:

  • Don’t click on links in email messages (even though email software automatically adds links to URLs for convenience; see the sketch after this list).
  • Don’t install Android apps from anywhere other than the official app store(s) (thus negating one of the advantages of an open platform without a central gatekeeper).
  • Don’t click on ads (which most Web businesses depend upon for revenue).
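
To make the first item concrete, here is a minimal sketch, my own illustration rather than any particular mail client’s code, of the kind of auto-linkification mail software typically performs on plain-text messages. The convenience feature itself is what puts “don’t click links” at odds with the design:

    import re

    # A simplified stand-in for the URL auto-linking most mail clients do:
    # wrap every bare URL found in plain text in an HTML anchor tag.
    URL_RE = re.compile(r'(https?://[^\s<>"]+)')

    def linkify(text):
        return URL_RE.sub(r'<a href="\1">\1</a>', text)

    print(linkify("Reset your password at https://example.com/reset"))
    # -> Reset your password at
    #    <a href="https://example.com/reset">https://example.com/reset</a>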

As Honan says, we have to find a better way to protect our users and our systems than saying “don’t use technology the way it’s designed to be used.” His comment goes on to point to the CSIS Twenty Critical Security Controls, which are great guidelines for large and/or high-security organizations. For consumers and small businesses, though, we’ll need other answers: increased industry cooperation and law enforcement action to reduce the threat, improved interfaces and signals that help users make safer choices, more secure architectures, and so on.