Be careful what you wish for

I would be more worried that someone would kill me in order to get the documents released than I would be that someone would kill me to prevent the documents from being released. Any real-world situation involves multiple adversaries, and it’s important to keep all of them in mind when designing a security system.

—Bruce Schneier, in response to Edward Snowden having a “dead man’s switch” that would release all of the documents he stole should anything happen to him.

New column on Dark Reading

I have a new column/blog on Dark Reading. Or, more accurately, I’ve taken over a column called Sophos Security Insights (previously SophosLabs Insights).

The first post, “Forget Standardization. Embrace BYOD.” went up today. Here’s a sneak peek:

Despite its rocky start, Windows 8 has IT departments salivating over the idea of standardizing on a single platform. It’s a compelling vision: phones, tablets, and workstations all running a single OS and managed through a shared set of native Microsoft tools. Compelling, perhaps, but for most organizations, it ain’t gonna happen.

Read the full post over at Dark Reading or subscribe to the feed.

No cell phone kill switch, please

From the “wait, what?” department:

In his letters, [New York State Attorney General] Schneiderman asked why companies such as Apple and Samsung, which develop such sophisticated devices, can’t also create technology to render stolen devices inoperable and eliminate the expanding black market.

Apart from the technical challenges, just think of the potential problems (errors, malicious hacking, etc.) that would result from our cell phones having a remote-triggered self-destruct capability controlled by phone vendors. If you want to protect your phone, install security software like the free Sophos Mobile Security, which allows you to remotely locate, lock, or wipe your phone but doesn’t render the phone itself inoperable. And if you’re that concerned about your phone being stolen, buy insurance (or, for families, self-insure by setting aside enough savings to replace one of the family’s phones in case of loss or theft).

Sophos bound

I’m very excited to announce that, in two weeks, I will be joining the team at Sophos. The company, dual-headquartered in Abingdon, UK, and Burlington, MA, creates some of the best network and endpoint security products for small and medium enterprises. Sophos was one of the first companies to join StopBadware’s partner program when it launched in 2011, and I’ve had impressively positive interactions with the people there ever since. They also have one of the most prolific and entertaining blogs in the industry.

I’ll be joining Sophos’s marketing team as a Senior Product Marketing Manager, specializing in endpoint security. I have my friend and colleague Joram Borenstein to thank for helping me realize that much of the work I’ve done at StopBadware over the past few years has been product marketing, even if I didn’t have a name for it. I’m looking forward to this foray into a new field and a new organization. I’m also glad that I’ll be able to draw on the immense amount I’ve learned about the security industry during my five and a half years at StopBadware. I’ve had the chance to work with amazing people on our staff and board, at our partner companies, and throughout the industry. I’m grateful for the opportunity I was given to lead this exciting initiative, and I look forward to remaining involved as a member of the StopBadware Board of Directors.

I’ll be spending this week wrapping things up and training my replacement at StopBadware. Next week I get to take a much-needed break, and then I’ll jump into my new role at Sophos.

Accountability for insecure software

The FTC recently settled charges with mobile phone maker HTC, which provided highly insecure software on its Android phones:

The Commission charged that HTC America failed to employ reasonable and appropriate security practices in the design and customization of the software on its mobile devices. Among other things, the complaint alleged that HTC America failed to provide its engineering staff with adequate security training, failed to review or test the software on its mobile devices for potential security vulnerabilities, failed to follow well-known and commonly accepted secure coding practices, and failed to establish a process for receiving and addressing vulnerability reports from third parties.

I haven’t seen much written about this, but it seems like a big deal. It’s the first time I can think of that a U.S. regulatory agency has held a company accountable for failing to provide reasonable security in its products. Indeed, for many years, software and hardware vendors alike have avoided accountability. Vendors often disclaim responsibility through license agreements and/or by asserting that all products have flaws, so they can’t be expected to provide perfect security. It remains to be seen whether this action marks the start of a trend toward greater vendor accountability, and whether it will prompt other product vendors to take notice and beef up their security efforts.

Issuing a patch doesn’t fix the problem

Alan Paller makes a great point in a comment in today’s issue of SANS NewsBites:

Issuing a patch does NOT fix the problem. Vendors should not be allowed to get away with leaving major security flaws in software used in the critical national infrastructure without ensuring that (1) each buyer knows about the risk (emails haven’t changed, the right person is on the mailing list) and (2) the buyer has confirmed that he/she has the needed knowledge and support from the vendor to install the patch effectively. As an industry, we have to stop pretending that a patch release fixes a security flaw. Too often, a patch is never installed because the right person doesn’t know about it or know enough about it, and no automated capability is in place to ensure the patch is installed.

The general point, that a vendor issuing a patch does not mean that the problem is solved, applies far more broadly than just critical infrastructure. Microsoft has clearly recognized this, as they have created advertising and educational campaigns to encourage users to update old versions of Internet Explorer. For all the excitement that is generated when attacks against zero day vulnerabilities occur, most malicious activity on the Internet exploits software for which patches have been available for weeks, months, or years.
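Paller’s last point, that no automated capability ensures a patch is actually installed, is worth making concrete. Here’s a minimal, hypothetical sketch of such a check in Python: compare the versions in an asset inventory against the versions that advisories say fix a flaw. The product names, version numbers, and advisory data below are all invented for illustration; a real check would pull its inventory and advisory feed from live sources and handle messier version schemes.

```python
# Hypothetical sketch: verify that inventoried software is at or above
# the version that fixes a known flaw. All data here is invented for
# illustration; a real check would use live inventory and advisory feeds.

def parse_version(version: str) -> tuple:
    """Turn a dotted version like '7.0.11' into (7, 0, 11) for comparison."""
    return tuple(int(part) for part in version.split("."))

def is_patched(installed: str, fixed_in: str) -> bool:
    """True if the installed version is at or above the fixed version."""
    return parse_version(installed) >= parse_version(fixed_in)

# Versions that fix hypothetical advisories, keyed by product name.
advisories = {"java-plugin": "7.0.13", "flash-player": "11.6.602"}

# What our (hypothetical) asset inventory says is actually installed.
inventory = {"java-plugin": "7.0.11", "flash-player": "11.6.602"}

for product, installed in inventory.items():
    fixed_in = advisories.get(product)
    if fixed_in and not is_patched(installed, fixed_in):
        print(f"{product} {installed} is unpatched (fixed in {fixed_in})")
```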

Encourage FBI to hoard exploits? No thanks.

A misguided opinion piece in Wired by Matt Blaze and Susan Landau argues that law enforcement should be encouraged to exploit software vulnerabilities to wiretap suspects, instead of requiring a backdoor in communication software. I agree with the latter premise, but the solution Blaze and Landau propose will result in collateral damage and perverse incentives.

Again, I’m with them this far:

Whether we like them or not, wiretaps — legally authorized ones only, of course — are an important law enforcement tool. But mandatory wiretap backdoors in internet services would invite at least as much new crime as it could help solve.

But then they offer a poor solution:

…there’s already an alternative in place: buggy, vulnerable software. The same vulnerabilities that enable crime in the first place also give law enforcement a way to wiretap — when they have a narrowly targeted warrant and can’t get what they’re after some other way.

Sure, because what could possibly go wrong? Well, let’s see. Authorities could end up creating new forms of malware or remote exploit tools that get co-opted for use by criminals, much as the authors anticipate would happen with mandated backdoors. Attempts to break into or infect a system could lead to unintended damage to innocent systems. The authorities could pressure software vendors not to patch a vulnerability until they finish gathering evidence for a big case. The FBI could outbid a software vendor for information about a new vulnerability, leading to better investigative capabilities at the expense of everyone else’s security.

The authors do attempt to address some of these concerns:

And when the FBI finds a vulnerability in a major piece of software, shouldn’t they let the manufacturer know so innocent users can patch? Should the government buy exploit tools on the underground market or build them themselves? These are difficult questions, but they’re not fundamentally different from those we grapple with for dealing with informants, weapons, and other potentially dangerous law enforcement tools.

These are very difficult questions, and they are fundamentally different from the examples listed. They’re different because of the incentives for law enforcement to interfere with the security of the general public. They’re different because computer and network security are poorly understood by judges and the general public. And they’re different because of the inherent lack of accountability in behavior that takes place online.

But at least targeted exploit tools are harder to abuse on a large scale than globally mandated backdoors in every switch, every router, every application, every device.

Everything’s relative, I suppose, but criminals have shown repeatedly that exploits against specific software vulnerabilities (e.g., in Java or Flash Player) can be used individually or combined with others to wreak havoc on the general Internet-using public. What’s good for the goose with a badge is good for the gander with an illicit profit motive.

I’d argue that wiretapping is a technique that was a product of its time: the telephone age. As technology marches on, law enforcement will need to turn to old strategies that still have value (e.g., bugging a person’s home or office) and new ones that have yet to be devised (or disclosed). These may well include certain malicious hacking techniques, but I hope that exploitation of software vulnerabilities by the authorities will not become a mainstream law enforcement strategy.

Obscurity is a double-edged sword

In an article in The Atlantic (h/t Bruce Schneier), Woodrow Hartzog and Evan Selinger argue for using the concept of obscurity in place of privacy when discussing the degree to which data is easily accessible:

Obscurity is the idea that when information is hard to obtain or understand, it is, to some degree, safe. Safety, here, doesn’t mean inaccessible. Competent and determined data hunters armed with the right tools can always find a way to get it. Less committed folks, however, experience great effort as a deterrent.

Online, obscurity is created through a combination of factors. Being invisible to search engines increases obscurity. So does using privacy settings and pseudonyms. Disclosing information in coded ways that only a limited audience will grasp enhances obscurity, too. Since few online disclosures are truly confidential or highly publicized, the lion’s share of communication on the social web falls along the expansive continuum of obscurity: a range that runs from completely hidden to totally obvious.

This is great framing, as it offers an important way of understanding the nuance that is lost when discussing things in terms of privacy, which is often treated as a binary concept. However, the article doesn’t touch on the fact that there are both pros and cons of data falling at any given point along the obscurity continuum.

Consider, for example, whois records, which provide contact information for registrants of domain names. These live somewhere in the middle of the obscurity spectrum. Registrars are supposed to publish the information via the whois service, so the records are not completely private, though some people do conceal their information behind a privacy proxy. (A privacy proxy completely obscures the who, though it does not obscure the means to contact the registrant, as the proxy service is supposed to provide a pass-through email address.) Those who don’t use proxies have their contact information published in plain text. However, automatically grabbing and parsing that information is non-trivial, thanks to the decentralized distribution of whois servers, the lack of a standard data format, and the rate limiting imposed by registrars.
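To make the parsing difficulty concrete, here’s a minimal sketch of a raw whois lookup in Python. The protocol itself (RFC 3912) is trivial: send a domain name over TCP port 43 and read back free-form text. Everything beyond that, which server to ask for a given TLD, what the fields are called, how rate limits apply, varies by registry and registrar. The Verisign server below handles .com; the registrar-matching regex is deliberately naive and would not survive contact with every TLD.

```python
import re
import socket

def whois_query(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Raw whois lookup (RFC 3912): send the domain over TCP port 43,
    then read until the server closes the connection."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# The response is unstructured text, and field names differ across
# registries and registrars, so any pattern like this one is fragile.
record = whois_query("example.com")
match = re.search(r"Registrar:\s*(.+)", record)
print(match.group(1).strip() if match else "no Registrar field found")
```

Multiply that fragility by every TLD’s quirks and every registrar’s rate limits, and the partial obscurity of whois data starts to look like a design feature rather than an accident.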

If you worry, as many people do, about the harvesting of whois records en masse for use by spammers or other criminals, this partial obscurity is a blessing. It makes it more difficult or “expensive” for criminals to do their work. For those of us working to identify malicious actors and correlate badware domains, or trying to automate the process of reporting compromised websites, though, the same obscurity is a curse. The same dichotomy will occur with most changes in data obscurity, including the introduction of Facebook Graph Search, which was used as an example in the article.

Hartzog and Selinger end their essay with the following call to action:

Obscurity is a protective state that can further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power. A major task ahead is for society to determine how much obscurity citizens need to thrive.

Taking into account the negative aspects of obscurity (or, put another way, the benefits of transparency), and the fact that there’s no one-size-fits-all solution, I’d amend their conclusion as follows:

Obscurity and transparency can each in its own way further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power. A major task ahead is for society to determine the balance of transparency and obscurity that citizens need in various aspects of their lives to thrive.

A no-win strategy

In the Jan. 11, 2013 issue of SANS NewsBites, editor Brian Honan writes:

It seems each time a zero day exploit is found in software, be that Java or otherwise, the industry pundits recommend that people stop using that software. New vulnerabilities will always be discovered in the software we use. If our best defence to a threat is to cause a denial-of-service on ourselves, then this in the long term is a no-win strategy for us as an industry.

In this case, the advice was to disable the Java plugin in the browser, which, in fairness, is something many users could do without impact. Still, I couldn’t agree more with Honan’s comment. Here are a few other examples of common advice from security professionals and journalists that contradicts the way software is designed to be used:

  • Don’t click on links in email messages (even though email software automatically adds links to URLs for convenience).
  • Don’t install Android apps from anywhere other than the official app store(s) (thus negating one of the advantages of an open platform without a central gatekeeper).
  • Don’t click on ads (which most Web businesses depend upon for revenue).

As Honan says, we have to find a better way to protect our users and our systems than saying “don’t use technology the way it’s designed to be used.” His comment goes on to point to the CSIS Twenty Critical Security Controls, which are great guidelines for large and/or high security organizations. For consumers and small businesses, though, we’ll need to look to other answers: increased industry cooperation and law enforcement to reduce the threat, improved interfaces and signals to help users make safer choices, more secure architectures, and so on.