Issuing a patch doesn’t fix the problem

Alan Paller makes a great point in a comment in today’s issue of SANS NewsBites:

Issuing a patch does NOT fix the problem. Vendors should not be allowed to get away with leaving major security flaws in software used in the critical national infrastructure without ensuring that (1) each buyer knows about the risk (emails haven’t changed, the right person is on the mailing list) and (2) the buyer has confirmed that he/she has the needed knowledge and support from the vendor to install the patch effectively. As an industry, we have to stop pretending that a patch release fixes a security flaw. Too often, a patch is never installed because the right person doesn’t know about it or know enough about it and no automated capability is in place to ensure the patch is installed.

The general point, that a vendor issuing a patch does not mean that the problem is solved, applies far more broadly than just critical infrastructure. Microsoft has clearly recognized this, as they have created advertising and educational campaigns to encourage users to update old versions of Internet Explorer. For all the excitement that is generated when attacks against zero-day vulnerabilities occur, most malicious activity on the Internet exploits software for which patches have been available for weeks, months, or years.

Encourage FBI to hoard exploits? No thanks.

A misguided opinion piece in Wired by Matt Blaze and Susan Landau argues that law enforcement should be encouraged to exploit software vulnerabilities to wiretap suspects, instead of requiring a backdoor in communication software. I agree that mandated backdoors are a bad idea, but the solution Blaze and Landau propose will result in collateral damage and perverse incentives.

Again, I’m with them this far:

Whether we like them or not, wiretaps — legally authorized ones only, of course — are an important law enforcement tool. But mandatory wiretap backdoors in internet services would invite at least as much new crime as it could help solve.

But then they offer a poor solution:

…there’s already an alternative in place: buggy, vulnerable software. The same vulnerabilities that enable crime in the first place also give law enforcement a way to wiretap — when they have a narrowly targeted warrant and can’t get what they’re after some other way.

Sure, because what could possibly go wrong? Well, let’s see. Authorities could end up creating new forms of malware or remote exploit tools that get co-opted for use by criminals, much as the authors anticipate would happen with mandated backdoors. Attempts to break into or infect a system could lead to unintended damage to innocent systems. The authorities could pressure software vendors not to patch a vulnerability until they finish gathering evidence for a big case. The FBI could outbid a software vendor for information about a new vulnerability, leading to better investigative capabilities at the expense of everyone else’s security.

The authors do attempt to address some of these concerns:

And when the FBI finds a vulnerability in a major piece of software, shouldn’t they let the manufacturer know so innocent users can patch? Should the government buy exploit tools on the underground market or build them themselves? These are difficult questions, but they’re not fundamentally different from those we grapple with for dealing with informants, weapons, and other potentially dangerous law enforcement tools.

These are very difficult questions, and they are fundamentally different from the examples listed. They’re different because of the incentives for law enforcement to interfere with the security of the general public. They’re different because computer and network security are poorly understood by judges and the general public. And they’re different because of the inherent lack of accountability in behavior that takes place online.

But at least targeted exploit tools are harder to abuse on a large scale than globally mandated backdoors in every switch, every router, every application, every device.

Everything’s relative, I suppose, but criminals have shown repeatedly that exploits against specific software vulnerabilities (e.g., in Java or Flash Player) can be used individually or combined with others to wreak havoc on the general Internet-using public. What’s good for the goose with a badge is good for the gander with an illicit profit motive.

I’d argue that wiretapping is a technique that was a product of its time: the telephone age. As technology marches on, law enforcement will need to turn to old strategies that still have value (e.g., bugging a person’s home or office) and new ones that have yet to be devised (or disclosed). These may well include certain malicious hacking techniques, but I hope that exploitation of software vulnerabilities by the authorities will not become a mainstream law enforcement strategy.

Obscurity is a double-edged sword

In an article in The Atlantic (h/t Bruce Schneier), Woodrow Hartzog and Evan Selinger argue for using the concept of obscurity in place of privacy when discussing the degree to which data is easily accessible:

Obscurity is the idea that when information is hard to obtain or understand, it is, to some degree, safe. Safety, here, doesn’t mean inaccessible. Competent and determined data hunters armed with the right tools can always find a way to get it. Less committed folks, however, experience great effort as a deterrent.

Online, obscurity is created through a combination of factors. Being invisible to search engines increases obscurity. So does using privacy settings and pseudonyms. Disclosing information in coded ways that only a limited audience will grasp enhances obscurity, too. Since few online disclosures are truly confidential or highly publicized, the lion’s share of communication on the social web falls along the expansive continuum of obscurity: a range that runs from completely hidden to totally obvious.

This is great framing, as it offers an important way of understanding the nuance that is lost when discussing things in terms of privacy, which is often treated as a binary concept. However, the article doesn’t touch on the fact that there are both pros and cons of data falling at any given point along the obscurity continuum.

Consider, for example, whois records, which provide contact information for registrants of domain names. These live somewhere in the middle of the obscurity spectrum. Registrars are supposed to publish the information via the whois service, so the records are not completely private, though some people do conceal their information behind a privacy proxy. (A privacy proxy completely obscures the who, though it does not obscure the means to contact the registrant, as the proxy service is supposed to provide a pass-through email address.) Those who don’t use proxies have their contact information published in plain text. However, automatically grabbing and parsing the information is non-trivial, due to the fragmented distribution of whois servers, the lack of data format standardization, and rate limiting imposed by registrars.
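To make the difficulty concrete, here is a minimal Python sketch. The protocol itself is trivial, a single line sent over TCP port 43 (RFC 3912), but the response is free-form text, which is what makes parsing at scale so hard. The server name shown is the registry whois server for .com; the field names in the parser are illustrative, since real records vary widely by registrar:

```python
import re
import socket

def whois_query(domain, server="whois.verisign-grs.com", timeout=10):
    """Fetch a raw whois record over TCP port 43 (the RFC 3912 protocol)."""
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall((domain + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def extract_field(record, field):
    """Naive extraction of a 'Field: value' line. Because whois output is
    unstructured free text, field names and layouts differ from registrar
    to registrar, so a parser like this breaks constantly in practice."""
    match = re.search(rf"^\s*{re.escape(field)}:\s*(.+)$", record,
                      re.IGNORECASE | re.MULTILINE)
    return match.group(1).strip() if match else None
```

Multiply this brittleness by hundreds of registrar-specific formats, then add per-IP rate limits, and bulk harvesting becomes genuinely expensive: exactly the partial obscurity described above.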

If you worry, as many people do, about the harvesting of whois records en masse for use by spammers or other criminals, this partial obscurity is a blessing. It makes it more difficult or “expensive” for criminals to do their work. For those of us working to identify malicious actors and correlate badware domains, or trying to automate the process of reporting compromised websites, though, the same obscurity is a curse. The same dichotomy will occur with most changes in data obscurity, including the introduction of Facebook Graph Search, which was used as an example in the article.

Hartzog and Selinger end their essay with the following call to action:

Obscurity is a protective state that can further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power. A major task ahead is for society to determine how much obscurity citizens need to thrive.

Taking into account the negative aspects of obscurity (or, put another way, the benefits of transparency), and the fact that there’s no one-size-fits-all solution, I’d amend their conclusion as follows:

Obscurity and transparency can each in its own way further a number of goals, such as autonomy, self-fulfillment, socialization, and relative freedom from the abuse of power. A major task ahead is for society to determine the balance of transparency and obscurity that citizens need in various aspects of their lives to thrive.

A no-win strategy

In the Jan. 11, 2013 issue of SANS NewsBites, editor Brian Honan writes:

It seems each time a zero day exploit is found in software, be that Java or otherwise, the industry pundits recommend that people stop using that software. New vulnerabilities will always be discovered in the software we use. If our best defence to a threat is to cause a denial-of-service on ourselves then this in the long term is a no-win strategy for us as an industry.

In this case, the advice was to disable the Java plugin in the browser, which, in fairness, is something many users could do without impact. Still, I couldn’t agree more with Honan’s comment. Here are a few other examples of common advice from security professionals and journalists that contradicts the way software is designed to be used:

  • Don’t click on links in email messages (even though email software automatically adds links to URLs for convenience).
  • Don’t install Android apps from anywhere other than the official app store(s) (thus negating one of the advantages of an open platform without a central gatekeeper).
  • Don’t click on ads (which most Web businesses depend upon for revenue).

As Honan says, we have to find a better way to protect our users and our systems than saying “don’t use technology the way it’s designed to be used.” His comment goes on to point to the CSIS Twenty Critical Security Controls, which are great guidelines for large and/or high security organizations. For consumers and small businesses, though, we’ll need to look to other answers: increased industry cooperation and law enforcement to reduce the threat, improved interfaces and signals to help users make safer choices, more secure architectures, and so on.

Patching really does work

I’ve been wanting to do a study like this for a long time. I’m glad someone else finally did:

The German Federal Office for Information Security (BSI) previously recommended that users should keep their Windows systems up to date, should ideally use Google Chrome and should avoid using Java at all if possible. The efficacy of these simple protection measures has now been demonstrated in a study carried out by the BSI. It used two different Windows systems to visit a total of 100 web sites hosting drive-by downloads (malicious code which spreads primarily by exploiting security vulnerabilities).

Unsurprisingly (to me, if not to certain security cynics), keeping your browser and plug-ins patched (and disabling Java if not in use) is effective in most instances:

The results speak for themselves, with the vulnerable system picking up 36 infections from visiting infected websites, whilst the system configured according to BSI recommendations picked up none.

The message is clear: teaching users a few basics, like how (and why) to keep their browser and plugins up to date, can dramatically reduce users’ risk of infection.

(hat tip to Denis Sinegubko for the link)

The unwarranted war on AV products

“Antivirus software a waste of money for businesses” crows the headline of a recent story, one of many missives against antivirus (AV) software driven by an outdated understanding of how such software works. The truth is that the death of AV tools’ effectiveness has been greatly exaggerated.

Traditionally, antivirus software was powered by signatures: digital fingerprints that uniquely identify malicious files or code snippets. The AV software on a computer would receive updates of its signatures once per day or week from the AV company, ensuring it could protect the user from the latest threats. The effectiveness of an AV product was determined primarily by the number of different signatures available and how quickly they were distributed. Tools like VirusTotal arose to make it easy to see which AV tool could detect a particular piece of malware. Product testing labs and tech journalists could load up a computer with a bunch of malware files and easily compare detection rates across products.
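As a rough illustration of that traditional model, here is a minimal Python sketch of pure signature matching, using a whole-file hash as the fingerprint. Real signatures are richer (byte patterns and code fragments rather than just file hashes), and the one entry below is the standard EICAR antivirus test file, not actual malware:

```python
import hashlib

# Toy signature database: SHA-256 digests of known-bad files. The sole
# entry is the digest of the standard EICAR antivirus test file.
SIGNATURES = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def is_known_malware(data: bytes) -> bool:
    """Flag a file only when its fingerprint exactly matches a signature.
    Changing a single byte produces a new digest, which is why malware
    that repacks itself on every download defeats pure signature matching."""
    return hashlib.sha256(data).hexdigest() in SIGNATURES
```

The weakness is visible right in the lookup: detection is all-or-nothing, and the defender’s database has to grow as fast as attackers can mutate their files.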

Today, everything has changed. Malware evolves far too quickly—sometimes even on a per-download basis—for AV products to depend on daily signatures. As one would expect, most of the major vendors have responded, dramatically increasing the frequency of signature updates and supplementing signatures with new approaches. Here are a few features that have become popular in major AV products recently:

  • If you download a file, the AV product will check it against a whitelist of known safe files. If it’s not on the whitelist and it doesn’t match a malware signature, the product will analyze the file’s behavior and/or reputation in real time (either on the computer or via the cloud) before allowing it to run.
  • If your browser connects to a website/URL known to distribute malware, the user will receive a warning and/or the browser will be blocked from downloading potentially harmful files.
  • If unknown software on your computer attempts to engage in a potentially harmful behavior (e.g., installing a new add-on in your browser), it will be blocked and/or the user will receive a warning.
  • If a web page or online ad attempts to exploit a vulnerability to install malware on your computer, the AV tool will block the attempt.

By layering several approaches (including the use of signatures) atop each other, today’s AV products protect users far more effectively than their predecessors. Unfortunately, many people, even in the security industry, are not aware of this evolution. It’s common to see articles like the one above that claim AV tools are still primarily signature-based and that use VirusTotal (which only assesses signature-based detection) as a gauge of AV effectiveness. In reality, this is like assessing the effectiveness of a building’s security system based only on its window alarms, while ignoring its motion detectors and cameras. When you look at tests that attempt to simulate real-world user behavior, such as visiting malicious websites and opening infected email attachments, it’s clear that AV is far more effective than the pundits claim. A recent set of studies by Dennis Technology Labs, for example, found that products prevented infection in between 53% and 100% of cases. The range of results shows that the more important discussion is about which tools and methodologies work best. (Some other important areas of comparison are false positive rates, the tools’ impact on system performance, and user experience.)
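The layered approach can be sketched as a simple decision chain. Everything here is hypothetical: the lookup sets stand in for the vendor-maintained reputation feeds, whitelists, and signature databases a real product would consult, often in the cloud:

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    ANALYZE = "analyze"  # hand off to behavioral/reputation analysis

# Hypothetical stand-ins for vendor-maintained data feeds.
BAD_URLS = {"http://malware.example/dropper.exe"}  # URL reputation feed
KNOWN_SAFE = {"goodhash1"}                         # whitelist of safe file hashes
SIGNATURES = {"badhash1"}                          # classic signature database

def check_download(file_hash: str, source_url: str) -> Verdict:
    """Run the cheap, list-based layers first; only files that pass every
    one of them fall through to real-time behavioral or cloud analysis."""
    if source_url in BAD_URLS:
        return Verdict.BLOCK       # layer 1: block known-bad sources
    if file_hash in KNOWN_SAFE:
        return Verdict.ALLOW       # layer 2: skip known-good files
    if file_hash in SIGNATURES:
        return Verdict.BLOCK       # layer 3: traditional signatures
    return Verdict.ANALYZE         # layer 4: unknown, so watch its behavior
```

Judging such a product by its signature layer alone, as VirusTotal-based comparisons do, measures exactly one of the four branches above.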

It’s time to end the unwarranted war on AV products. They may not be perfect—nothing is—but they do continue to earn their place alongside a range of other security measures in companies and homes alike.

Moving on from StopBadware

I recently made the difficult decision to step down as executive director of StopBadware. Though I didn’t start StopBadware—credit for that goes to John Palfrey, Jonathan Zittrain, and their collaborators—it has been my adopted baby for over five years now. What once was an energetic and chaotic Berkman Center project is now an independent (though still energetic and at times chaotic) nonprofit organization working together with many of the world’s greatest Web companies. I’m proud of the contributions I’ve made to StopBadware’s success, and I’m gratified that the organization has matured to a point that I can feel comfortable passing the reins to someone else. In fact, it’s not just that I feel comfortable doing so; I actually look forward to it. After five years, I think StopBadware will benefit from some fresh ideas and a new vision of what can be accomplished by leveraging the organization’s dynamic team, supportive partners, impressive board of directors, and positive reputation.

I believe the change will do me good, as well. During my time at StopBadware, I’ve built relationships with a lot of amazing people and learned from a boatload of mistakes (and the occasional success). I’m ready to take that experience into a new environment with new types of problems to solve. A reboot for my professional soul, if you will.

Some people have asked me where, specifically, I’m headed. I’m still exploring my options, but I do have some ideas of what I’m looking for. I know I want to remain in greater Boston, though I’m open to some travel. I’d like to make the best possible use of my experience leading a team and an organization. I enjoy building external relationships, public speaking, and otherwise interacting with people. Remaining in the security field would be ideal, though another area of interest is the intersection of technology and education. And, perhaps most of all, I want to feel good about what I’m contributing to my organization and what my organization is contributing to the world. Private sector? Nonprofit? Government? I’m open; it all depends on the fit.

Meanwhile, I’m not walking out the door at StopBadware until we’ve found a new executive director. Please, if you know a strong candidate, pass along the job description. And if you’re a strong candidate, let the Board’s search committee know why by sending a cover letter and resume to execsearch@stopbadware.org.

Privacy choice done well

I recently switched my ISP and cable provider from Comcast to Verizon. Yesterday, I received an email from Verizon describing some plans they have for facilitating geo-targeting of online ads. The email stood out to me as an example of privacy choice done well, for several reasons:

  • Verizon contacted me proactively by sending me an email, instead of expecting me to notice a change in the privacy policy or a note on my bill.
  • The email was clear and concise, explaining exactly what was planned and what the impact would be on me if I didn’t opt out (or if I did).
  • Opting out, if I so desired, simply required changing a setting in my online account.

Verizon should be commended for handling this new initiative in a way that demonstrates respect for their customers. Other service providers, Internet and otherwise, would do well to follow Verizon’s example.

Teaching “geek thinking”

In the past six months, I’ve become a bit addicted to a TV show called “Holmes Inspection.” In each episode, a family has a major problem—sometimes several—with their new home. Mike Holmes comes in, does a detailed inspection, points out everything the original home inspector missed, and then “makes it right” by fixing everything up properly.

Now, please understand, I know nothing about home repair or the building trades. I’m the last person you want to see with a hammer in his hand. But, as I’ve watched the show, something interesting has happened: I’ve started thinking a bit like a builder or home inspector. I may not know how to install attic vents, but the next time I’m in my attic, I guarantee I’ll look around to make sure the vents seem “right.” And I have some sense of what “right” means, even if I don’t know every intricacy to look for. Before, I wouldn’t have even thought about the vents.

The more I think about how we educate “the masses” about the effective, safe, and responsible use of technology, the more I think we have to focus on teaching “geek thinking.” Just as a good home inspector looks at a house differently than most of us, a geek experiences new technology and new technological challenges differently than most people. When a non-geek sees an error message, he thinks “what am I supposed to do now?” When a geek sees an error message, she thinks “how do I find out what this means and what I should do,” and she has a few basic strategies for finding the answer. It’s a different mindset, and it’s what we have to start exposing people to.

Learning geek thinking won’t make people into technology experts any more than watching Holmes Inspection has prepared me for working as a building contractor. But it will help them be more informed, prepared consumers of technology, and wouldn’t it be nice to have more of those in the world?