Base security should not cost extra

Microsoft’s release yesterday of Office 365, a suite of cloud-based services for small and mid-sized businesses (SMBs) and enterprises, has been getting a lot of attention in the press. One egregiously poor decision by Microsoft, however, has not been talked about quite as much. Ed Bott over at ZDNet picked up on it:

If you sign up for one of the Office 365 Enterprise plans, all your users can connect to SharePoint using secure (HTTPS) connections. If you have a Professional (small business) plan, you don’t get that capability. For a small business that deals with sensitive documents, that’s a potentially dangerous configuration.

Let me be very clear about this: baseline security is not optional, and it shouldn’t be sold as a value-added feature. Microsoft claims “Office 365 comes with the robust security and reliability you need to run your business, all for $6 per user per month.” Offering SharePoint, which provides businesses with document sharing and other intranet capabilities, without protecting the confidentiality of data transmitted to and from the server is not “robust security.”
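
For anyone evaluating a hosted service, it takes only a few lines to see whether a given URL answers over TLS at all and where a plain-HTTP request ends up. Here’s a minimal sketch in Python; the hostname is a made-up placeholder (I’m not probing anyone’s real tenant), so substitute the address you actually care about:

    import socket
    import ssl
    from urllib.parse import urlparse
    from urllib.request import urlopen

    # Placeholder address -- substitute the actual URL of the hosted service
    # you are evaluating.
    URL = "http://sharepoint.example-tenant.invalid/Shared%20Documents"

    def check_transport(url):
        """Report where a plain-HTTP request ends up, and whether the host
        accepts TLS connections at all."""
        # Follow any redirects and see which scheme we land on.
        try:
            final_url = urlopen(url, timeout=10).geturl()
            scheme = urlparse(final_url).scheme.upper()
            print(f"Plain-HTTP request ended at {final_url} ({scheme})")
        except OSError as exc:
            print(f"HTTP request failed: {exc}")

        # Independently, attempt a TLS handshake on port 443.
        host = urlparse(url).hostname
        try:
            with socket.create_connection((host, 443), timeout=10) as sock:
                context = ssl.create_default_context()
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    print(f"TLS handshake with {host} succeeded ({tls.version()})")
        except OSError as exc:
            print(f"No usable TLS on {host}: {exc}")

    if __name__ == "__main__":
        check_transport(URL)

Passing either check doesn’t make a service secure, of course, but failing them is exactly the kind of red flag this pricing decision bakes in for small-business customers.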

One has to wonder what other security trade-offs Microsoft made with Office 365 that are not yet evident. Until Microsoft demonstrates that it really stands behind its “robust security” claims, SMBs would be well advised to avoid this new offering.

Android users don’t swim in cesspools

Over at InfoWorld, Galen Gruman posted a column with the ludicrous title “Android is a malware cesspool—and users don’t care.” The premises of the column are so flawed that I almost didn’t bother writing a response. But because they reflect ideas I’ve heard and seen elsewhere, I felt it was worth commenting.

First, Android is not a “malware cesspool.” Yes, it’s a popular open platform, and the combination of openness and relatively little effort by Google to “tend the garden” has led to some malware popping up. But we’re talking about a few dozen malicious apps out of a few hundred thousand available. And, while some have lingered in the Android Market longer than they should have, they do eventually get removed and, in some cases, even uninstalled from users’ phones. Though accurate numbers are difficult to find and compare, every indication I’ve seen says that malware is far less prevalent on Android phones than on Windows PCs at this point. And, even with years of malware problems behind it, most people don’t consider Windows a cesspool.

The idea that “users don’t care” is also off base. Of course they do. No one wants to have his money stolen or his phone ruined. The evidence Gruman provides to bolster his case is that users don’t carefully scrutinize applications’ permissions before installing, and that they haven’t flocked to download a specific app that aims to warn them about dangerous apps. Even if we fully (and foolishly) assumed that users were aware of all the potential risks and how best to protect themselves, neither of these arguments holds up.

Android’s permissions feature provides broad, confusing information about what an application is requesting permission to do once installed. There’s no easy way to get further information, or to discuss with other users or the vendor any questions that arise when deciding whether to accept the permissions. And users have to make a cost-benefit trade-off: do I spend the time and thought to scrutinize these permissions, on the chance that they’ll make me think twice about an app I already want to install? Even if a user cares about the risk, it may be rational to skip or skim the permissions and then click “install.”
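
For the technically inclined, the requested permissions are at least inspectable before installation. Here’s a rough sketch, assuming the Android SDK’s aapt build tool is on your PATH and you have the APK file on hand; the file path and the loose parsing are illustrative assumptions, since aapt’s output format varies between versions:

    import subprocess
    import sys

    def requested_permissions(apk_path):
        """List the uses-permission entries declared in an APK's manifest.

        Relies on the Android SDK build-tools utility "aapt"; its output
        format differs slightly across versions, so parsing is kept loose.
        """
        output = subprocess.run(
            ["aapt", "dump", "permissions", apk_path],
            capture_output=True, text=True, check=True,
        ).stdout
        permissions = []
        for line in output.splitlines():
            line = line.strip()
            if line.startswith("uses-permission"):
                value = line.split(":", 1)[1].strip()
                # Newer aapt prints name='...'; older versions print the bare name.
                if value.startswith("name="):
                    value = value[len("name="):].strip("'")
                permissions.append(value)
        return permissions

    if __name__ == "__main__":
        # Usage: python list_permissions.py SomeApp.apk
        for permission in requested_permissions(sys.argv[1]):
            print(permission)

Even so, a list of raw names like android.permission.READ_CONTACTS doesn’t tell an ordinary user whether the request is reasonable for this particular app, which is exactly the problem.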

As for security apps, Gruman first argues that mobile security apps are mostly a waste, and then bemoans the fact that users haven’t expressed interest in a particular one. Again, even if we assume that users are aware that a security app might help them, they may be unfamiliar with this product, its vendor, and how well it would actually protect them.

Of course, the assumption that users know the full range of risks and threat vectors doesn’t hold up, either. Part of the implication embedded in “users don’t care” is that they know about these risks and just aren’t interested in avoiding them. The reality, though, is much different. Many users have no idea what might happen, how likely it is to happen, what they can do to reduce the risks, how to prioritize those defenses, etc. Heck, I’m not sure I know the answers to all those questions, and I do this stuff for a living!

I should give Gruman credit for one great point towards the end of his column. He emphasizes that user education shouldn’t just come in the form of lectures, but also in the form of in-your-face intervention. He gives the example of an IT department trying to phish its own users, then letting the victims know when they’ve fallen for it. This “just in time,” real-world learning can be very effective, and can help users comprehend the risks, which leads to better decisions. And that, of course, only works because users do care.

Driving good user behavior

“It’s the customer’s fault. He’s not using the product the way he’s supposed to.”

What a frustrating thing to hear. Fortunately, when I heard this today, it was being quoted by an executive who was fighting the use of such statements by other employees within his company. He recognized the truth of things: if your product allows people to do something they shouldn’t do, some of them will do it. And if you don’t want them complaining about the effects of doing it, you need to change your product’s design.

For example, it used to be the case that car cigarette lighters (now known as 12V sockets) continued to supply power when the car wasn’t running. Guess what happened? People left things plugged in, which drained the car’s battery and left them unable to start the car a day or a week later. Sure, you can blame people for leaving things plugged in, but come on… who hasn’t left something in the car on occasion? The better solution, which most car companies have fortunately adopted by now, is to power off the 12V sockets when the car is turned off. That way, the battery can’t be killed by a predictable user behavior. Instead of blaming the customer, change the design!

The Safari web browser has an option to automatically open “safe” files after downloading. The recent Mac Defender family of scareware exploited this by arriving as a file type Safari deems “safe,” so it opened automatically for users with this option enabled. You could argue that Mac users concerned about security shouldn’t have left the option checked. But, really, why is this option there in the first place? Apple’s own safety tips page urges users:

Always use caution when opening (such as by double-clicking) files that come from someone you do not know, or if you were not expecting them. This includes email attachments, instant messaging file transfers, and other files you may have downloaded from the Internet.

If you want users to think twice about opening files, and you know that an automated system can never be certain which files are “safe,” why offer an option to automatically open whatever the browser has downloaded? It’s certain that plenty of Safari users without a good sense of what they should and shouldn’t open will leave this option enabled, and the outcome is entirely predictable. Design flaw.
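
For what it’s worth, you can check the setting from the command line instead of digging through Safari’s preferences. Here’s a small Python sketch that shells out to macOS’s defaults tool; the preference key name (AutoOpenSafeDownloads) is my best guess based on the Safari builds of this era, so verify it on your own machine before relying on it:

    import subprocess

    SAFARI_DOMAIN = "com.apple.Safari"
    # Assumed key name for the "Open 'safe' files after downloading" checkbox.
    AUTO_OPEN_KEY = "AutoOpenSafeDownloads"

    def auto_open_enabled():
        """Return True if Safari is set to auto-open downloaded 'safe' files."""
        result = subprocess.run(
            ["defaults", "read", SAFARI_DOMAIN, AUTO_OPEN_KEY],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            # The key is absent until the user touches the checkbox; Safari
            # ships with the option on, so treat "missing" as enabled.
            return True
        return result.stdout.strip() == "1"

    if __name__ == "__main__":
        state = "ON" if auto_open_enabled() else "OFF"
        print(f"Open 'safe' files after downloading: {state}")

Flipping it off should be just as easy (defaults write com.apple.Safari AutoOpenSafeDownloads -bool false), though the better fix would be for the option not to exist at all.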

Nudging users to better security

I’ve recently started reading Nudge by Richard H. Thaler & Cass R. Sunstein. The basic idea of the book is that governments, businesses, and others that present choices to individuals can—and in some cases should—arrange those choices in a way that encourages individuals to choose what is in their own best interests.

Here’s the definition of nudge in the authors’ own words:

A nudge, as we will use the term, is any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives.

They go on to talk about defaults as a powerful form of nudge. A classic example is enrolling new employees into a company’s 401(k) retirement plan by default, which encourages them to save for retirement. (The more common arrangement is the reverse: employees are not enrolled unless they opt in.)

There are clear applications of this concept in the security world. A paper last year showed that users are more likely to have the latest version of a web browser installed if they use a browser like Firefox or Chrome that updates automatically by default than if they use a browser that requires additional steps to update. Similarly, Adobe recently changed the default for new installations of Reader from merely checking for updates to actually installing them.
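
To make the point concrete, here’s a toy sketch, not any real vendor’s updater (the policy fields and version numbers are invented), showing why the shipped default decides the outcome for the many users who never open a settings screen:

    from dataclasses import dataclass

    @dataclass
    class UpdatePolicy:
        check_for_updates: bool = True
        install_automatically: bool = True  # the nudge: secure unless the user opts out

    def apply_updates(policy, installed, latest):
        """Return the version the user ends up running under a given policy."""
        if not policy.check_for_updates or installed == latest:
            return installed
        if policy.install_automatically:
            return latest  # the fix arrives with no extra steps
        print(f"Version {latest} is available; open Preferences to install it.")
        return installed

    # Same user, same available update -- only the default differs.
    print(apply_updates(UpdatePolicy(), "10.0.1", "10.1.0"))
    print(apply_updates(UpdatePolicy(install_automatically=False), "10.0.1", "10.1.0"))

With the second policy, staying current depends on the user noticing the prompt and acting on it, which, as the browser study suggests, many never do.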

There is more work that can be done here. Plenty of apps still fail to even check for updates by default, even though known security issues in outdated apps are one of the leading avenues for malware infection. More browsers could default to protecting against cross-site scripting vulnerabilities. More wireless routers could default to a secure configuration. And so on.

Of course, remember that a nudge is supposed to serve the individual’s own interests. That implies that defaults should be what most people would rationally choose if they had all the relevant information. We’ve seen cases of badware that defaults to bundling some irrelevant (or, worse, privacy-invading) piece of additional software without clear notice. Even legitimate software like Chrome may push things a little far with its completely silent updates, delivered with no clear disclosure, that not only patch security vulnerabilities but also change the browser’s functionality. Is that actually what users would want if they had all the information?

Every product manager at a hardware or software company should read Nudge, if only to help them think through how to configure defaults to maximize security and other benefits for their products’ users.

Schneier’s TED Talk

In my last post, I alluded to our brains being hardwired to assess risk, but not always doing the best job of it, especially online. In his recent talk at TEDxPSU, Bruce Schneier spoke about how we should think about security, and how that sometimes differs from how we do think about it.

How people think about PC security

I recently had the privilege of reading “Folk Models of Home Computer Security,” a great paper by Rick Wash, an assistant professor at Michigan State. Here’s his abstract:

Home computer systems are frequently insecure because they are administered by untrained, unskilled users. The rise of botnets has amplified this problem; attackers can compromise these computers, aggregate them, and use the resulting network to attack third parties. Despite a large security industry that provides software and advice, home computer users remain vulnerable. I investigate how home computer users make security-relevant decisions about their computers. I identify eight ‘folk models’ of security threats that are used by home computer users to decide what security software to use, and which security advice to follow: four different conceptualizations of ‘viruses’ and other malware, and four different conceptualizations of ‘hackers’ that break into computers. I illustrate how these models are used to justify ignoring some security advice. Finally, I describe one reason why botnets are so difficult to eliminate: they have been cleverly designed to take advantage of gaps in these models so that many home computer users do not take steps to protect against them.

The brilliance of the paper is its insight into how people actually make decisions about their PCs. Report after report has highlighted what people do (or fail to do), but not why they do it. Wash points out that his sample was small, so his findings may not represent the whole population. Still, it’s reasonable to assume that many users have mental models similar to those of the individuals Wash interviewed.

I do wish that he hadn’t steered away, at times explicitly, from a natural conclusion: that helping people understand the reality of cybercrime might change their models and thus their behavior. That said, his own conclusion is well taken: those of us who want to change users’ behavior have to meet users where they are, not where we wish they would be.

Hat tip to Bruce Schneier for his mention of this paper.

Two billion and counting

According to a report by the United Nations, the number of Internet users globally exceeded two billion a few months ago. It is clear to me, as someone who works with badware, that the bulk of those two billion users are ill-prepared to navigate the online world safely and securely. That’s really no surprise, when you think about it. We humans are genetically programmed to survive in the physical world, or at least that of our ancestors. We instinctively flee from stronger predators, become uneasy if someone looks “shifty,” and pull our hands away from a flame when it starts to burn.

As Bruce Schneier has written about extensively, our hard-wired tendencies do not always help us make the best decisions in assessing risk generally, or online in particular. They are, after all, mostly designed to keep us safe from immediate harm in the physical world.

Beyond this genetic programming, we all learn how to navigate the world as we grow up. Parenting, teaching, media, social cues and our own experience help guide us in our learning.

Navigating safely online is simultaneously less complex and more complex than doing so in the “real” world. The distinction is an artificial one, in one sense, as the Internet is integral to the daily fabric of many of our lives. Still, there’s nothing in our genetic programming or in the experience of most people alive today that has prepared them for protecting their computers, deciding where to click, or choosing which software to install.

So, what can we do—where “we” might include the technology industry, government, educators, parents, society—to help equip over two billion Internet users to make safer, more secure choices? What can (or must) change in how we educate, how we design user interfaces, how we signal danger, how we govern—to compensate for the instinctual cues and shared cultural experience that we lack in cyberspace?

I think a lot about these questions. I started this blog in part to give me a place to share thoughts, conversations, further questions, relevant resources, and hopefully an occasional answer. I hope others interested in this subject will engage, as well, whether through their own blogs, comments here, Twitter, or other avenues.