01 June 2009

Cyber Security

The Obama administration released the results of its Cyberspace Policy Review last week. The report's conclusions and recommendations aren't going to do any harm, but they're not going to solve the cyber-security problem either.

Start with the obvious: information security has failed, as a technology and as a discipline. A lot of security professionals object to this statement, but let's get real. Hundreds of millions of credit card numbers are stolen from retailers, processors, and other online properties every year. Foreign hackers roam the systems supporting major national defense projects. Spam, malware, and viruses circulate constantly despite the purchase and use of millions of dollars' worth of anti-malware tools. Serious penetration tests succeed essentially 100% of the time. The list goes on; the news is all bad, and it's on all the time.

The Cyberspace Policy Review team wants to fix this by building "next generation secure computers and networking for national security applications"; it also wants the government to "provide a framework for research and development strategies that focus on game-changing technologies." These are fine goals, but they're not going to solve the cyber-security problem.

If we eventually do solve the cyber-security problem, the cause of our success is almost certainly not going to be the new, smart things we start doing. It is going to be the old, dumb things we stop doing.

The dumbest thing we're doing right now, in a nutshell, is optimizing our systems for low cost, fast performance, and convenience in the average case. Designing systems this way requires tradeoffs, and those tradeoffs make the systems unsafe in the worst case.

We optimize for the average case because when we're using systems, we're almost always using them in the average case. Things are fine, and we want our systems to be cheap, easy, and fast (insert your own obvious joke here).

We almost never see the worst case, so we don't worry about it much. But that doesn't mean that the worst case isn't a big problem. We built New Orleans' levees for the average case; Hurricane Katrina destroyed them (Katrina, of course, wasn't even close to the worst case). We've built our cyber infrastructure, like New Orleans' levees, for the average case.

Unless we (the information security industry, the technology industry as a whole, and society generally) stop sacrificing worst-case safety on the altar of average-case convenience, we're going to continue to fail. But rebuilding cyberspace for a safe worst case is going to require sacrifices.

The Cyberspace Policy Review says "The national dialog on cyber-security must begin today." I agree. Let's start the dialog with a conversation about what sacrifices we're willing to make to achieve acceptable worst-case performance. Here are four questions to get the ball rolling:

Question 1: Are we willing to give anything up?

Not everything can be made secure, and securing some things makes them slower, or less convenient, or less flexible. If we really want security, we're going to have to give up some other things we want. We need to be clear about where security falls on our scale of priorities, we need to be clear about what type and magnitude of loss we're willing to sustain as a result of cyber-security failures, and we need to be clear about what we will have to give up in order to get the security we decide we need.

Are we willing to give up the ability to execute code downloaded from untrusted sources? Are we willing to give up instant enrollment in new services with no meaningful background checks? Are we willing to give up anonymity? Are we willing to give up connecting critical systems to the public internet, even if it means we've got to put human operators in more places to manage disconnected systems?

Question 2: Are we willing to do anything different?

For years the technology community and society generally have been headed down the path of using cheap, generic, general-purpose components for everything. The government calls this strategy "COTS" (Commercial Off-The-Shelf). The strategy is great for building a cheap, convenient average case. It's terrible for building a safe worst case.

Complicated general-purpose systems are impossible to secure. A complicated system sometimes does things its users and even its designers didn't expect. This makes complicated systems unsafe. But we keep using complicated general-purpose systems (Windows computers, for example) to handle security-critical and even safety-critical tasks. It's a recipe for disaster, but it's easy because complicated general-purpose systems are cheap, and they're easy to customize for new applications. Small, simple systems, carefully designed for one specific job, are much safer - but they're also more expensive to buy and more time-consuming to build and test.

Are we willing to build new, special-purpose tools to help us secure cyberspace? Are we willing to tell the general-purpose system vendors that we're not going to use their wares in critical applications?

Question 3: Are we willing to take any blame?

If your painkiller kills not just pain but patients' hearts, you pay the victims. If your Pinto turns into an external combustion engine, you pay the victims.

If your financial software leaks a couple hundred million credit card numbers to a hacker, you write a press release describing your commitment to security.

Are we, the information security community, willing to assume a duty of care to those who use our products? Are we willing to submit to liability when our products are defective or fail to meet a minimum standard of fitness for purpose?

Bruce Schneier has argued in favor of liability for security failures; in fact he's argued that we can't solve the problem without it.

Question 4: Are we willing to give any guarantees?

If your lawnmower doesn't cut grass, or if your printer doesn't load paper, you can take it back to the store and get your money back.

If your information security product doesn't keep hackers out of your corporate secrets, you're out of luck.

Are we, the information security community, willing to stand behind the performance of the products we build?

John F. Kennedy, in his inaugural address, promised the world that the United States would "pay any price, bear any burden, meet any hardship, support any friend, oppose any foe" to assure the survival of liberty. What, if anything, are we willing to do to assure information security?

6 Comments:

Blogger Unknown said...

I've often thought that the dumbest thing we do is try to protect massive amounts of information as "secret" rather than reduce the value of the information. I want to be able to put my CC# on a million websites and have a very, very limited chance of fraud impacting me (or the shareholders of my bank/credit union). I want to be able to tell everyone my SS# and not fear negative consequence.

But doing so involves changing how we currently view the problem and adopting a different kind of solution - one that is institutional and not beholden to the empty promise of technology silver bullets.

June 02, 2009 1:57 PM  
Blogger D said...

I strongly suspect that as a society we are not willing to sacrifice much to improve worst-case security, and we'll continue to optimize for the average (or, well, the one- or three-sigma) case. Effectively, things are working "well enough" if commerce and daily life continue to function, regardless of how bad things obviously are from our (the security professionals') point of view.

I don't think there will be any fundamental changes in this area. But the tradeoff can be nudged a bit in the directions that we like through, as you suggest, mechanisms of accountability and blame. If we can persuade the system to push some of the consequences of insecure systems back on the system makers, so that no single maker can safely undercut the others by putting less effort into security, we might be able to make things more secure.

I think expecting anything more radical than that, in the way of societal behavior change, is a nice fantasy :) but not much more than that. Nice as it would be!

June 02, 2009 2:30 PM  
Blogger Bob said...

@alex, using an identifier as an authenticator is indeed a dumb practice; neither your credit card number nor your SSN is a secret, and neither should be treated as such.

I'll argue that the use of CC# and SSN as authenticators is just another example of optimizing for average-case convenience.

Most of the time, someone who's calling in to a call center is the real cardholder; it's quick and easy to ask the real user for an easily remembered piece of information that other casual users are unlikely to know, in order to get some small assurance that the caller is genuine.

But a determined attacker is not another casual user; he's going to take the trouble to learn your non-secret SSN or credit card number.

The determined attacker is the worst case, and the protocol hasn't been designed to prevent him from doing his dirty work - because a protocol which would do that would be expensive for the call center and inconvenient for you - in the average case.

June 02, 2009 2:43 PM  
Blogger Lydia Fiedler said...

1) Funny that the Blakley jets all scrambled on that announcement. My antennae have been bristling and I've been growling various speeches under my breath ever since.

2) I agree with you & @alex - it's not a secret - let's fling them around fearlessly.

3) Here's an excellent use for waterboarding. Let's drown the little f-ers when we catch them. That's just for fun though, not as a solution.

4) The answer to your question about sacrifice is one the government is never going to ask people to answer. They are going to tell people to climb into the big feel good snuggie of the nanny state and tell everyone everything's fine - Big Daddy's got it. You keep handing over your passwords and contact lists to those facebook apps that want you to display what kind of pantiliner you're most like...

Now I'm mad again. Thanks a bunch, bro. :D

June 02, 2009 4:57 PM  
Blogger Unknown said...

So, I agree. But, just to be a bit contrarian, perhaps part of the problem is that in the worst case, the systems keep working AS IF they were still operating under the average case.

Not sure it is a valid comparison, but biological systems detect, seek, and eliminate intruders using a "bottom up" approach, from the cellular level up to a specialized system of cells especially built to combat the threat. For better or worse, our brains don't have to be told we've got the flu; the brain figures it out from the negative effect it has on the body. The specialized system wreaks havoc on all the other systems, and that gets your attention.

We've engineered our systems to feel no pain. Consequently, we are not aware there is a problem until it is too late. Just like our more inconsiderate co-workers, by the time our brain is engaged enough to say, "duh, things are not going so well" we've gotten all our friends sick...

June 02, 2009 9:37 PM  
