The Obama administration released the results of its Cyberspace Policy Review last week. The report's conclusions and recommendations aren't going to do any harm, but they're not going to solve the cyber-security problem either.
Start with the obvious: information security has failed, as a technology and as a discipline. A lot of security professionals object to this statement, but let's get real. Hundreds of millions of credit card numbers are stolen from retailers, processors, and other online properties every year. Foreign hackers roam the systems supporting major national defense projects. Spam, malware, and viruses circulate constantly despite the purchase and use of millions of dollars' worth of anti-malware tools. Serious penetration tests succeed essentially 100% of the time. The list goes on; the news is all bad, and it's on all the time.
The Cyberspace Policy Review team wants to fix this by building "next generation secure computers and networking for national security applications"; it also wants the government to "provide a framework for research and development strategies that focus on game-changing technologies." These are fine goals, but they're not going to solve the cyber-security problem.
If we eventually do solve the cyber-security problem, the cause of our success is almost certainly not going to be the new, smart things we start doing. It is going to be the old, dumb things we stop doing.
The dumbest thing we're doing right now, in a nutshell, is optimizing our systems for low cost, fast performance, and convenience in the average case. Designing systems this way requires tradeoffs, and those tradeoffs make the systems unsafe in the worst case.
We optimize for the average case because when we're using systems, we're almost always using them in the average case. Things are fine, and we want our systems to be cheap, easy, and fast (insert your own obvious joke here).
We almost never see the worst case, so we don't worry about it much. But that doesn't mean that the worst case isn't a big problem. We built New Orleans' levees for the average case; Hurricane Katrina destroyed them (Katrina, of course, wasn't even close to the worst case). We've built our cyber infrastructure, like New Orleans' levees, for the average case.
Unless we (the information security industry, the technology industry as a whole, and society generally) stop sacrificing worst-case safety on the altar of average-case convenience, we're going to continue to fail. But rebuilding cyberspace for a safe worst case is going to require sacrifices.
The Cyberspace Policy Review says "The national dialog on cyber-security must begin today." I agree. Let's start the dialog with a conversation about what sacrifices we're willing to make to get acceptable worst-case performance. Here are four questions to get the ball rolling:
Question 1: Are we willing to give anything up?
Not everything can be made secure, and securing some things makes them slower, or less convenient, or less flexible. If we really want security, we're going to have to give up some other things we want. We need to be clear about where security falls on our scale of priorities, we need to be clear about what type and magnitude of loss we're willing to sustain as a result of cyber-security failures, and we need to be clear about what we will have to give up in order to get the security we decide we need.
Are we willing to give up the ability to execute code downloaded from untrusted sources? Are we willing to give up instant enrollment in new services with no meaningful background checks? Are we willing to give up anonymity? Are we willing to give up connecting critical systems to the public internet, even if it means we've got to put human operators in more places to manage disconnected systems?
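To make the first of those trade-offs concrete, here is a minimal sketch of what giving up "download and run" looks like in practice. It's illustrative only: the `run_if_approved` helper and the placeholder digest are made up for this example, and no real product works exactly this way. The idea is that the machine executes only programs whose fingerprints appear on a vetted allow-list and refuses everything else.

```python
import hashlib
import subprocess
import sys

# Hypothetical allow-list: SHA-256 digests of the only programs this machine
# may execute. In practice the list would be maintained and distributed out of
# band, not hard-coded; the digest below is a placeholder, not a real hash.
APPROVED_DIGESTS = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def run_if_approved(path: str) -> int:
    """Execute the program at `path` only if its digest is on the allow-list."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest not in APPROVED_DIGESTS:
        raise PermissionError(f"{path} is not on the approved list; refusing to run it")
    return subprocess.run([path]).returncode

if __name__ == "__main__":
    sys.exit(run_if_approved(sys.argv[1]))
```

The convenience cost is obvious: nothing runs until someone has reviewed it and added its fingerprint to the list. That's exactly the average-case price of a safer worst case.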
Question 2: Are we willing to do anything different?
For years the technology community and society generally have been headed down the path of using cheap, generic, general-purpose components for everything. The government calls this strategy "COTS" (Commercial Off-The-Shelf). The strategy is great for building a cheap, convenient average case. It's terrible for building a safe worst case.
Complicated general-purpose systems are impossible to secure. A complicated system sometimes does things its users and even its designers didn't expect. This makes complicated systems unsafe. But we keep using complicated general-purpose systems (Windows computers, for example) to handle security-critical and even safety-critical tasks. It's a recipe for disaster, but it's easy because complicated general-purpose systems are cheap, and they're easy to customize for new applications. Small, simple systems, carefully designed for one specific job, are much safer - but they're also more expensive to buy and more time-consuming to build and test.
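A toy contrast, purely illustrative and not drawn from any real system, shows why the trade-off matters. The general-purpose version is cheaper to write and handles anything, which is precisely why its worst case is unknowable; the special-purpose version does one job and nothing else.

```python
# General-purpose: evaluate whatever expression arrives. Cheap, flexible, and
# convenient in the average case, and capable of doing things its designer
# never anticipated, which is the security problem in miniature.
def compute_general(expr: str):
    return eval(expr)  # "2+2" works; so does "__import__('os').system('rm -rf /')"

# Special-purpose: accept exactly one operation, the sum of two numbers.
# Slower to build, inflexible, and boring, but its worst-case behavior can
# actually be enumerated and tested.
def compute_sum(expr: str) -> float:
    left, sep, right = expr.partition("+")
    if sep != "+":
        raise ValueError("only 'a+b' is supported")
    return float(left) + float(right)
```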
Are we willing to build new, special-purpose tools to help us secure cyberspace? Are we willing to tell the general-purpose system vendors that we're not going to use their wares in critical applications?
Question 3: Are we willing to take any blame?
If your painkiller kills not just pain but patients' hearts, you pay the victims. If your Pinto turns into an external combustion engine, you pay the victims.
If your financial software leaks a couple hundred million credit card numbers to a hacker, you write a press release describing your commitment to security.
Are we, the information security community, willing to assume a duty of care to those who use our products? Are we willing to submit to liability when our products are defective or fail to meet a minimum standard of fitness for purpose?
Bruce Schneier has argued in favor of liability for security failures; in fact he's argued that we can't solve the problem without it.
Question 4: Are we willing to give any guarantees?
If your lawnmower doesn't cut grass, or if your printer doesn't load paper, you can take it back to the store and get your money back.
If your information security product doesn't keep hackers out of your corporate secrets, you're out of luck.
Are we, the information security community, willing to stand behind the performance of the products we build?
John F. Kennedy, in his inaugural address, promised the world that the United States would "pay any price, bear any burden, meet any hardship, support any friend, oppose any foe" to assure the survival of liberty. What, if anything, are we willing to do to assure information security?