Kim's First Law
Kim Cameron's First Law of Identity (User Control and Consent) says:
Technical identity systems must only reveal information identifying a user with the user's consent.
Let's start by stipulating that this isn't a "law". In the technical world, laws describe things as they necessarily are, rather than as we want them to be. In these terms, Kim has stated a requirement, rather than a law. You can see this at a glance just by examining the grammar: "User Control and Consent" is in the imperative mood ("Technical identity systems must only reveal..."), whereas scientific laws are in the indicative mood ("A body in motion remains in motion"). It doesn't make any sense to ask whether a requirement is "true" or "false", but we can talk about whether "User Control and Consent" is feasible and whether it's desirable.
Owning Identities
We'll have to talk about several cases, because Identity is Subjective. There are lots of versions of your identity out there, but we'll lump them into two broad categories: your reputation (the story others tell about you) and your self-image (the story you tell about yourself). For each category, we'll talk about "Control" and then we'll talk about "Consent".
Owning My Story About You
Your reputation is my story about you. You can't own this, by definition; as soon as you own it, it's no longer my story about you; it instantly becomes an autobiography instead of a reputation.
Control
In principle, you could "Control" my story about you, but there are all sorts of good reasons you shouldn't be given this control.
At least in public, and at least in the USA, I'm free to tell my story about you. There are exceptions, of course, but they're quite limited. The rules are similar elsewhere in the world, and you really don't want to change these rules. Let's say you want to stop me from talking about you (maybe you don't want me to talk about what diseases you have, or how much you spent on beer last month). You'd start by getting rid of this rule:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
...because as long as the First Amendment is around, I can tell anyone I want how much you spent on beer last month, and (as long as I'm telling the truth) you're just going to have to suck it up. Once our constitutional rights are out of the way, you'll still have to clean up little annoyances like this:
Copyright in a work protected under this title vests initially in the author or authors of the work.
US Code Title 17, Chapter 2, Section 201(a) (Copyright) actually is a law - though not a scientific law - and it's not compatible with User Control and Consent. Copyright says Michael Moore owns this story about George W. Bush. Because Moore owns the story, he, not Bush, controls its publication and distribution. It's not a story Bush likes, so if Bush did control it, you wouldn't get to see it.
After you've gotten rid of the laws that prevent people controlling their identities, you'll have to decide what to put in their place. If your goal is to stop me from telling the truth about you once I know it, and you want to stick with proven techniques, you might decide on something like this.
But then you'll run into another inconvenience - to enforce your control over whether and how I tell your story, you have to watch me all the time, to make sure I'm following the rules. And of course, I have to watch you all the time, to control your use of my identity. If your point was privacy in the first place, you might be getting worried about how much surveillance is creeping into the solution.
It's clear that this path is not leading us anywhere we want to go, and it's also clear why. Applying "User Control" to my story about you requires the government to give you authority over me - or to exercise authority over me on your behalf. Letting you control me this way creates fundamental conflicts with other values in a free society. This may be one of the reasons it's so hard to find a "right to privacy" in the founding documents of the United States - it's not just that the government finds it convenient to look into your affairs; it's also that enforcing a "right" to privacy requires the government to look into your affairs on behalf of others far too often.
Consent
If you can't get a workable privacy regime by giving yourself the right to control my behavior, you might want to get one by making me "Consent" to control my own behavior before you let me see your private information. At first blush, a Consent regime doesn't seem to create conflicts between fundamental rights (e.g. my right to free speech vs. your right to privacy) the way a Control regime does. By moving privacy into the realm of contract law, a Consent regime allows me to "opt out" of receiving your information from you if I don't like your rules for the use of that information, so it appears to preserve my autonomy.
In practice, though, a Consent regime quickly becomes a Control regime, because the consent relationship doesn't involve the right set of parties.
If a third party (say a business) asks you for information about yourself, there's a reasonable chance you'll make good decisions about what uses to consent to and what uses to prohibit. But remember: we're talking about my story, not your story; you're not involved in the telling of the story - I'm telling it to the third party. If a business asks me for information about you, I don't have any reason to withhold consent for any use whatsoever. This was one of the issues in the JetBlue disclosure; the government asked Acxiom for information about a bunch of individuals who weren't around when their stories got told.
Imposing a requirement that I get in touch with you and ask for consent whenever a situation like this arises drags Control back into the equation; it makes me ask for your permission every time somebody wants me to tell my story about you. This might be reasonable if all my information about you originates from you - but if that were true, we'd be talking about your story about you, not my story about you.
Imagine what happens in a Consent regime if I want to give "Conglomerex" a piece of information about you. If you gave me the information in the first place, you're free to tell me what I can and can't do with the information, and you're free to withhold the information if I don't agree. You're also in a position from which it's possible to impose a Consent rule: since I have to get in touch with you to collect the information, we can have a discussion about consent while I'm doing the collecting, or (if I'm not sure how I'm going to have to use the information) I can ask how to get in touch with you to get your consent when I need it. But if somebody else gave me the information, or if I discovered it myself, then I certainly don't have your consent, I may not have any way to contact you to get your consent, and I may object to getting your consent, on the grounds that you have no right to constrain my liberty.
Owning Your Story About You
Your self-image is your story about you.
Control
In principle, controlling the information that makes up your self-image seems easy - you just choose what you tell to whom, and under what conditions. You can indeed decide on any rules you like for distributing identity information about yourself, but you have to make tradeoffs to enforce those rules.
You value your privacy, of course, but you also value other things, like the ability to get a credit card and the ability to travel on airplanes.
If you want to get on an airplane, you have to show ID, and all the acceptable forms of ID display your home address. This means you have to make a choice between getting on an airplane and keeping your home address to yourself. If you want to establish credit, you have to submit to a credit check, which reveals a lot more about you than just your home address. Here again, you have a choice between getting the credit and controlling information about yourself - if you want the credit, you have to give up information somebody else chooses, and you have to do it on somebody else's terms.
So while you might control your self-image information in principle, in practice you can't really exercise control - you have to negotiate the terms for the use of your information.
I'll note in passing that letting individuals control the use of information about themselves violates Kim's own criterion that "Claims need to be subject to evaluation by the party depending on them" - because individual control over information gives the individual the unfair advantage of being able to keep adverse information about himself secret.
Consent
Negotiating the terms on which you will disclose self-image information is what Consent is all about.
In many cases there are laws and regulations constraining what an organization can do with information it collects about you in situations like this, but you don't control the content of those laws and regulations - so you're not making the rules (and in fact the interests of society and the interests of corporations influence the content of laws and regulations at least as strongly as the interests of individuals).
If you want to control your identity based on consent, you have to decide between two approaches:
- Build one set of terms which covers all uses of your information, and let an automated system take care of negotiating your terms and enforcing your rules. In this case, you need to figure out in advance what all the possible scenarios for use of your identity are, and write a policy which covers each scenario.
- Negotiate terms manually each time someone asks for your information. In this case, you need to get notified each time someone tries to use your identity, and make a decision about whether or not to grant consent.
Case 1 clearly isn't going to work all the time; you can't know in advance what benefits are going to be offered in exchange for identity information, and you can't know in advance what risks are going to be created by giving that information out - so no matter what your policy is, there will always be cases it doesn't handle correctly. This means there will be lots of exceptions to your policy, and when these exceptions arise you'll have to fall back on case 2.
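To make that failure mode concrete, here's a minimal sketch in Python of what a case 1 consent engine might look like. Everything in it - the `POLICY` table, the `Request` record, the rule names - is invented for illustration, not taken from Kim's paper or any real system. The structural point is the default: every scenario the policy author didn't anticipate falls through to "ask_user", which is exactly case 2.

```python
# Hypothetical sketch of a "case 1" consent engine: a fixed policy,
# written in advance, evaluated automatically for each request.
# All names, rules, and parties here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Request:
    requester: str   # who wants the information
    attribute: str   # which piece of identity information
    purpose: str     # what they say they'll use it for

# The policy: the (attribute, purpose) pairs you thought of in advance.
POLICY = {
    ("home_address", "shipping"): "allow",
    ("home_address", "marketing"): "deny",
    ("credit_score", "loan_application"): "allow",
}

def evaluate(request: Request) -> str:
    """Return 'allow' or 'deny' if the policy covers this request,
    or 'ask_user' if it doesn't -- the fallback to manual consent."""
    return POLICY.get((request.attribute, request.purpose), "ask_user")

# A scenario the policy anticipates: handled automatically.
print(evaluate(Request("Conglomerex", "home_address", "shipping")))    # allow

# A scenario nobody anticipated: falls through to case 2.
print(evaluate(Request("Conglomerex", "home_address", "fraud_check"))) # ask_user
```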
Case 2 doesn't really work either. We know because we've tried it. Look here, or here, or here, or here for examples of what you're already being asked to consent to. How well do you understand these terms? How likely are you to take the time to clear up the things you're not sure about? How likely are you to say "no"?
The forces at work here are obscurity, coercion, and burdens.
Obscurity exists not because the people who write consent agreements are trying to confuse you, but instead because of the third axiom of identity:
IDENTITY ALLOCATES RISK
Let's imagine that you don't have a lot of money, and you have a poor credit history, but you're facing a big expense. You go to the bank and ask for a loan. In this case, there's a fairly big risk that you won't be able to make the payments if you get the loan. The interesting question in this case is "who suffers the consequences of this risk?" If the bank gives you the loan, they suffer the consequences - because they're out the money if you don't pay it back. If the bank denies the loan application, you suffer the consequences - because you can't meet your expenses. The bank makes its decision based on a piece of identity information: your credit score. If the score is high, you get the loan, and the bank gets the (small) risk; otherwise, you don't get the loan and you get the (larger) risk. Your credit score allocates the risk either to you or to the bank.
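As a toy sketch of that allocation (the threshold and labels are invented for illustration, not a real underwriting model), the bank's decision rule might look like this:

```python
# Toy sketch of risk allocation by a credit score. Invented numbers,
# not a real lending model. The score doesn't remove the risk of
# default; it just decides who is exposed to which risk.

def decide_loan(credit_score: int, threshold: int = 650) -> str:
    """Return who bears the risk, given the bank's decision."""
    if credit_score >= threshold:
        # Bank approves: the bank bears the (smaller) default risk.
        return "approved: the bank bears the default risk"
    # Bank denies: the applicant bears the (larger) unmet-expense risk.
    return "denied: you bear the unmet-expense risk"

print(decide_loan(720))  # approved: the bank bears the default risk
print(decide_loan(580))  # denied: you bear the unmet-expense risk
```

Notice that neither branch eliminates the risk; the score just decides whose problem it is.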
Because Identity Allocates Risk, society makes rules to make sure Identity is used fairly. Two typical rules are (1) someone who wants to use your information has to tell you what it will be used for ("notice"), and (2) someone who wants to use your information in a way that might create risks for you has to get your permission ("consent"). You have to pay close attention here: the rules don't say that businesses and other parties can't create risks for you - all the rules say is that other parties have to tell you when they create risks for you, and they have to get you to agree to the creation of the risks.
These rules create obscurity, because in business, the language of risk is law. The bank makes lots of loans, and therefore it is exposed to lots of risk. Because it's exposed to lots of risk, the bank is willing to spend some money to protect itself against that risk. It spends that money on people who speak the language of risk - lawyers - and those lawyers write consent agreements that let the business do what it needs to do profitably (in this case, it needs to create risks for you by using your identity information) without breaking the rules.
You probably aren't a lawyer, so the language in which consent agreements are written is foreign, and confusing, to you. On the other hand, you don't value your privacy enough to hire your own lawyer each time you encounter a consent disclosure - so you end up doing something (reading a complicated legal agreement which allocates risks between you and the corporation) which you're not really qualified to do, and it's confusing and frustrating (Don Davis calls this kind of situation a "compliance defect").
Coercion works because you need services like banking and medical care more than banks and insurance companies need your business. Sometimes the coercion is completely explicit; look over here for an example (go to page 78; the document is PDF because the html version didn't render properly for me in Firefox or Safari, and I don't want to inflict it on your browser). Email me from prison if you decide to withhold consent.
Burdens work by wearing you down with work you don't really want to do. When you sign up for an online service, you really want the service (otherwise you wouldn't be at the signup page), and you're impatient to get it set up. If you have to do a lot of work (reading agreements, consulting the Better Business Bureau, the FTC, or other reputation services, checking out the information handling practices of all the partners the business shares your information with, and so on), you're very likely to give up and consent, thinking "how bad could it be?"
Even if you always read consent agreements carefully before you decide whether to accept them, wording like the following (adapted from an actual consent agreement) makes it hard to know exactly what you can expect:
Our business changes constantly, and our Privacy Notice and the Conditions of Use will change also. We may e-mail periodic reminders of our notices and conditions, unless you have instructed us not to, but you should check our Web site frequently to see recent changes. Unless stated otherwise, our current Privacy Notice applies to all information that we have about you and your account. We stand behind the promises we make, however, and will never materially change our policies and practices to make them less protective of customer information collected in the past without the consent of affected customers.
And, of course, sometimes things just go horribly wrong.
The Nub of the Problem
Remember the wording of Kim's First Law:
Technical identity systems must only reveal information identifying a user with the user's consent.
It's clear that this "First Law requirement" isn't feasible - a system which actually obeyed this law would be illegal (because it would withhold information in cases in which the law requires it to disclose information without the data subject's consent), and it would be dangerous to the data subject (because it would withhold personal information even in critical situations if consent couldn't be obtained - for example, when the data subject is unconscious and injured after an accident).
If you agree with most or all of what I've written above, you'll agree that the "First Law requirement" isn't desirable either, because it creates a lot of work for the individual without really solving the privacy problem.
The reason the First Law doesn't work is actually very deep and subtle, and I'll write more about it soon. But I'll leave you with a hint. The nub of the problem with the First Law is the assumption that privacy is based on secrecy.