12 January 2006

On The Absurdity of "Owning One's Identity"

Kim's First Law

Kim Cameron's First Law of Identity (User Control and Consent) says

Technical identity systems must only reveal information identifying a user with the user's consent.
Let's start by stipulating that this isn't a "law". In the technical world, laws describe things as they necessarily are, rather than as we want them to be. In these terms, Kim has stated a requirement, rather than a law. You can see this at a glance just by examining the grammar: "User Control and Consent" is in the imperative mood ("Technical identity systems must only reveal..."), whereas scientific laws are in the indicative mood ("A body in motion remains in motion"). It doesn't make any sense to ask whether a requirement is "true" or "false", but we can talk about whether "User Control and Consent" is feasible and whether it's desirable.

Owning Identities

We'll have to talk about several cases, because Identity is Subjective. There are lots of versions of your identity out there, but we'll lump them into two broad categories: your reputation (the story others tell about you), and your self-image (the story you tell about yourself). For each category, we'll talk about "Control" and then we'll talk about "Consent".

Owning My Story About You

Your reputation is my story about you. You can't own this by definition; as soon as you own it, it's no longer my story about you; it instantly becomes an autobiography instead of a reputation.


In principle, you could "Control" my story about you, but there are all sorts of good reasons you shouldn't be given this control - at least in public, and at least in the USA. There are exceptions, of course, but they're quite limited. The rules are similar elsewhere in the world, and you really don't want to change these rules. Let's say you want to stop me from talking about you (maybe you don't want me to talk about what diseases you have, or how much you spent on beer last month). You'd start by getting rid of this rule:
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
...because as long as the first amendment is around, I can tell anyone I want how much you spent on beer last month, and (as long as I'm telling the truth) you're just going to have to suck it up. Once our constitutional rights are out of the way, you'll still have to clean up little annoyances like this:
Copyright in a work protected under this title vests initially in the author or authors of the work.
US Code Title 17, chapter 2, section 201, part (a) (Copyright) actually is a law - though not a scientific law - and it's not compatible with User Control and Consent. Copyright says Michael Moore owns this story about George W. Bush. Because Moore owns the story, he, not Bush, controls its publication and distribution. It's not a story Bush likes, so if Bush did control it, you wouldn't get to see it.

After you've gotten rid of the laws that prevent people controlling their identities, you'll have to decide what to put in their place. If your goal is to stop me from telling the truth about you once I know it, and you want to stick with proven techniques, you might decide on something like this.

But then you'll run into another inconvenience - to enforce your control over whether and how I tell your story, you have to watch me all the time, to make sure I'm following the rules. And of course, I have to watch you all the time, to control your use of my identity. If your point was privacy in the first place, you might be getting worried about how much surveillance is creeping into the solution.

It's clear that this path is not leading us anywhere we want to go, and it's also clear why. Applying "User Control" to my story about you requires the government to give you authority over me - or to exercise authority over me on your behalf. Letting you control me this way creates fundamental conflicts with other values in a free society. This may be one of the reasons it's so hard to find a "right to privacy" in the founding documents of the United States - it's not just that the government finds it convenient to look into your affairs; it's also that enforcing a "right" to privacy requires the government to look into your affairs on behalf of others far too often.


If you can't get a workable privacy regime by giving you a right to control my behavior, you might want to do it by making me "Consent" to control my own behavior before you let me see your private information. At first blush, a Consent regime doesn't seem to create conflicts between fundamental rights (e.g. my right to free speech vs. your right to privacy) the way a Control regime does. By moving privacy into the realm of contract law, a Consent regime allows me to "opt out" of receiving your information from you if I don't like your rules for the use of that information, so it appears to preserve my autonomy.

In practice, though, a Consent regime quickly becomes a Control regime, because the consent relationship doesn't involve the right set of parties.

If a third party (say a business) asks you for information about yourself, there's a reasonable chance you'll make good decisions about what uses to consent to and what uses to prohibit. But remember: we're talking about my story, not your story; you're not involved in the telling of the story - I'm telling it to the third party. If a business asks me for information about you, I don't have any reason to withhold consent for any use whatsoever. This was one of the issues in the JetBlue disclosure; the government asked Acxiom for information about a bunch of individuals who weren't around when their stories got told.

Imposing a requirement that I get in touch with you and ask for consent whenever a situation like this arises drags Control back into the equation; it makes me ask for your permission every time somebody wants me to tell my story about you. This might be reasonable if all my information about you originates from you - but if that were true, we'd be talking about your story about you, not my story about you.

Imagine what happens in a Consent regime if I want to give "Conglomerex" a piece of information about you. If you gave me the information in the first place, you're free to tell me what I can and can't do with the information, and you're free to withhold the information if I don't agree. You're also in a position from which it's possible to impose a Consent rule: since I have to get in touch with you to collect the information, we can have a discussion about consent while I'm doing the collecting, or (if I'm not sure how I'm going to have to use the information) I can ask how to get in touch with you to get your consent when I need it. But if somebody else gave me the information, or if I discovered it myself, then I certainly don't have your consent, I may not have any way to contact you to get your consent, and I may object to getting your consent, on the grounds that you have no right to constrain my liberty.

Owning Your Story About You

Your self-image is your story about you.


In principle, controlling the information that makes up your self-image seems easy - you just choose what you tell to whom, and under what conditions. You can indeed decide on any rules you like for distributing identity information about yourself, but you have to make tradeoffs to enforce those rules.

You value your privacy, of course, but you also value other things, like the ability to get a credit card and the ability to travel on airplanes.

If you want to get on an airplane, you have to show ID, and all the acceptable forms of ID display your home address. This means you have to make a choice between getting on an airplane and keeping your home address to yourself. If you want to establish credit, you have to submit to a credit check, which reveals a lot more about you than just your home address. Here again, you have a choice between getting the credit and controlling information about yourself - if you want the credit, you have to give up information somebody else chooses, and you have to do it on somebody else's terms.

So while you might control your self-image information in principle, in practice you can't really exercise control - you have to negotiate the terms for the use of your information.

I'll note in passing that letting individuals control the use of information about themselves violates Kim's own criterion that "Claims need to be subject to evaluation by the party depending on them" - because individual control over information gives the individual the unfair advantage of being able to keep adverse information about himself secret.


Negotiating the terms on which you will disclose self-image information is what Consent is all about.

In many cases there are laws and regulations constraining what an organization can do with information it collects about you in situations like this, but you don't control the content of those laws and regulations - so you're not making the rules (and in fact the interests of society and the interests of corporations influence the content of laws and regulations at least as strongly as the interests of individuals).

If you want to control your identity based on consent, you have to decide between two approaches:

  1. Build one set of terms which covers all uses of your information, and let an automated system take care of negotiating your terms and enforcing your rules. In this case, you need to figure out in advance what all the possible scenarios for use of your identity are, and write a policy which covers each scenario.
  2. Negotiate terms manually each time someone asks for your information. In this case, you need to get notified each time someone tries to use your identity, and make a decision about whether or not to grant consent.
Case 1 clearly isn't going to work all the time; you can't know in advance what benefits are going to be offered in exchange for identity information, and you can't know in advance what risks are going to be created by giving that information out - so no matter what your policy is, there will always be cases it doesn't handle correctly. This means there will be lots of exceptions to your policy, and when these exceptions arise you'll have to fall back on case 2.
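The failure mode of case 1 can be sketched in a few lines of code. This is a toy illustration only - the rule format, the requester and purpose names, and the `evaluate` function are all invented for the example, not drawn from any real identity system:

```python
# "Case 1": a consent policy written in advance and evaluated automatically.
# The policy maps (requester_type, purpose) pairs to decisions.
POLICY = {
    ("bank", "credit_check"): "allow",
    ("airline", "security_screening"): "allow",
    ("retailer", "marketing"): "deny",
}

def evaluate(requester_type, purpose):
    """Return 'allow' or 'deny' for anticipated scenarios; otherwise
    fall back to 'manual_review' - which is case 2 all over again."""
    return POLICY.get((requester_type, purpose), "manual_review")

# Scenarios the policy author anticipated are handled automatically...
print(evaluate("bank", "credit_check"))         # allow
# ...but any use nobody anticipated falls through to manual consent.
print(evaluate("data_broker", "risk_scoring"))  # manual_review
```

However many rules you add, the lookup table can only cover scenarios someone thought of in advance; every novel request lands in the fallback branch.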

Case 2 doesn't really work either. We know because we've tried it. Look here, or here, or here, or here for examples of what you're already being asked to consent to. How well do you understand these terms? How likely are you to take the time to clear up the things you're not sure about? How likely are you to say "no"?

The forces at work here are obscurity, coercion, and burdens.

Obscurity exists not because the people who write consent agreements are trying to confuse you, but instead because of the third axiom of identity: Identity Allocates Risk.

Let's imagine that you don't have a lot of money, and you have a poor credit history, but you're facing a big expense. You go to the bank and ask for a loan. In this case, there's a fairly big risk that you won't be able to make the payments if you get the loan. The interesting question in this case is "who suffers the consequences of this risk?" If the bank gives you the loan, they suffer the consequences - because they're out the money if you don't pay it back. If the bank denies the loan application, you suffer the consequences - because you can't meet your expenses. The bank makes its decision based on a piece of identity information: your credit score. If the score is high, you get the loan, and the bank gets the (small) risk; otherwise, you don't get the loan and you get the (larger) risk. Your credit score allocates the risk either to you or to the bank.
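The bank's decision in the scenario above amounts to a single threshold test on one piece of identity information. Here is a deliberately crude sketch of it; the threshold value and the function name are invented for illustration:

```python
# Toy illustration of "identity allocates risk": one credit-score
# threshold decides who carries the downside. The score doesn't
# remove the risk - it only moves it from one party to the other.
APPROVAL_THRESHOLD = 650  # invented value for the example

def who_bears_the_risk(credit_score):
    if credit_score >= APPROVAL_THRESHOLD:
        # Loan approved: the bank is out the money if you default.
        return "bank"
    # Loan denied: you bear the (larger) risk of unmet expenses.
    return "applicant"

print(who_bears_the_risk(720))  # bank
print(who_bears_the_risk(580))  # applicant
```

Note that every branch assigns the risk to somebody; there is no outcome in which the risk simply disappears.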

Because Identity Allocates Risk, society makes rules to make sure Identity is used fairly. Two typical rules are (1) someone who wants to use your information has to tell you what it will be used for ("notice"), and (2) someone who wants to use your information in a way that might create risks for you has to get your permission ("consent"). You have to pay close attention here: the rules don't say that businesses and other parties can't create risks for you - all the rules say is that other parties have to tell you when they create risks for you, and they have to get you to agree to the creation of the risks.

These rules create obscurity, because in business, the language of risk is law. The bank makes lots of loans, and therefore it is exposed to lots of risk. Because it's exposed to lots of risk, the bank is willing to spend some money to protect itself against that risk. It spends that money on people who speak the language of risk - lawyers - and those lawyers write consent agreements that let the business do what it needs to do profitably (in this case, it needs to create risks for you by using your identity information) without breaking the rules.

You probably aren't a lawyer, so the language in which consent agreements are written is foreign, and confusing, to you. On the other hand, you don't value your privacy enough to hire your own lawyer each time you encounter a consent disclosure - so you end up doing something (reading a complicated legal agreement which allocates risks between you and the corporation) which you're not really qualified to do, and it's confusing and frustrating (Don Davis calls this kind of situation a "compliance defect").

Coercion works because you need services like banking and medical care more than banks and insurance companies need your business. Sometimes the coercion is completely explicit; look over here for an example (go to page 78; the document is PDF because the html version didn't render properly for me in Firefox or Safari, and I don't want to inflict it on your browser). Email me from prison if you decide to withhold consent.

Burdens work by wearing you down with work you don't really want to do. When you sign up for an online service, you really want the service (otherwise you wouldn't be at the signup page), and you're impatient to get it set up. If you have to do a lot of work (reading agreements, consulting the Better Business Bureau, the FTC, or other reputation services, checking out the information handling practices of all the partners the business shares your information with, and so on), you're very likely to give up and consent, thinking "how bad could it be?"

Even if you always read consent agreements carefully before you decide whether to accept them, wording like the following (adapted from an actual consent agreement) makes it hard to know exactly what you can expect:

Our business changes constantly, and our Privacy Notice and the Conditions of Use will change also. We may e-mail periodic reminders of our notices and conditions, unless you have instructed us not to, but you should check our Web site frequently to see recent changes. Unless stated otherwise, our current Privacy Notice applies to all information that we have about you and your account. We stand behind the promises we make, however, and will never materially change our policies and practices to make them less protective of customer information collected in the past without the consent of affected customers.
And, of course, sometimes things just go horribly wrong.

The Nub of the Problem

Remember the wording of Kim's First Law:

Technical identity systems must only reveal information identifying a user with the user's consent.
It's clear that this "First Law requirement" isn't feasible - a system which actually obeyed this law would be illegal (because it would withhold information in cases in which the law requires it to disclose information without the data subject's consent), and it would be dangerous to the data subject (because it would withhold personal information even in critical situations if consent couldn't be obtained - for example when the data subject is unconscious and injured after an accident).

If you agree with most or all of what I've written above, you'll agree that the "First Law requirement" isn't desirable either, because it creates a lot of work for the individual without really solving the privacy problem.

The reason the First Law doesn't work is actually very deep and subtle, and I'll write more about it soon. But I'll leave you with a hint. The nub of the problem with the First Law is the assumption that privacy is based on secrecy.


Blogger Frank Yeh said...

Perhaps the rule should be revised to state that Technical identity systems must only reveal information identifying a user and provided by the user with the user's consent.

This would serve to further qualify the user's identity data as
- non-user-provided, which as you state above, the user has no control or consent over
- user-provided-public, where the user consents to revealing data he has provided
- user-provided-private, where the user does not consent to revealing data he has provided.

There is a perfect example of the two types of user-provided data in the registration process for blogspot, where you can specify in your profile which pieces of data you wish to publish.

January 12, 2006 10:30 PM  
Blogger Jon Lebkowsky said...

I agree with Frank. Bob, I don't think the intent is as broad as you suggest. Kim et al are advocating for users to own their data in specific contexts. What we hope for is an identity framework wherein a user can determine how standard demographic data is transferred and used; a technology to support this would require *less* work on the user's part, not more.

January 13, 2006 12:05 PM  
Blogger Libra said...

Hi Bob,

What are your views on strong authentication? Feel free to send via email :)

Libra White

January 18, 2006 4:48 PM  
Blogger Deji said...

I couldn't tell if you were being facetious or not, and this could be because I find it difficult to continue through the rest of your article after suffering the first couple of paragraphs.

Your arguments are very specious - this, mind you, is why I'm wont to believe that you were just trying to be sarcastic and did not intend for this article to be taken seriously. It appears to me that you were attempting to project the ephemerals into real life. When Kim references Identity, I do not infer that he was talking about pictures or discomforting stares in public. But, again, I don't want to insult your intelligence, so I will choose not to believe that you thought so either.

You wrote:
Your reputation is my story about you. You can't own this by definition.....

No sir, my reputation is my reputation. Your story about me is your interpretation of my reputation and if it tarnishes my reputation in any way, I get to seek remedies. I get to haul you before a judge and seek redress - all because I (not you) own that reputation. You spread rumors about me, you slander me, you libel me with YOUR story, you hurt MY reputation and I own you thenceforth as far as redress is concerned.

Your story about me is not my reputation.

You wrote:
Let's say you want to stop me from talking about you.

No. We don't want to, and I don't think Kim was talking along this line. What we want to do, going along the line of your misfitting analogies, is to stop you from pretending to be me, or pretending to be speaking for me when you don't have the permission to do so from me.

Like I said, I feel dumb here. I think you are too smart to not realize the incongruity of your disagreement. Part of me is thinking that you probably straightened things out way down in your article, and the opening paragraphs were just intended to turn away trolls like me who have little tolerance for such flippancy. If that was the intention, well, congrats! you succeeded. But if the first couple of paragraphs are a representative sampling of the overall article, please let me register my disagreement with you.

February 09, 2006 7:14 PM  
Blogger John M said...

Bob, the best thing about your thoughts is that they challenge the basic position in the world of privacy advocates that I can and should control every bit of information about me. That is simply not a workable approach. But nobody is putting forward an alternative paradigm.

You might want to consider an old article by Phil Becker of Digital ID World that talks about "fair use". The idea that by participating in social transactions in a digital world, you leave behind information tracks that you ultimately cannot control brings some reality to the discussion. The idea should not be that you can control all this info, but that limited uses of it should be permitted without specific permission, bound by codes of conduct.

Excellent work.

February 20, 2006 12:41 PM  
Blogger Bob said...

Well, Deji, as you observe, we simply disagree. While I am indeed sometimes sarcastic, I did very much intend the article to be taken seriously. I have no idea what you mean by "project the ephemerals into real life", but what I am trying to do is to point out that real life is much more complicated than most of the explanations of real life which people use to justify the design of "identity management systems".

I'm genuinely not sure what Kim means when he references "identity" - in fact I think he means different things at different times; sometimes I think he means a collection of data in a profile, sometimes I think he means a human concept, and sometimes I think he means something like what a philosopher would call "core identity" (in whose existence I do not believe).

When you say "my reputation is my reputation", you are clearly aware that this is not a definition and carries no meaning. When you say my story is "an interpretation of your reputation", you do not specify what it is that I'm interpreting - which is precisely the problem I'm trying to point out here. And when you say that you get to seek remedies if my story "tarnishes" your reputation, you're right - but you won't be granted those remedies except in very specific circumstances (in the USA my story has to be both false and malicious, for example).

A question I'd love to hear you address is this: if "your reputation is your reputation", why do you & I need a judge to determine whether a particular statement "tarnishes" your reputation or merely confirms an existing known aspect of it?

If all you want me to stop doing is pretending to be you, you seem to be concerned about identity theft but not about privacy. Is this correct, or do you also want to stop me from revealing information you consider private? If you do indeed care about privacy, I'm at a loss to understand what you're disagreeing with me about - and I *am* sure that Kim cares about privacy.

Despite all this disagreement, I'm happy that you posted this comment, and it by no means qualifies you as a troll.

February 21, 2006 4:05 PM  
Blogger jitendra said...

Great article...couldn't agree with you more.

The basic issue is that man is a social being...and full control of one's identity just does not work in social interactions.


September 19, 2006 12:33 AM  
Blogger Mark Lizar said...

The 7 Laws as a technical solution to social theory.
The 7 Laws of Identity, I believe, are more of a pathway forward for identity than a code that provides a solution for identity. Bob's article reveals a contradiction with the first of the 7 laws, and I agree this contradiction is revealed through a sincere effort by Microsoft to mediate the identity market space. The systemic problem inherent in the identity marketplace is apparent throughout society; therefore any laws or solutions require a systemic approach. To cut to the point, all of this is underpinned by the economy of identity: identity is a currency of power, and new technologies are changing the traditional hegemony.

I think the identity camp can be split into those that are motivated on behalf of traditional hegemony (commercial interests) and those that are motivated by the hegemonic use of one's own identity. Bob's article paints colour into the paradigm that is now the current state of affairs in regards to the economy of identity. One of the most noteworthy points I have found from reading this article is the term 'compliance defect' - a term I would like to see used in the civil courtroom of the future. Above all else this article, and its operational definition of consent, paints a picture of Microsoft which reminds one of the words of Durkheim the social theorist; Durkheim had an interesting control theory that, like Bob, has inspired me to separate the benefit of the technology Microsoft has enabled for me from the need for a structure that represents the camp that I am sitting in (the hegemonic use of my own identity camp). There is much at stake, and if I get what Bob's point is, there certainly do exist conflicts with the subject that the Microsoft 7 laws are attempting to mediate, which do not address the (f)actual issues at the root of the problems regarding 'the economy of identity'.

For example, Kim Cameron unofficially opened the IAPP conference in Toronto last Wednesday with the Ontario Privacy Commissioner. At that point, Ann Cavoukian, the Ontario privacy commissioner, released a white paper which grafted the Fair Information Principles over the 7 laws of identity. Is this the solution? Hardly. Microsoft provides what looks like a systemic interface, and Bob's article is a very 'path'-revealing article. Ultimately, solutions need to address the systemic issues that our identities deal with, which require a central focus of autonomy. Where is the transparency? Where is the balance? What do you think, Bob - is it a secret?

Mark Lizar – Identity Trust

October 26, 2006 5:14 AM  
Blogger Unknown said...

This is a very interesting post, touching on the tricky topics of privacy and identity. I like the critique of the somewhat overwrought Laws of Identity. But I find the privacy angle less convincing. It’s not that the points about personal privacy are wrong, but rather they are very much tangential to the main thrust of modern information privacy law.
The cause of practical privacy governance is much helped if we actually decouple privacy and identity, as ironic as that might sound. “Identity” is indeed a slippery, soft concept, and like “privacy”, resists definition. But this is why information privacy law tends not to rest on “identity” but rather on the much drier idea of personally identifiable information. Likewise, privacy law tends not to even use the terms “private” and “public” because they are so problematic. Instead, we can construct effective information privacy laws without any philosophical complexity, by basing it on principles that provide individuals some rights to control the flow of information that pertains to them.
Modern information privacy law is quite simply framed. It generally forbids the collection by governments and businesses of personally identifiable information (PII) without good cause, and the re-use of PII for purposes unrelated to the primary reason for collection. These are quite straightforward, enforceable measures.
While it is true that you can't stop people from observing you or taking your picture or talking about you, these are not actually the concerns of information privacy law. To begin with, most privacy law governs the acts of organisations, not of individuals. And it’s specifically concerned with recorded information (not merely “talk”) about identifiable individuals. So under privacy law you can in fact stop corporations and governments from recording observations about you in which you are named, linking in other material, putting such records to new purposes, making them available to others and so on.
Finally, the suggestion that enforcing privacy laws requires individuals to put others under their own surveillance really is a red herring. Privacy law is about certain types of information and the availability of that information beyond defined limits. I don't need to watch an infringer of my privacy in real time in order to catch them at it, and I certainly don't need to infringe their privacy. Instead, I need to become aware of information artefacts of the infringer that violate Information Privacy Principles.

August 11, 2010 7:08 PM  
