Agreeing About Identity
This article appeared as my column for Connect Magazine in April 2006.
This morning I listened to a program on the Diane Rehm Show titled "Surveillance Society." The guests were discussing something techies have known for a long time: advances in data mining, storage capacity, and processing power are challenging conventional notions of privacy. Not surprisingly, public policies on privacy are playing catch-up to advances in technology. Well-known privacy expert Daniel Solove calls our current legal situation an "architecture of vulnerability."
You can't talk about digital identity for very long before the issue of privacy comes up. The most interesting applications of digital identity also touch on the parts of our lives we most want to keep private. Finances, health, and our daily activities spring to mind.
Kim Cameron, Microsoft's Chief Identity Architect, has written famously about the "laws of identity," seven rules that he thinks digital identity systems have to obey if they are to gain public trust and widespread adoption. The first law has important privacy implications: "Technical identity systems must only reveal information identifying a user with the user's consent."
Most people, upon first hearing this law, think it means they should be able to prohibit any use of information about themselves without their consent. I think that broad interpretation is bound to lead to unsolvable problems. For example, free speech allows me to write about you, even to post your description and address on the Web, without getting your permission. Are we ready to trade free speech for privacy?
The key to overcoming these contradictions is defining what information identifies a user. The fact is that there's lots of information about us that isn't identifying in the narrow sense, but rather describes transactions we've entered into or represents opinions others have about us--what we call reputation.
Rehm's show on the Surveillance Society dealt with a topic very worrisome to many people: companies collect and aggregate a lot of information about each one of us and are willing to sell it to anyone who can meet their price, including the government. So, while the government is prohibited from amassing large databases on citizens, nothing prevents ChoicePoint or LexisNexis from doing the same.
Most people don't realize the extent of these collections. We tend to think of them as mailing lists, but the data goes well beyond that, and the conclusions that companies reach using that data can control everything from whether you see an ad for cars or perfume to whether you get to buy that house you've been dreaming of.
Most of the data in these collections is transactional--records of interactions you've had with numerous companies. Companies analyze the transactions to create reputational data. Your credit score, for example, is reputational data that allows financial institutions to quantify the risk of giving you a loan.
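As a toy illustration of the difference (the numbers are invented, and the scoring rule is far simpler than any real model), here is how a pile of transactional records might be boiled down into a single reputational statistic:

    # Hypothetical payment history for one person.
    transactions = [
        {"amount": 1200, "paid_on_time": True},
        {"amount": 300,  "paid_on_time": True},
        {"amount": 450,  "paid_on_time": False},
    ]

    def on_time_rate(records):
        """One crude reputational number: the fraction of
        payments made on time."""
        return sum(r["paid_on_time"] for r in records) / len(records)

    print(f"{on_time_rate(transactions):.0%}")  # prints 67%

Each row is a transaction; the score is a judgment derived from them. It's that derived judgment, not the raw rows, that gets sold and acted on.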
These collections are a two-edged sword. On one hand, targeted ads seem less like advertisements and more like helpful recommendations. And without credit scores, banks would be much less likely to give many of us loans. On the other hand, we're all fearful that the data might be used to harm, rather than help, us.
I believe that part of the answer to the contradictions inherent in privacy and digital identity lies in a concept called identity rights. Under a dual system of identity rights agreements (IRAs) and a network for service provider reputation, identifying information would be protected through a voluntary system enforced by public pressure. These ideas are currently under development at Identity Commons.
Identity rights agreements are a way for users to express their preferences about how their personally identifying information is collected and used. IRAs would be expressed as a set of jargon-free choices linked to legally enforceable contracts. Parties requesting identity information would agree to abide by the user's wishes or not. Users would have a clear idea of what could happen to their data before they turn it over.
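To make that concrete, here is a minimal sketch, in Python, of how an IRA might be represented as a machine-readable set of choices that a requesting party either accepts or declines. The field names and matching rule are my own invention for illustration; the actual Identity Commons work may look nothing like this.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class IRA:
        """A user's terms for handing over identifying information.
        Fields here are hypothetical; a real IRA would be linked to a
        legally enforceable contract."""
        may_resell: bool = False       # may the provider sell the data?
        may_aggregate: bool = False    # may it be merged with other sources?
        retention_days: int = 90       # how long the data may be kept
        purposes: frozenset = frozenset({"fulfillment"})  # consented uses

    def provider_complies(requested: IRA, granted: IRA) -> bool:
        """True if the provider's intended use stays within the user's
        wishes; the provider agrees to abide, or walks away."""
        return (requested.may_resell <= granted.may_resell
                and requested.may_aggregate <= granted.may_aggregate
                and requested.retention_days <= granted.retention_days
                and requested.purposes <= granted.purposes)

A provider whose request can't satisfy this check would simply decline the transaction, and the user would know, in advance and in plain terms, what the provider intended to do with the data.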
The service provider reputation network would give users information about whom to trust. Service providers who honor IRAs would be identified in the network, and users would be able to tell which providers were part of the network and which were not.
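The user-side check might look something like the following sketch. The registry and its fields are stand-ins I've made up; a real reputation network would be a shared, queryable service, not a local table.

    # Stand-in for the shared reputation network.
    REPUTATION_NETWORK = {
        "shop.example.com": {"member": True,  "complaints": 1},
        "ads.example.net":  {"member": False, "complaints": 37},
    }

    def in_good_standing(provider: str, max_complaints: int = 5) -> bool:
        """Is the provider a network member that honors IRAs?"""
        record = REPUTATION_NETWORK.get(provider)
        return bool(record and record["member"]
                    and record["complaints"] <= max_complaints)

The enforcement here is public pressure rather than law: providers who break their agreements lose standing in the network, and users can see it before handing anything over.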
Protecting user identity data on the Internet requires more than technical solutions. What's more, new legal solutions are likely to be a long time coming and likely to be unsatisfying when they arrive. I believe IRAs and a reputation network, or something like them, could provide an alternate route to protecting user data.
Phil Windley teaches Computer Science at Brigham Young University. Windley writes a blog on enterprise computing at http://www.windley.com and is the author of Digital Identity, published by O'Reilly Media. Contact him at phil@windley.com.