Dave Winer's Club140.org gives us a good example of how hard it is to protect data. For those of you not following along at home, Dave created a site, called Club140, that lists any tweets he sees on Twitter that are exactly 140 characters long (the max allowed by Twitter).
Today, Dave posted this on Twitter:
i just added code to http://club140.org/ to filter out messages from people posting from "protected" accounts. hadn't thought of it before.
The issue is that some people have their tweets protected so that only people who are following them can see what they write. Dave, by reposting those protected tweets, was allowing the protected tweets to leak onto the 'Net for all to see.
Not a big deal in this case, and Dave corrected it, but it's illustrative of the problem we have with explicit authorizations of any kind--one that's at the heart of many of the discussions surrounding privacy.
Trying to protect data with explicit permissions (e.g., "you can share my blog URL freely, keep but not share my email address, and use my SSN once for the explicit purpose agreed to and then destroy it") makes the problems of DRM look like child's play, and we know how well that's worked.
The problem is that explicit permissions scale geometrically. Picture a three-dimensional table with people on one axis, resources (like tweets) on the second, and possible actions along the third. Put a T or F at each intersection indicating whether or not person P is allowed to take action A on resource R. Now, make sure these permissions travel around with each resource (including fragments) in a way everyone can read, no one can tamper with, and is extensible as others add their own data. Eek!
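To make the scaling problem concrete, here's a minimal sketch of that three-dimensional table in Python. Everything here (the names, the `allowed` helper, the default-deny choice) is a hypothetical illustration, not any real system's API:

```python
from itertools import product

# Hypothetical axes: people, resources, and actions.
people = ["alice", "bob", "dave"]
resources = ["tweet-1", "tweet-2"]
actions = ["read", "repost", "archive"]

# One explicit True/False entry per (person, resource, action) triple,
# defaulting to deny.
acl = {
    (p, r, a): False
    for p, r, a in product(people, resources, actions)
}

# Each grant must be made explicitly, one cell at a time.
acl[("alice", "tweet-1", "read")] = True

def allowed(person, resource, action):
    """Look up the explicit entry for this triple (deny if absent)."""
    return acl.get((person, resource, action), False)

# The table's size is the product of all three axes:
# 3 people x 2 resources x 3 actions = 18 entries.
print(len(acl))
```

Add one person, one resource, or one action and the table grows multiplicatively, and that's before you try to make it tamper-proof and portable alongside every fragment of every resource.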
There are systems that scale better. Auditing is one. Someone who wants their tweets protected can see that Dave is sharing them and call him on it. Auditing scales linearly but requires transparency. If Dave weren't posting the tweets, but rather sending them off surreptitiously to the CIA, then no one would be the wiser.
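The audit-based alternative can be sketched just as simply. Again, the names here are illustrative assumptions: instead of checking a permission cell up front, every use is recorded, and a resource's owner reviews the record after the fact:

```python
# A log of uses, growing linearly with the number of accesses,
# not with the product of people x resources x actions.
audit_log = []

def record_use(actor, resource, action):
    """Append one entry per use of a resource."""
    audit_log.append((actor, resource, action))

def uses_of(resource):
    """Let a resource's owner see every recorded use of it."""
    return [entry for entry in audit_log if entry[1] == resource]

# Hypothetical example: Club140 reposts a protected tweet.
record_use("club140", "protected-tweet-42", "repost")

# The tweet's owner can now spot the repost and call it out.
print(uses_of("protected-tweet-42"))
```

The catch is exactly the one in the paragraph above: the log only helps if it's visible. A surreptitious use never gets recorded, which is why auditing depends on transparency.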
That's where trust comes into play. Presumably, people allow Dave to see their protected tweets because they trust him to protect their privacy. He did, and I'm certain that would be the case whether or not there was transparency.
What we're after, of course, is accountability. We use things like explicit authorization as proxies for accountability so often that we're in danger of confusing the means with the end. In reality, there are many ways of achieving those ends, with varying degrees of cost and effectiveness, but there's no silver bullet. The techniques that have served us well offline, based on transparency, are guides to what will work online as well.