Trust is complicated. But for some reason, online trust mechanisms assume it is outrageously simple.
For example, firewalls imply that once you’re in the network, you’re trusted. It’s baked into the framing of the problem. Similarly, Trust Frameworks assume that once you are in the Framework, you’re trusted (although you could build a framework that is dynamic). Even a user directed approach like Facebook Connect assumes that once you click “allow”, you trust that website to use your information appropriately, essentially forever… even if you revoke that permission later.
Trust isn’t broad-based and it isn’t static. It is directed and dynamic.
Think about it. We don’t trust our accountant to babysit and we don’t trust our babysitter with our finances. Trust is given for specific purposes and in specific contexts and it changes as quickly as we can fire that babysitter.
We trust the receptionist at the doctor’s office with our written medical histories because he is behind the counter, apparently employed by the doctor who needs that information to do her job. We trust the bartender with our credit card because she’s behind the bar serving drinks and we accept that it will be kept safe and not used until we close out the bill. But we wouldn’t give that receptionist our medical history if we met him in a bar later that evening, and we wouldn’t give that bartender our credit card if we met her as a fellow patient in the doctor’s office the next day.
We trust people to do specific things—or not to do certain other things—and that trust is based on the context in which we give it and the state of our relationship with the trusted party.
That means that just like our relationships, trust changes over time. Trust systems need ways to discover that trust should change and to let that change be managed. Reagan put it perfectly: "Trust, but verify."
When verification fails, trust changes.
Whether it’s a romantic partner, a subcontractor, a company, or a top-secret agent, trust is granted incrementally. When it is lost, it is often lost all at once.
Incremental trust happens all the time. We don’t like logging in just to view a web page, but we don’t mind it to see confidential information like order history. We aren’t comfortable giving our credit card just to enter a store (the relationship isn’t ready for that yet), but we don’t mind once we start the checkout process.
When we lose trust, we sometimes throw the jerks out on the street. Betrayal is an unfortunate fact of life; it also matters greatly for how we handle online trust. How do we “break up” with service providers? Revoking consent and demanding that our data be purged is an obvious need, but one that is often obscured or impossible. As our relationships change, our trust changes. Yet our digital trust models mostly don’t.
Online trust models assume that trust is binary, broad, and stable—that you either have it or you don’t—for one simple reason: it’s easy to implement.
When we log into a website with Facebook Connect, Facebook verifies that we want to share information with the website. However, there is no way for us to modify the permissions. We can’t say what use is allowed and what isn’t. We can’t pick and choose which data they get. We can’t ask for additional consideration. And we can’t put a time limit on access. Facebook’s interface presumes all-or-nothing and forever, for anything. But what we’d really like is something like this:
“You can write to my wall, but only for messages I explicitly approve. You can have my email address but only for account maintenance, not for “special offers” from you or your associates. You can’t have access to my home address. You can use the photos tagged “public” for one month after I post them, but I want a revenue share from any money you make from them. Ask me another time about reading my inbox.”
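A trust model like that is not hard to express in code. Here is a minimal sketch (my own illustration, not any real platform’s API) of a permission grant that is scoped to a resource and a purpose, carries an expiry, and can be revoked:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Grant:
    """One directed, dynamic permission: a resource, for a purpose, for a while."""
    resource: str                      # e.g. "email", "wall", "photos:public"
    purpose: str                       # e.g. "account-maintenance", "special-offers"
    expires: Optional[datetime] = None # None means no time limit
    revoked: bool = False              # the user can flip this at any time

    def allows(self, resource: str, purpose: str, now: datetime) -> bool:
        if self.revoked:
            return False
        if self.expires is not None and now > self.expires:
            return False
        return self.resource == resource and self.purpose == purpose

now = datetime(2024, 1, 1)
grant = Grant("email", "account-maintenance", expires=now + timedelta(days=30))

print(grant.allows("email", "account-maintenance", now))  # True: in scope, in time
print(grant.allows("email", "special-offers", now))       # False: wrong purpose
grant.revoked = True
print(grant.allows("email", "account-maintenance", now))  # False: revoked
```

The point isn’t the twenty lines of Python; it’s that purpose-scoped, expiring, revocable grants are cheap to implement once the platform decides to allow them.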
In order for our trust model to support transactions like this, it needs to be specific and flexible. It should not only let us direct our trust to specific purposes, it should make it easy to moderate that trust as our relationships evolve.
Lawrence Lessig famously said, “Code is Law.” Trust models like Facebook’s, and the code behind them, make it nearly impossible for sites to allow the kind of user-driven permissions we need. Our relationships evolve, but the current platforms are too brittle for developers to implement flexible, user-respecting approaches to privacy and permission unless they are willing to jump through hoops and hack around arbitrary technical limitations. We need a new code base that actually makes it easy for developers to do the right thing, rather than code that enshrines restrictive and disempowering practices as firmly as if the law made them mandatory.
Because the one thing I know is that tomorrow will be different, and the harder it is for developers to support changing relationships, the harder it is for the entire ecosystem to respond to changing needs.
Stop the monolithic permissioning ceremonies!
Trust is directed and dynamic. Deal with it.
Until we do, online trust will remain brittle and untenable for our most important, powerful, and profitable relationships.