Towards a node.js Auto-Tweeter

I’ve been intrigued by node.js as a platform for highly scalable server applications written in JavaScript and finally found a super simple application I wanted to try with it: an auto-tweeter that would let me schedule future tweets to my own account.  I’m organizing a pub crawl for Repeal Day and I want a flight of tweets to go out during the crawl… without me or my partners having to do it manually.

I mentioned this to my buddies at Santa Barbara Hacker Space and we made it a collaborative project.  I’ll miss this week’s WebTech Wednesday session, but perhaps a write-up will help keep the conversation going.

The idea is pretty simple.  There are three components.

  1. The tweet engine
  2. The datastore
  3. CRUD Interfaces to the data store

The tweet engine will query the datastore every so often and see if there are any tweets that need to be tweeted (because their tweet_by timestamp is in the past).

The datastore will hold the tweets and their tweet_by time, stored as UTC timestamps.

The CRUD interface will let us create, read, update and delete those tweets.
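The due-tweet check is simple enough to sketch in node.js. This is an illustration of the design above, not the project’s actual code; the in-memory array and the exact field names (`tweet_by`, `sent`) are assumptions:

```javascript
// Sketch: find tweets whose tweet_by timestamp (UTC, in milliseconds)
// is in the past. The datastore here is just an in-memory array; the
// real engine would run this query against whatever store we pick.
function findDueTweets(tweets, now) {
  return tweets.filter(function (t) {
    return !t.sent && t.tweet_by <= now;
  });
}

// Example: one tweet already due, one scheduled for later.
var queue = [
  { text: 'Repeal Day crawl starts now!', tweet_by: 1000, sent: false },
  { text: 'Next stop: the speakeasy.', tweet_by: 9999, sent: false }
];
var due = findDueTweets(queue, 5000);
// due holds only the first tweet
```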

Future ideas:

  1. Use Twitter authentication for accessing the app
  2. Bulk uploader for multiple tweets
  3. Support multiple users
  4. Support queues of tweets, which get tweeted at a regular schedule
  5. Support additional entity data with the tweets, such as geolocation

Issues:

  1. Managing the timestamp will take some thought. The CRUD interface should use localtime—with an option to override the timezone—but store UTC in the database. That will avoid some common problems with tweets going out on the server’s time zone when the user thought it was set for theirs.
  2. This first version will, unfortunately, be totally exposed to anyone who knows the URL. We’ll add twitter authentication as soon as possible.
  3. Timing the engine will take some finesse. We could just poll the database, but that wastes a lot of cycles.  Instead, I’d like the engine to schedule itself to run at the next tweet_by time and have the CRUD either wake the process up early or kill & restart it.
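The self-scheduling idea in issue 3 can be sketched with node’s built-in `setTimeout`. This is only an illustration of the approach; `schedule` and `sendTweet` are hypothetical names, and the real engine would read the next tweet_by time from the datastore:

```javascript
// Sketch: sleep until the next tweet_by time instead of polling.
// A CRUD update just calls schedule() again to wake the engine earlier.
var timer = null;

function schedule(nextTweetBy, sendTweet) {
  if (timer) clearTimeout(timer); // rescheduling cancels the old wake-up
  var delay = Math.max(0, nextTweetBy - Date.now());
  timer = setTimeout(sendTweet, delay);
  return delay;
}

// Example: a tweet due five seconds from now sleeps roughly 5000 ms.
var delay = schedule(Date.now() + 5000, function () {
  console.log('tweet!');
});
```

With this in place, a CRUD write that adds an earlier tweet just calls `schedule()` again, so killing and restarting the process becomes unnecessary.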

That’s it for the start. More as it happens.

Posted in AutoTweeter, coding | 6 Comments

Trust Me… Things Change.

Trust is complicated. But for some reason, online trust mechanisms assume it is outrageously simple.

For example, firewalls imply that once you’re in the network, you’re trusted. It’s baked into the framing of the problem. Similarly, Trust Frameworks assume that once you are in the Framework, you’re trusted (although you could build a framework that is dynamic). Even a user directed approach like Facebook Connect assumes that once you click “allow”, you trust that website to use your information appropriately, essentially forever… even if you revoke that permission later.

Trust isn’t broad-based and it isn’t static. It is directed and dynamic.

Think about it. We don’t trust our accountant to babysit and we don’t trust our babysitter with our finances. Trust is given for specific purposes and in specific contexts and it changes as quickly as we can fire that babysitter.

We trust the receptionist at the doctor’s office with our written medical histories because he is behind the counter, apparently employed by the doctor who needs that information to do her job.  We trust the bartender with our credit card because she’s behind the bar serving drinks and we accept that it will be kept safe and not used until we close out the bill.  But we wouldn’t give that receptionist our medical history if we met him in a bar later that evening, and we wouldn’t give that bartender our credit card if we met her as a fellow patient in the doctor’s office the next day.

We trust people to do specific things—or not to do certain other things—and that trust is based on the context in which we give it and the state of our relationship with the trusted party.

That means that just like our relationships, trust changes over time. Trust systems need ways to discover that trust should change and allow for that change to be managed. Reagan put it perfectly, “Trust but verify.”

When verification fails, trust changes.

Whether it’s a romantic partner, a subcontractor, a company, or a top-secret agent, trust is granted incrementally. When it is lost, it is often destroyed all at once.

Incremental trust happens all the time. We don’t like logging in just to view a web page, but we don’t mind it to see confidential information like order history. We aren’t comfortable giving our credit card just to enter a store—the relationship isn’t ready for that yet—but we don’t mind once we start the checkout process.

When we lose trust, we sometimes throw the jerks out on the street. Betrayal is an unfortunate fact of life; it also has great significance for how we handle online trust. How do we “break up” with service providers? Revoking consent and demanding that our data be purged is an obvious need, but one that is often obscured or impossible.  As our relationships change, our trust changes. Yet our digital trust models mostly don’t.

Online trust models assume that trust is binary, broad, and stable—that you either have it or you don’t—for one simple reason: because it’s easy to implement.

When we log into a website with Facebook Connect, Facebook verifies that we want to share information with the website. However, there is no way for us to modify the permissions. We can’t say what use is allowed and what isn’t. We can’t pick and choose which data they get. We can’t ask for additional consideration. And we can’t put a time limit on access. Facebook’s interface presumes all-or-nothing and forever, for anything. But what we’d really like is something like this:

“You can write to my wall, but only for messages I explicitly approve. You can have my email address but only for account maintenance, not for “special offers” from you or your associates. You can’t have access to my home address. You can use the photos tagged “public” for one month after I post them, but I want a revenue share from any money you make from them. Ask me another time about reading my inbox.”
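No platform offers anything like this today, but the wish list itself isn’t hard to express as data. A hypothetical sketch, with every field name invented for illustration:

```javascript
// Hypothetical: the grant above expressed as scoped, purpose-bound,
// expiring permissions. No current platform supports a grant like this.
var grant = {
  wall_write:   { allowed: true,  constraint: 'explicit approval per message' },
  email:        { allowed: true,  purposes: ['account maintenance'] }, // no "special offers"
  home_address: { allowed: false },
  photos:       { allowed: true,  filter: 'tagged public', expires_days: 30,
                  terms: 'revenue share on commercial use' },
  inbox_read:   { allowed: false, ask_again: true }
};

// A site checks a specific resource for a specific purpose.
function mayUse(grant, resource, purpose) {
  var p = grant[resource];
  if (!p || !p.allowed) return false;
  return !p.purposes || p.purposes.indexOf(purpose) !== -1;
}
```

The point isn’t the particular schema; it’s that trust becomes a set of directed, revocable statements instead of one “allow” button.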

In order for our trust model to support transactions like this, it needs to be specific and flexible. It should not only let us direct our trust to specific purposes, it should make it easy to moderate that trust as our relationships evolve.

Lawrence Lessig famously said “Code is Law”. Trust models like Facebook’s, and the code behind them, make it nearly impossible for sites to allow the kind of user-driven permissions we need. While our relationships evolve, the current platforms are actually too brittle for developers to implement flexible, user-respecting approaches to privacy and permission unless they are willing to jump through hoops and hack around arbitrary technical limitations.  We need a new code base that actually makes it easy for developers to do the right thing, rather than code that enshrines restrictive and disempowering practices as strongly as if the law made them mandatory.

Because the one thing I know is that tomorrow will be different, and the harder it is for developers to support changing relationships, the harder it is for the entire ecosystem to respond to changing needs.

In short:

Stop the monolithic permissioning ceremonies!

Trust evolves.

Deal with it.

Until we do, online trust will remain brittle and untenable for our most important, powerful, and profitable relationships.

Posted in Information Sharing, Personal Data Store, Shared Information, User Driven Services, Vendor Relationship Management | 3 Comments

Fourth Parties are agents. Third Parties aren’t necessarily.

“Fourth party” is a powerful but sometimes confusing term. In fact, I think Doc mischaracterized it in a recent post to the ProjectVRM mailing list.

Normally, I wouldn’t nitpick about this, but there are two key domains where this is vital and I’m knee deep in both: contracts and platforms.

Doc said:

Like, is the customer always the first party and the vendor the second party?

Well, no. So, some clarification.

First and second parties are like the first and second person voices in speech. The person speaking is the first person, and uses the first person voice (I, me, mine, myself). The person being addressed is the second person, and is addressed in the second person voice (you, your, yourself).

And

To sum it up, third parties mostly assist vendors. That is, they show up as helpers to vendors.

The first point is great, and if you continue this further (and make the leap from parties to data providers), you get something like this:

The ownership of “your” and “my” data is usually clear. However, ownership of the different types of “our” data is a challenge at best.  To complicate matters further, every instance of “my data” is somebody else’s “your data”. In every case, there is this mutually reciprocal relationship between us and them. In the VRM case, we usually think of the individual as owning “my data” and the vendor as owning “your data”, but for the vendor, the reverse is true: to them their data is “my data” and the individual’s data is “your data”. Similar dynamics occur when the other party is an individual. I bring my data, you bring your data, and together we’ll engage with “our” data. We need an approach that respects and applies to everyone’s data: you, me, them, everybody.

Which is from my post on data ownership. The trick is that 1st party and 2nd party perspectives are symmetrical.  To them, we are the 2nd party and they themselves are the 1st party. Whatever solution we come up with in the VRM world needs to work for everyone as their own 1st party. Everyone. Including “them”. Including Vendors.

In fact, that’s the only way we can get out of the client-server, subservient mentality of the web. It’s also the only way to make sure that our solutions work even when the “vendor” is our neighbor, our friend, or our family.

This is particularly clear in the work we are doing at the Kantara Initiative’s Information Sharing Work Group. We are creating a legal framework for protecting information individuals share with service providers. As such, it’s vital that the potential ambiguities of language are anchored in rigorous definitions. And what has emerged is that every transaction is covered by a contract between two parties. Not three. Not four. Not one. Two. And to the extent that third (or fourth) parties are mentioned, they are outsiders and not party to the contract. Since we are building a Trust Framework, there is a suite of contracts covering the different relationships in the system, but the legal obligations assumed in each contract have clear and unambiguous commitments between the first and second parties only.

Platforms

But where I think Doc’s framing most needs a bit of correction is that, in fact, historically, third parties have never been presumed to be working for the second party. Not in the vernacular and not in any legal context. This presumption only emerges once you add a fourth party claiming that it works on behalf of the user. That is, 3rd-party-as-ally-of-the-2nd-party is a corollary to the Fourth Party concept, not a foundation for explaining it.

Take Skype, which I have on my Verizon cell phone. In the contract with Verizon, Skype is a third-party application and Skype, Inc. is the third party.  But Skype isn’t working on Verizon’s behalf.

This is not only true in the sense of 3rd party applications whose value proposition is clearly at odds with the 2nd party, it is even more true when it comes to platforms. And especially when you consider the relevance of VRM as a platform for innovation.

In every platform, there are third parties who create apps that run on the platform. Microsoft built Windows, but Adobe built Photoshop. Apple built the iPhone, but Skype built Skype.  For platforms to be successful, they necessarily bring in 3rd party developers to build on top of the platform. These developers aren’t necessarily working on behalf of the platform provider, and it would be a miscarriage of alignment to claim that they are. They are out for themselves, usually by providing unique value to the end user. Some new widget that makes life better.

This becomes even more true when you are dealing with open platforms, or what I called Level 4 Platforms (building on Marc Andreessen’s The 3 Platforms You Meet on the Internet). In open platforms, you actually have 3rd parties helping contribute to the code base of the platform itself.  Netscape adds tables to HTML. Microsoft adds the <marquee> tag.  But here, it is even crazier to imagine that these 3rd parties are acting on behalf of the platform party… because there really isn’t a platform party. Nobody owns the Internet.

I think the right way to think about 4th Parties is that they have a fiduciary responsibility to the 1st party and 3rd parties may or may not.

Fourth Parties answer to the 1st party.

3rd Parties may not answer to anyone.

Posted in ProjectVRM, Vendor Relationship Management | 2 Comments

World Economic Forum and Personal Data as an Asset Class

At this last week’s Personal Data Deep Dive in Palo Alto, I had a chance to talk with some of the folks working with the World Economic Forum about their recent report Personal Data: The Emergence of a New Asset Class.

While I remain concerned about how the institutions of globalization might co-opt personal data to further their own ends, it almost certainly isn’t as bad as recently discussed on the Project VRM mailing list.

My realization: of course WEF would see data as an asset class. If it weren’t, it wouldn’t even make it onto their radar. Complaining about the WEF seeing personal data as an asset is a bit like complaining that Mozart sees everything as music.  Sure, it might be a limited framework and might be abused if applied too broadly, but it’s perhaps the most real way for the WEF to think about how personal data will lead to changes in the global economy.

It is worth understanding that the paper is an early step in acculturating Fortune 1000 CEOs to a new reality about user-driven services, volunteered personal information, and the entire VRM gestalt. It’s a baby step.

But it is a step.

Indeed, the folks at the workshop were well aware of the kind of reaction they are bound to get from communities like VRM. Bottom-up groups tend to distrust top-down institutions.  Fair enough. But think about it from the perspective of the folks inside the WEF who are fighting the good fight, not just because it’s moral or politic, but because it is perhaps the only viable route beyond the information overload facing our entire information infrastructure.  Those folks need to light the minds of global business leaders without igniting fear that the house is on fire.

Posted in Personal Data Store, ProjectVRM, Vendor Relationship Management | 4 Comments

Constellations of Privacy

Privacy issues dominate the global debate about protecting the rights of individuals online. Yet, the conversation almost entirely misses a vital point: public or private isn’t a black or white choice and it never has been.

Sociologists have long recognized that there is no single “public”, no monolithic context where social norms congeal and deviant behavior is shunned. Instead, each of us engages in multiple, distinct publics, each with its own boundaries and rules of etiquette. We act one way at PTA meetings, another in Las Vegas, and a third way at work. These are different flavors of public behavior, and yet we expect most people to respect the boundaries between these areas.  We manage these publics as easily and naturally as we greet a newcomer to the workplace. We do it without conscious effort, but with a sensitivity to the place and moment.

We also constantly manage any number of private contexts, where certain topics and behaviors are understood to be held in confidence, from family matters and Santa Claus to corporate secrets. Just as there is no monolithic “public”, there is no single “private” domain, where insiders know everything and outsiders know nothing. We share confidences with our kids, our spouses, our lawyers, our doctors, our psychiatrists, our bartenders, our business partners, all with certain, often unstated, rules about appropriate use and redistribution. Sure, these confidences are occasionally broken, but when they are, it violates our privacy and breaks trust. Frackin’ jerks!

Privacy leaks occur when information entrusted to one context finds its way into another: when our doctor’s receptionist tells our co-worker a bit too much about our visit, a boss overhears an embarrassing story about where we really were last Monday, or an ex tells her gossiping friends about our intimate moments. These context violations hurt whether they occur between “private” contexts or “public” ones. The issue isn’t whether something was said or done in “public” or “private”. The issue is the boundary breaking, the violation of expectations, and the betrayal of trust.

Our worlds are not defined by a single boundary between public and private, but by a constellation of privacy, comprised of multiple, distinct contexts, each with their own set of participants and expectations. There are literally billions and billions of contexts worldwide, each of us participating in dozens, perhaps hundreds.

We’ve been figuring this out for a long time. Over thousands of years, we’ve developed social norms that help us navigate different contexts. Space and time are our most common tools, marking the boundaries between strip clubs, schools, churches, homes, bedrooms, and restrooms. Don’t interrupt the magician, don’t talk during mass, don’t make personal calls on company time. Sometimes it’s topical, like spelling out “S.A.N.T.A” to keep the magic of Christmas alive.  These social rules keep naked people out of the cafeteria and accountants out of our bathroom and thank goodness for that!

Today, we are faced with a rapidly growing digital domain with new boundaries and connections, where uncertain rules confound expectations. In a relatively short period, huge portions of our daily lives have moved online, into contexts that lack clear social norms. These online services are often interpreted and promoted as being far more discreet than they actually are. We post a photo to Facebook to share with our friends, forgetting for a moment that co-workers or students might also see that indulgent image. We post a political rant on our blog, only to later have it come up in a job interview. Our Foursquare check-ins get linked to Twitter without our realizing it… and now our location is in the public stream for anyone to find. To make matters worse, many of these services regularly release new features or change their privacy policies… the rules are not just uncertain, they keep changing.

Focusing on public versus private misses the point. The analog world was never that black and white, so why would we expect it to be that way online? Rather than opt-in or opt-out, track or do-not-track, we need a solution that allows us to participate with the same shades of gray we use in the rest of our lives. This isn’t about the end of privacy, nor is it about the inevitability of living in public. It’s about figuring out a new set of viable contexts with clear, understandable boundaries, rules, and participants. It’s about giving people as clear and simple control over their online social contexts as we have in the analog world.

We should be able to explicitly manage our online contexts: what we share, with whom, for what purpose, and with what constraints. Once we do that, the overly simplistic model of public versus private will yield to a beautiful constellation of privacy that is more understandable, more flexible, more realistic, and more empowering.

So, put down your pen and step away from the regulatorium. The last thing we need is half-baked black and white thinking turned into law.

Posted in Information Sharing, Shared Information, User Driven Services, Vendor Relationship Management | 4 Comments

Facebook as Personal Data Store

With over 150 million people using Facebook Connect every month at over 1 million websites, Facebook has ushered in a new era as the world’s largest personal data store.

Personal Data Stores

Personal data stores allow individuals to share online data with service providers. Facebook Connect users can give third-party web sites like Digg, Amazon, and YouTube access to information stored at Facebook, turning Facebook into a personal data store for over 500 million people.

What makes personal data stores special is the seamless sharing with websites for real-time personalization of the web. It’s more than just file back-up or synchronization.  It’s not just publishing “content” to our friends or the public. Personal data stores allow us to bring our information to websites when we want to. It’s a way to treat the user as the point of integration.

Personal data stores can be anywhere, shared with websites whenever we want. Consider giving FedEx Kinko’s a link to a Flickr account so they can download photos to print a new calendar. Or giving a new doctor permission to access our personal health history rather than filling out a paper form while we sit in the waiting room. Or giving a website access to our Outlook contact list on our desktop computer so they can give us birthday reminders and gift suggestions. The key is user-managed access, wherever the data lives. Facebook Connect gives this kind of access control over all the data we store at Facebook, enabling web-wide personalization built around the individual.

Mash-ups

In recent years, mash-ups and real-time APIs have made it easier and easier for companies to combine information from different services into a single user experience. Instead of building bigger and more complicated proprietary data silos, companies take advantage of services like Google Maps and IP-address geolocation, using real-time information to enhance their websites.

Some services are even built around other companies’ data: Twitter clients like Seesmic and Tweetdeck, which access our Twitter data on our behalf; Trillian, which works with various instant messaging networks; and Mint, which pulls in our financial data from hundreds of websites. The “real-time web” is constructed on the fly, using linked data and real-time APIs to dynamically customize services for each of us.

Personal data stores let us bring our own data to the mash-up party. Not only do we have better control over who sees what, we can provide more timely, higher quality data than service providers can get from other sources. Effective integration with personal data stores means no more ads for that car we’ve already bought; no more recommendations based on false assumptions. Unfortunately, data in the wild is constantly becoming outdated, miscopied, and misconstrued, because that’s the best companies can do using the billions of dollars worth of proprietary data that’s gathered about us rather than provided by us. Personal data stores easily allow individuals to give the most relevant, most up-to-date information to just those companies we want to do business with. That means not just better data, but more intimate relationships with our favorite companies and organizations.

Perhaps the most liberating aspect of personal data stores is that everyone gets to have as many as we want. We all have our favorite websites for different online activities. As those sites open up their data with a user-driven permissions mechanism, they become personal data stores. So, whether it’s YouTube for videos, Flickr for Photos, Foursquare for location updates, TripIt for travel plans, or RunKeeper for exercise data, we get to bring our best data with us wherever we go. Savvy websites pull in this high quality data to personalize our visits, while those with unique data open it up for use elsewhere to maximize value to their users, which is exactly what Facebook is doing with Facebook Connect.

Facebook Connect

Facebook Connect makes this kind of access simple for everyone, with industry-changing adoption rates. Over 66% of the top 100 websites and over 1 million total websites now integrate with Facebook in some way. Nearly 1/3 of Facebook users—over 150 million people—use Facebook Connect every month. Every time we do, we give websites access to information stored in our Facebook accounts, such as our name, gender, names of our friends, and all the posts currently on our wall or posted by us. It’s an archetypal personal data store, with highly credible and timely data in the form of our friend list and our status updates. Sure, Facebook Connect is still far too limited in the amount of information we can store and we lack control over how that information gets used… but architecturally, Facebook has changed the game for a vast portion of the World Wide Web.

To find out what information Facebook is sharing, I built a website called “I Shared What?!?”, an information sharing simulator for Facebook. The site uses JavaScript and Facebook Connect to display everything it can get from Facebook. Visitors see in specific detail exactly what they share when hitting the “allow” button in the Facebook Connect permissions dialog.
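The core of the site is a simple idea: enumerate whatever the permissions dialog just handed over. A sketch of that idea (the sample profile object is invented for illustration; the real site talks to Facebook’s JavaScript SDK in the browser):

```javascript
// Sketch of the idea behind "I Shared What?!?": list every field a
// Graph-style /me response actually handed over. The sample profile
// below is invented; the real site gets this data from Facebook.
function sharedFields(profile) {
  return Object.keys(profile).filter(function (k) {
    return profile[k] !== undefined && profile[k] !== null;
  });
}

var me = { name: 'Sample User', gender: 'unspecified', friends: ['...'], wall: ['...'] };
var shared = sharedFields(me);
// shared lists name, gender, friends, and wall
```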

Facebook uses open standard technology to bring mash-ups to a new level, built on information provided directly by the user, in real-time, with minimal fuss or bother. There are shortcomings, of course. A lot of them, but I’ll save those for future posts. For now, think of Facebook as the 800-pound icebreaker of a new way for companies to connect with their customers.

To this veteran VRM evangelist, Facebook has done more in 2010 to usher in the era of the personal data store than anyone, ever. In one fell swoop, Facebook launched a World Wide Web built around the individual instead of websites, introducing the personal data store to 500 million people and over one million websites.

Unexpectedly, Facebook has moved VRM from a conversation about envisioning a future to one about deployed services with real users, being adopted by real companies, today. We still have a lot of work to do to figure out how to make this all work right—legally, financially, technically—but it’s illuminating and inspiring to see the successes and failures of real, widely-deployed services. Seeing what Amazon or Rotten Tomatoes or Pandora do with information from a real personal data store moves the conversation forward in ways no theoretical argument can.

There remain significant privacy issues and far too much proprietary lock-in, but for the first time, we can point to a mainstream service and say “Like that!  That’s what we’ve been talking about. But different!”

Posted in Information Sharing, Personal Data Store, ProjectVRM | 12 Comments

Personal Data Stores, Exchanges, and Applications

Words matter.

In the last few years, we in the VRM community have been using the term “personal data store” as a cornerstone concept.  We all understood we were talking about the same thing: a VRM approach for allowing people to share personal information with third- (and fourth-) party service providers on their own terms.

Yet as our community grows, it has become clear that the Personal Data Store is but one aspect of a complete system, one we might as well call the Personal Data Architecture. As developers get more traction with their technology and the marketplace, it becomes increasingly useful to delineate the elements of the architecture rigorously.

To do that, Doc Searls, Craig Burton, Iain Henderson, Kaliya Hamlin, and I set some time aside after the VRM+CRM 2010 Workshop. Like the ongoing VRM conversation, it was a rolling evolution of a dialog. It started with just Craig & Iain, with the rest of us showing up and participating throughout the afternoon. The core agreement is the set of terms I present here.  I’ll post a follow-up with some new ideas in a few days.

These are the key definitions underlying the Personal Data Architecture at the core of VRM and all of our work:

Personal data store – a service where data is actually stored, with the ability to provision access to that data by permissioned applications; may be distributed and/or replicated across multiple physical stores.

Personal data exchange – a service where an individual manages personal data store permissions.

Personal data API – the interface a permissioned application uses for accessing personal data stores and personal data exchanges.

Personal data application – a service permissioned by an individual to access their personal data store for particular use of specific information. Also an “app”.

Service – an instance of software running on a machine.

Individual – a living human being.
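To see how the definitions fit together: the store holds data, the exchange holds permissions, and apps reach the data through the API. A toy sketch, with all names hypothetical and no resemblance to any real implementation:

```javascript
// Toy sketch of the architecture above: the store holds data, the
// exchange holds the individual's permission grants, and an app can
// only read through the API, which consults the exchange first.
function PersonalDataAPI(store, exchange) {
  this.read = function (appId, key) {
    if (!exchange.permits(appId, key)) throw new Error('not permissioned');
    return store[key];
  };
}

var store = { birthday: 'December 5', location: 'Santa Barbara' };
var exchange = {
  grants: { 'gift-app': ['birthday'] }, // the individual manages this
  permits: function (appId, key) {
    return (this.grants[appId] || []).indexOf(key) !== -1;
  }
};
var api = new PersonalDataAPI(store, exchange);
// api.read('gift-app', 'birthday') succeeds; 'location' throws
```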

Bonus Term (which wasn’t the topic of this particular meeting, but syncs up with the work Iain and I are doing over at the Kantara Initiative’s Information Sharing Work Group):

Information Sharing agreement – an agreement between the discloser and the recipient governing the permissioned use of specific personal information.

This language will continue to evolve, I’m sure. For now, this is a good snapshot for those of us building systems for sharing information between users and applications and for using that shared information.

So, hopefully, the next time you hear PDS, you’ll know what it is…

Posted in Uncategorized | 5 Comments

Looking for feedback on pRFP and Information Sharing

At the VRM+CRM workshop last month, we (the Kantara ISWG) released two papers for comment.

One on the Personal Request For Proposal (pRFP) Engagement Model and the other the Information Sharing Report.

The first is a look at the negotiation stage in the Car Buying Engagement Model, which paints a detailed picture of one person’s experience through the entire Customer-Supplier Engagement Framework for a new car.  Think of this as “car buying, VRM style.” In the pRFP Engagement Model, we do a deep dive on how Sally would use a pRFP broker to buy a new car.

The second is a report placing Information Sharing in the global context. Based on a comprehensive literature review by Mark Lizar, the ISWG takes a look at the historical conversations about privacy and data protection to illustrate how we see Information Sharing as part of an emerging solution to managing the increasing risks and challenges of individuals sharing personal information online.

If you have an interest, please take a look and give us some feedback. We’ll be incorporating input from the comment period starting next week, September 13, 2010 (since extended to September 27, 2010).

I hope to hear from you.

Posted in Information Sharing, Personal Data Store, ProjectVRM, Shared Information, Uncategorized | Comments Off on Looking for feedback on pRFP and Information Sharing

Asymmetry by Choice

Perhaps the most powerful form of information asymmetry is missing from JP Rangaswami’s post addressing whether the web is making us dumber. I agree with the core point of JP’s article, but I think he oversimplifies the argument on asymmetry in a way that misses something important about the power of information.

JP defines four types of informational asymmetry, which he argues is key for information to have power:

Asymmetry-by-access — You can get it, they can’t.

Asymmetry-in-creation — You create it, you can control access or have unique benefit from it.

Asymmetry-in-education — The information may be symmetrically available, but only “experts” can make effective use of it. This could also be called asymmetry-by-capability: the capability to utilize information more effectively than others.

Asymmetry-by-design — Take abundant information and design a system to create scarcity. For example, the iPhone (or Android) app store as the only (or dominant) way to get new apps on your phone.

JP goes on to argue that

This approach, asymmetry-by-creation, and its alter ego, asymmetry-by-design, are about creating artificial scarcity. This is fundamentally doomed. I’ve said it many times. Every artificial scarcity will be met by an equal and opposite artificial abundance.

With all due respect, I must politely disagree.  At first, I thought JP’s argument about asymmetry was flawed, but then I realized I was considering a fifth asymmetry that simply doesn’t fit his mold.

Asymmetry-by-choice – The information is shared with mutual agreement by all parties to respect certain limits, typically requested by the discloser, although often required by regulators. This asymmetry is typically bootstrapped from asymmetry-by-creation and maintained as asymmetry-by-access.

One example: I tell my therapist things because I know they won’t get revealed. The therapist agrees to keep that information in confidence because she knows that if she doesn’t, I won’t reveal it. After the fact, she keeps her promise because she knows that the ethical, legal, and financial consequences aren’t worth breaking it. This is a good thing.

A second example: non-disclosure agreements (NDAs). A receiving party agrees to respect the rights in confidential information in order to better understand the disclosing party’s business. Normally, the discloser wouldn’t be comfortable disclosing certain information, but that reticence would prevent the parties from pursuing mutually beneficial business interests. Only with assurances by the receiving party is the disclosing party comfortable revealing information that may ultimately be vital to forging a more sustainable, more meaningful, and more profitable relationship. The NDA allows the two parties to continue the conversation with a certain level of expectation about subsequent use of the disclosed information. This is a good thing.

These types of voluntary acceptance of asymmetry in information are the fabric of relationships. We trust people with sensitive information when we believe they will respect our privacy.

I don’t see abundance undoing that. Either the untrustworthy recipient develops a reputation for indiscretion and is cut off, or the entire system would have to preclude any privacy at all. In that latter scenario, it would become impossible to share our thoughts and ideas, our dreams and passions, without divulging them to the world. We would stop sharing and shut down those thoughts altogether rather than allow ourselves to become vulnerable to passing strangers and the powers that be. Such a world of totalitarian omniscience would be unbearable and unsustainable. Human beings need to be able to trust one another.  Friends need to be able to talk to friends without broadcasting to the world. Otherwise, we are just cogs in a vast social order over which we have almost no control.

Asymmetry-by-choice, whether formalized in an NDA, regulated by law, or just understood between close friends, is part of the weft and weave of modern society.

The power of asymmetry-by-choice is the power of relationships. When we can trust someone else with our secrets, we gain. When we can’t, we are limited to just whatever we can do with that information in isolation.

This is a core part of what we are doing with VRM and the ISWG. Vendor Relationship Management (VRM) is about helping users get the most out of their relationships with vendors. And those relationships depend on Vendors respecting the directives of their customers, especially around asymmetric information. The Information Sharing Work Group (ISWG) is developing scenarios and legal agreements that enable individuals to share information with service providers on their own terms. The notion of a personal data store is predicated on providing privileged information to service providers, dynamically, with full assurance and the backing of the law. The receiving service providers can then provide enhanced, customized services based on the content of that data store… and individuals can rest assured that law abiding service providers will respect the terms they’ve requested.

I’ll grant that asymmetry-by-choice is a form of artificial scarcity, in that it is constructed through voluntary agreement rather than the mechanics/electronics of the situation. But it is also about voluntary relationships, and that is why it is so powerful and essential.

Posted in Information Sharing, Personal Data Store, ProjectVRM, Shared Information, User Driven Services, Vendor Relationship Management | 4 Comments

Steve Blank at the Pescadrome in Santa Barbara April 14, 2010

Steve Blank, Entrepreneur and Author

Steve Blank, Silicon Valley serial entrepreneur and author of Four Steps to the Epiphany, will be presenting at the Fishbon Event Lab, Wednesday April 14 in Santa Barbara. The Event Lab starts at 7ish with a potluck barbecue, followed by the presentation at around 8:30.

Steve’s startup experiences include E.piphany, Zilog, MIPS Computers, Convergent Technologies, Ardent, SuperMac, ESL and Rocket Science Games. Total score: two large craters (Rocket Science and Ardent), one dot.com bubble home run (E.piphany) and several base hits.

After he retired, he wrote a book (actually his class text) about building early stage companies called Four Steps to the Epiphany. He now teaches entrepreneurship to both undergraduate and graduate students at U.C. Berkeley, Stanford University and the Columbia University/Berkeley Joint Executive MBA program, with “Customer Development” as the core theme in these classes.

In 2009, he was awarded the Stanford University Undergraduate Teaching Award in the department of Management Science and Engineering. The same year, the San Jose Mercury News listed him as one of the 10 Influencers in Silicon Valley. In 2010, he was awarded the Earl F. Cheit Outstanding Teaching Award at U.C. Berkeley Haas School of Business.

He has also given several well-received talks on “The Secret History of Silicon Valley“.

In 2007 Governor Arnold Schwarzenegger appointed him to serve on the California Coastal Commission, the public body which regulates land use and public access on the California coast. In 2010 he was appointed to the Expert Advisory Panel for the California Ocean Protection Council. He is on the board of Audubon California (and its past chair) and spent several years on the Audubon National Board. He is a board member of Peninsula Open Space Land Trust (POST). In 2009 he became a trustee of U.C. Santa Cruz and joined the board of the California League of Conservation Voters (CLCV).

The event will take place at the Pescadrome, 101 S Quarantina St, Santa Barbara, CA 93103.

Event: Steve Blank, entrepreneur & author
Date: April 14th
Time: Potluck BBQ ~7PM, talk at ~8:30PM
Location: The Pescadrome 101 S Quarantina St, Santa Barbara, CA 93103.

Fishbon is an artists’ collaborative based in Santa Barbara.

It should be a fun environment and good people. Please feel free to invite others. If you can RSVP to me at joe@switchbook.com, it would be appreciated.

Posted in Uncategorized | 1 Comment