Google sees the value of Free Customers

This is fascinating:
http://www.adweek.com/news/technology/google-bringing-trueview-ads-apps-games-147558

Google has an ad program on YouTube that lets users skip ads, and it is now extending that approach to other ad formats.

Even though it is the same old advertising game–something that could use some fixing–what’s impressive is that with the ad-skipping feature Google saw “a 40 percent reduction in the number of people who click away from a video when shown a pre-roll” ad.

It’s real-world proof that a free customer is more valuable than a captive one. Give people the freedom to leave and more will stay than if you had forced the issue.

I’ve done this myself. Initially, I’d be ready to leave the page because the content didn’t seem worth the extra delay of the ad. But then I’d see that if I waited just a few seconds, I could click past it. Not only does the ability to click past keep me from abandoning the video altogether, but in a few instances the opening bit was funny or intriguing or just interesting enough that I wanted to see the rest of the ad.

It’s a brilliant example of how giving people the freedom to leave can actually keep more of them around.


Badges for the Standard Label Kickstarter

We’ve been asked if we have any badges to help promote our Kickstarter for the Standard Information Sharing Label.

The answer is now yes!

If you’re a backer or just want to help promote the idea, put these babies on your website or blog or Twitter or Facebook and help get out the word!

We have less than two weeks left to rally support for a radical new way for companies to communicate about what they do with the information we share online.

Once we finish the Kickstarter, the link will go directly to our new website and will help promote the use of the Standard Label for websites everywhere.

Feel free to download the images or just use the URLs in your own img src attributes. We’ve also provided example HTML that links directly to http://standardlabel.org.

<a href="http://standardlabel.org"><img src="http://blog.joeandrieu.com/wp-content/uploads/2012/05/standard-label-badge.1.png" alt="I Support The Standard Label!" title="I Support The Standard Label!"/></a>
<a href="http://standardlabel.org"><img src="http://blog.joeandrieu.com/wp-content/uploads/2012/05/standard-label-badge.2.png" alt="I Support The Standard Label!" title="I Support The Standard Label!"/></a>
<a href="http://standardlabel.org"><img src="http://blog.joeandrieu.com/wp-content/uploads/2012/05/standard-label-badge.3.png" alt="I Support The Standard Label!" title="I Support The Standard Label!"/></a>
<a href="http://standardlabel.org"><img src="http://blog.joeandrieu.com/wp-content/uploads/2012/05/standard-label-badge.4.png" alt="I Support The Standard Label!" title="I Support The Standard Label!"/></a>

Rethinking Context

Insights from PII2012

The FTC Privacy Report makes it clear that context is the key to privacy. For example, notice and consent need not be presented and secured if the use is obvious from context: If you buy a book from Amazon, it’s clear they need an address to ship you the book.

But sometimes the context isn’t clear to the average user, even when it is obvious to developers. My Mom believes she doesn’t share anything on Facebook because she mostly just comments on other people’s posts. Ilana Westerman’s work shows the same disconnect: many people just don’t see their privacy exposure because they have simplified mental models of their own actions. They think what they’re doing isn’t the risky stuff, but they rarely realize how much they are actually revealing.

Making that harder are monolithic Terms of Service and Privacy Policies that bury the details arbitrarily far away from the point of exposure, and in confusing legalese.

The answer is some form of bite-size context management. For example, Smart Disclosure, which is the US administration’s language for greater clarity about risks of information sharing:

One of the most powerful versions of smart disclosure is when data on products or services (including pricing algorithms, quality, and features) is combined with personal data (like customer usage history, credit score, health, energy and education data) into “choice engines” (like search engines, interactive maps or mobile applications) that enable consumers to make better decisions in context, at the point of a buying or contractual decision

Or perhaps something along the lines of Personal Levels of Assurance, a term from AT&T describing piece-wise on-demand disclosure and consent.

This is also the approach behind the Standard Information Sharing Label, which lets you see, in simple, consistent terms, exactly what happens with the data you are about to share, before you share it. That instance of sharing defines the context in which the information may be used, and the label makes it easy for individuals to understand that context.

We aren’t compressing the entire Terms of Service and Privacy Policy for a given site, we’re presenting just the essential details about a particular instance of information sharing. Bite-size disclosure, right at the point of sharing, because nobody wants to read 47 pages of legalese.

We think that’s the right model for untangling the world wide web.


It all starts with sharing…

From kindergarten through our professional life, sharing binds us together as friends, colleagues, and collaborators, so perhaps it should be no surprise that online sharing through services like Facebook, Twitter, and email shapes our online social life. Yet sharing online is anything but simple.

The details of what happens with the information we share are often hidden behind long, complicated legal agreements that almost no one reads. If we’re lucky, they are spelled out in Terms of Service and Privacy Policy documents, sometimes buried out of view, other times thrust at us like ransom notes demanding that we accept or leave the site.

It doesn’t have to be that way.

Today, at the Internet Identity Workshop, we officially launch the Standard Information Sharing Label, which makes it easy for websites to say in simple, consistent language what they do with our information, so that individuals can make better decisions about what they share online.

The Information Sharing Work Group has published a draft specification defining the Standard Label and launched a Kickstarter project to finance its graphic design.

The Kickstarter has a brief video explaining the effort. The official press release is here.

The work is free to use and open to collaborators.

In all my years contributing to the VRM conversation, few projects have made me as proud as I am of the work behind the Standard Label.

Check it out. If you like it, please spread the word and consider chipping in to help take this work to the next level.


The World’s Simplest AutoTweeter (in node.js)

Last month, I set up a quick little autotweeter using Node.js to help me with Repeal Day Santa Barbara. I wrote a short blurb about it beforehand; here’s what actually shipped.

(Many thanks to the guys at the Santa Barbara Hacker Space for inspiring and contributing to this project.)

The plan was simple.

  1. Set up a free server at Amazon Web Services.
  2. Write a simple daemon that processes a queue of tweets, sending them to the RepealDaySB twitter account at just the right time.
  3. Write a bunch of tweets and schedule them.
  4. Run that daemon on the evening in question (December 5).

AWS

Setting up with Amazon was pretty easy. I created an instance at the “free tier” level with port 22 as the only inbound port, using an Ubuntu AMI (ami-6006f309). Installing node.js and emacs was simple once I connected using VanDyke’s SecureCRT, which handled the public key authentication like a charm. With that setup, it was pretty straightforward to start coding.  (I did need some other ports to explore some of the examples that turned out to be dead ends, but for the live service, all I needed was SSH access on port 22.)

The First Tweet

The next step was to work through the details to get node.js to send just one tweet to Twitter. A lot of the examples out there offer more functionality than I needed, using frameworks like ExpressJS to support authenticating into Twitter as any user.  But I didn’t need that. In fact, I didn’t want an interactive service. I didn’t really need a database and I didn’t need a real-time interface. I just wanted to tweet as me (well, as my RepealDaySB persona).

Twitter has pretty good support for this single-user use case:  https://dev.twitter.com/docs/auth/oauth/single-user-with-examples  If only they’d had example code for node.js…

The good news is that node-OAuth is the go-to library for OAuth on node.js, and after a bit of wrangling it did the trick:

So, the first thing I did was put my secret keys into twitterkeys.js

exports.token = '3XXXXXXXXX89-3CbAPSxXXXXXXXXXXy42A9ddvQkFs96XXXXXXX';
exports.secret = 'HHXXXXXesTKZ4bLllXXXXXXXXXX8zAaU';
exports.consumerKey = "XXXXXXXbgfJRXXXXXXXX";
exports.consumerSecret = "9XXXXXXXXXQJ9U8VuoNMXXXXXXXXX";

Then, I could import that file like this:

var keys = require('./twitterkeys');

And access it like this:

var tweeter = new OAuth(
  "https://api.twitter.com/oauth/request_token",
  "https://api.twitter.com/oauth/access_token",
  keys.consumerKey,
  keys.consumerSecret,
  "1.0",
  null,
  "HMAC-SHA1"
);

I did the same thing with my tweets in tweets.js, since I thought that might be useful:

module.exports =
[ {
  status:"test1",
  timestamp: "2011-11-5"
},{
  status:"test2",
  timestamp: "2011-11-7"
}];

And to access that,

var tweets = require('./tweets.js');

The astute observer will note my brilliant plan to use a timestamp for scheduling. We’ll return to that later.

To figure out what to do with my shiny new OAuth object I looked up Twitter’s API:

POST statuses/update  Updates the authenticating user’s status, also known as tweeting. To upload an image to accompany the tweet, use POST statuses/update_with_media. For each update attempt, the update text is compared with the authenticating user’s recent tweets. Any attempt that would result in duplication will be…

Easy. So the endpoint to call is the one documented at

https://dev.twitter.com/docs/api/1/post/statuses/update

And here’s the code that actually combines all that into my very first tweet from node.js.

var https = require('https');

var OAuth = require('oauth').OAuth;
var keys = require('./twitterkeys');
var twitterer = new OAuth(
  "https://api.twitter.com/oauth/request_token",
  "https://api.twitter.com/oauth/access_token",
  keys.consumerKey,
  keys.consumerSecret,
  "1.0",
  null,
  "HMAC-SHA1"
);

var tweets = require('./tweets.js');

var status = tweets[0].status;

var body = ({'status': status});

// url, oauth_token, oauth_token_secret, post_body, post_content_type, callback
twitterer.post("http://api.twitter.com/1/statuses/update.json",
  keys.token, keys.secret, body, "application/json",
  function (error, data, response2) {
    if(error){
      console.log('Error: Something is wrong.\n'+JSON.stringify(error)+'\n');
      for (var i in response2) {
        var out = i + ' : ';
        try {
          out += response2[i];
        } catch(err) {}
        out += '\n';
        console.log(out);
      }
    } else {
      console.log('Twitter status updated.\n');
      console.log(response2+'\n');
    }
  });

Data Store

At first I thought I’d use a database. There are plenty that are easily accessible from node.js and I even signed up for a free service that hosted CouchDB. CouchDB is attractive for node.js work because you can basically store JSON objects directly in the database. But that also got me thinking…

Maybe a database is overkill: too much capability for what I really needed. I don’t need to update the tweets during the evening. I don’t need to support simultaneous access. I don’t need speed or scalability. That’s when I realized I was thinking like a client-side developer–the world I do most of my javascript coding in.  With node.js on the server, I could just read and write to a local file!  Turns out it’s easy.  I should have thought of that earlier, given that I had already been reading my tweets with require(), but I hadn’t thought about being able to WRITE to the file to keep track of what had been tweeted.

Here’s how you do it. First, set up the path, using the __dirname variable to access the local directory:

var path = __dirname+"/tweets.js";

Then, to read the file:

var fs = require('fs');
fs.readFile(path,"utf8", function(err,data) {
  if (err) throw err;
  tweets = JSON.parse(data);
});

And to write the file:

fs.writeFile(path,JSON.stringify(tweets,null,4),function(err) {
  if(err) throw err;
  console.log("It's saved");
});

Date & Time

Now, about that timestamp.  I had to represent when I wanted each tweet to go out, and having been here before, I knew it could be tricky to make sure the server agrees on the timezone. JavaScript has a great Date() object which can parse ISO-8601 formatted dates (e.g., 2011-12-05T10:00-08:00), so I tried using that. Since the timezone is explicit in ISO-8601, it doesn’t matter what timezone the server is in, as long as the comparison uses a fully qualified timestamp. It took a bit of trial and error because the parser is pretty strict, but eventually, I got it.  However, that raw timestamp isn’t that easy to work with, so I used an old trick that I ported from Excel into Google Docs: put the data in a spreadsheet for editing, and use columns to format it into the right JSON. Then you can cut & paste the rows into a text editor, delete all the tabs, and get the format you need. Here’s the doc I actually used.

It worked like magic. I got to use spreadsheet math to track the number of characters remaining in each tweet and to schedule the dates. Things like

=B9+25/24/60

set a time that’s 25 minutes after cell B9, which made scheduling our tweets a breeze. With a bit of wrangling, I was able to get the easy-to-edit date and tweet on the left translated into the proper JSON & ISO-8601 format on the right.

After deleting the tabs, here’s what a resulting line looks like:

{"status":"Good Morning, Santa Barbara!  Happy Repeal Day, everybody! 78 Years ago, we lifted the chains of Prohibition, ratifying the 21st Amendment!","timestamp":"2011-12-05 T 10:00 -08:00"},

Add a bracket or two and clip the extra comma, and you’ve got a nice JSON array suitable for our javascript code.

Google Docs was especially nice because it made collaborating on the tweets super easy. My business partner and I had a great way to review and revise the set of tweets as we got ready for the main event.
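
Before moving on to the timing loop, here’s a minimal sketch (not code from the night itself; the variable names are just for illustration) of why the explicit offset in those timestamps matters:

// Because the offset is explicit in the ISO-8601 string, the comparison gives
// the same answer no matter what timezone the server happens to be set to.
var when = new Date("2011-12-05T10:00-08:00");
var now = new Date();

if (isNaN(when.getTime())) {
  console.log("the strict parser rejected that timestamp; check the format");
} else if (when < now) {
  console.log("past due; tweet it now");
} else {
  console.log("tweet due in " + (when.getTime() - now.getTime()) + " ms");
}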

Timing

The next trick was figuring out how to run the code so that it hummed politely along and sent the tweets when necessary. Since this was the only important process on my machine, I could’ve run a never-ending loop constantly checking the time, but that seemed inelegant, and after all, the point was to learn how to use node.js properly. What I wanted was to start the daemon and forget about it, letting it sleep and wake up just when it needed to send a tweet.

So, every once in a while, the daemon wakes up, builds a list of tweets that need to be tweeted (because the current time is after their timestamp), tweets them, and marks them as tweeted. Also, we keep track of the earliest timestamp in the rest of the tweets, so we can sleep the right amount of time.

Here’s how:

var now = new Date();
var next_time;
var tweet_now = new Array();
for(t in tweets) {
  if(tweets[t].tweeted)
    continue;
  time = new Date(tweets[t].timestamp);
  if(time < now ) {
    tweet_now.push(tweets[t].status);
    tweets[t].tweeted = true;
  } else if (!next_time || // either this is the first pass
      time < next_time) { // or this is a sooner timestamp than recorded
    next_time = time;
    console.log("setting next time = "+next_time);
  }
}

And then, just a bit of housekeeping. We tweet, save to file, and reset the timer.  The nice thing about saving to file is that if we have to kill the daemon, we’ll know which tweets have already been sent when we load the file back in at the start.

if(tweet_now.length) {
  tweet(tweet_now);
}
save_tweets();
if(next_time) {
  delay = next_time.getTime()-now.getTime();
  setTimeout(run_tweets,delay);
  console.log("Delay: "+delay);
} else {
  console.log("Done.");
}

And that’s pretty much it.  It’s quick and dirty, so I just dump the errors to console, which is great for debugging, but it may not be the best strategy. More on that later.

Here’s the complete file that we actually used the night of December 5, Repeal Day.

var fs = require('fs');
var OAuth= require('oauth').OAuth;
var keys = require('./twitterkeys');
var path = __dirname+"/tweets.js";
var tweets;
var auto_tweet = function() {
  console.log("auto_tweet");
  fs.readFile(path,"utf8", function(err,data) {
    if (err) throw err;
    tweets = JSON.parse(data);
    // tweets are only loaded once. If you change the file, restart
    run_tweets();
  });
};
var run_tweets = function() {
  console.log("run_tweets");
  //find all the tweets that happen before "now"
  // saving the soonest timestamp that is before "now"
  //mark them as "tweeted"
  //tweet them
  //save to file
  //reschedule
  var now = new Date();
  var next_time;
  // console.log("first next_time = " + next_time.toUTCString());
  var tweet_now = new Array();
  for(t in tweets) {
    if(tweets[t].tweeted)
      continue;
    time = new Date(tweets[t].timestamp);
    if(time < now ) {
      tweet_now.push(tweets[t].status);
      tweets[t].tweeted = true;
    } else if (!next_time || // either this is the first pass
      time < next_time) { // or this is a sooner timestamp than recorded
      next_time = time;
      console.log("setting next time = "+next_time);
    }
  }
  if(tweet_now.length) {
    tweet(tweet_now);
  }
  save_tweets();
  if(next_time) {
    delay = next_time.getTime()-now.getTime();
    setTimeout(run_tweets,delay);
    console.log("Delay: "+delay);
  } else {
    console.log("Done.");
  }
};
var save_tweets = function() {
  fs.writeFile(path,JSON.stringify(tweets,null,4),function(err) {
    if(err) throw err;
    console.log("It's saved");
  });
};
var tweet = function(tweets) {
   var tweeter = new OAuth(
    "https://api.twitter.com/oauth/request_token",
    "https://api.twitter.com/oauth/access_token",
    keys.consumerKey,
    keys.consumerSecret,
    "1.0",
    null,
    "HMAC-SHA1"
  );
  var body;
  for(t in tweets) {
    console.log("tweeting : "+tweets[t]);
    body = ({'status': tweets[t]});
    tweeter.post("http://api.twitter.com/1/statuses/update.json",
      keys.token, keys.secret, body, "application/json",
      function (error, data, response2) {
        if(error){
          console.log('Error: Something is wrong.\n'+JSON.stringify(error)+'\n');
        } else {
          console.log('Twitter status updated.\n');
        }
      });
  }
};
// Now start it up
auto_tweet();

Success

It worked, mostly. And it shipped on time. That was awesome. I made it as simple as possible. If I could have, I would’ve made it simpler. But that isn’t to say there weren’t problems.

Challenges

1. Too many libraries, too much functionality.

It took a long time to wade through the blog posts and tutorials on how to use node.js, OAuth, and Twitter. It’s great that there are so many approaches, many well documented. But I didn’t need all that. I just wanted something short and simple.  Maybe you’re looking for that too. If so, I hope it was easy to find this post.

2. Operations support

As I was out on the Repeal Day pub crawl, all I had was my Android phone to keep on top of things. Surprisingly, I was able to get an SSH client working, even using the public key authentication.  Unfortunately, the text was TINY.  And I couldn’t type the “|” character, making it impossible to use some of my favorite commands.  Apparently, that’s a well-known problem with my particular phone.  Also, the batteries got sucked dry REAL fast.  I had to resort to keeping the phone off most of the time. Even then, it died well before the tweets ran their course.

3. Twitter won’t send your own tweets to your phone

This was particularly annoying. My app was sending my tweets, but Twitter wouldn’t echo them to my phone, so I had to keep checking whether each tweet went out, either manually or by asking my partner.

Unresolved Issues

1. Process management was non-existent

So, not having dealt with server processes for a few years, I hadn’t fully thought through the fact that closing the session would kill my daemon. Next time I’ll try using Forever.

2. Server may have been unstable

In hindsight, I think the server crashed on me and I have no idea why. I should’ve piped the errors into a log file, which presumably could be done with a proper process-handling approach.  Killing and rerunning the daemon restored the service, but there was something fishy going on that I never got to the bottom of.
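
A minimal sketch of that idea, sticking with the plain fs module (the filename and helper below are made up for illustration, not what ran that night):

var fs = require('fs');

// Sketch only: append errors to a local file so a crash leaves evidence
// behind even after the SSH session (and the phone battery) is gone.
var logPath = __dirname + "/autotweeter.log";

var logError = function(err) {
  var line = new Date().toISOString() + " " + JSON.stringify(err) + "\n";
  fs.appendFile(logPath, line, function(writeErr) {
    if (writeErr) console.log("couldn't write to log: " + writeErr);
  });
};

Calling logError(error) from the post() callbacks, instead of only console.log, would at least have left a post-mortem trail on disk.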

3. Wacky characters didn’t paste well from Docs to JSON

Undoubtedly this was a Unicode encoding issue; it showed up in words with tildes or accents, of which there were quite a few given the exotic ingredients in some of the cocktails for Repeal Day. The best way around this would be to find a way to read from Google Docs directly. The second best would perhaps be to build some sort of interface instead of using Google Docs. Alternatively, I could debug the copy & paste process and see if I could isolate where the problem happened. Maybe it was in my SSH terminal, pasting into emacs, which might suggest that copying & pasting into a real editor locally and sending the file to the server would avoid the problem.

And Done

That’s it.  Perhaps it was a bit of overkill. There are plenty of free auto-tweeting services out there. But in addition to my doubts about how well they might work, I also thought it was a small enough use case for learning node.js. In that, it was a huge success.

Let me know if this was useful. I’d love to hear from fellow coders whether it helped you along in any way.


Kindle files to my iPad (Gutenberg eBooks)

How do you get a book at Project Gutenberg into your iPad Kindle app?

It’s easy.

Go to Gutenberg on your iPad and download the “Kindle” .mobi file.  It’ll automatically open in the Kindle app. Yay.

Don’t worry about the instructions out there for getting a .mobi file onto your iPad’s Kindle app; that route is a bit of a chore. As long as you have a URL for the .mobi file, just use the Safari browser to download it directly.  Once you open it, it will stay in your Kindle bookshelf until you delete it.

Works like a charm.


Playing in the Treehouse


For the last few months, I’ve been helping a friend find a good way to learn HTML. She’s an experienced professional designer… in fact, her website designs were winning awards as far back as 1994. But she finally realized that because she never learned the brick-and-mortar work underlying the web, she was hampered in building breakthrough designs.  As the web moves more and more toward interactive, real-time mash-ups, that is becoming even more true.

Unfortunately, the vast majority of options we considered were either way too expensive (like returning to Art Center to take a traditional college-level course), too focused on tools (learning yet another WYSIWYG editor is not what she’s looking for), or too scattershot (covering everything in so much detail it’s not clear where to start).

Enter Treehouse.  They just got a bunch of funding, which is why they crossed my RSS stream, and I immediately liked the idea: a flat monthly fee for unlimited access to a very focused, very straightforward set of online classes about web design and development. You start with the basics and work your way up to web guru. Plus, they have an iOS track that might be interesting later. If they were to add Android, that’d be cool, and can we imagine a future with server-side topics like Perl and PHP, or perhaps apps like WordPress or Node.js? I sure can.

To brush up on my fundamentals, I decided to try it out before recommending it to my friend…

It definitely lives up to the hype.

It is simple and straightforward, with just the right amount of quizzes and coding challenges. There are badges to earn and a registry for folks looking to hire students of the class. The incentives and engagement are nicely packaged in a smooth, easy framework. I’m an old hand at most of this stuff—I have already sent in a few bits of feedback correcting a few technical errors on the quizzes—but I am going to work through the series just to reconnect to the fundamentals.  I expect it to be fast, easy, and nicely relevant to some of my current projects.

So, if you’re completely new to HTML or want a nice foundation layer for your professional work, I couldn’t recommend Treehouse more.  I don’t often give such sterling endorsements, but I like what they’ve done.

Kudos to Ryan Carson and Team. Well done.


Towards a node.js Auto-Tweeter

I’ve been intrigued by node.js as a platform for highly scalable server applications written in javascript, and I finally found a super simple application I wanted to try with it: an auto-tweeter that would let me schedule future tweets to my own account.  I’m organizing a pub crawl for Repeal Day and I want a flight of tweets to go out during the crawl… without me or my partners having to do it manually.

I mentioned this to my buddies at Santa Barbara Hacker Space and we made it a collaborative project.  I’ll miss this week’s WebTech Wednesday session, but perhaps a write up will help keep the conversation going.

The idea is pretty simple.  There are three components.

  1. The tweet engine
  2. The datastore
  3. CRUD Interfaces to the data store

The tweet engine will query the datastore every so often and see if there are any tweets that need to be tweeted (because their tweet_by timestamp is in the past).

The datastore will hold the tweets and their tweet_by time, stored as UTC timestamps.

The CRUD interface will let us create, read, update and delete those tweets.
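
To make that concrete, here’s a rough sketch of what a record and the engine’s check might look like. The field names echo the plan above (tweet_by) and the script that eventually shipped (status, tweeted), but nothing here is a settled schema:

// Illustrative only: one queued tweet per record, with a UTC tweet_by time.
var queue = [
  { status: "Happy Repeal Day, Santa Barbara!", tweet_by: "2011-12-05T18:00:00Z", tweeted: false },
  { status: "First stop on the crawl opens now.", tweet_by: "2011-12-06T02:25:00Z", tweeted: false }
];

// The engine's core question: which tweets are past due and not yet sent?
var now = new Date();
var due = queue.filter(function(t) {
  return !t.tweeted && new Date(t.tweet_by) < now;
});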

Future ideas:

  1. Use Twitter authentication for accessing the app
  2. Bulk uploader for multiple tweets.
  3. Support multiple users
  4. Support queues of tweets, which get tweeted at a regular schedule
  5. Support additional entity data with the tweets, such as geolocation

Issues:

  1. Managing the timestamp will take some thought. The CRUD interface should use localtime—with an option to override the timezone—but store UTC in the database. That will avoid some common problems with tweets going out on the server’s time zone when the user thought it was set for theirs.
  2. This first version will, unfortunately, be totally exposed to anyone who knows the URL. We’ll add twitter authentication as soon as possible.
  3. Timing the engine will take some finesse. We could just poll the database, but that wastes a lot of cycles.  Instead, I’d like the engine to schedule itself to run at the next tweet_by time and have the CRUD either wake the process up early or kill & restart it.
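
Here’s a rough sketch of that wake-early idea, assuming a run_tweets-style function like the one in the script above; the names are illustrative, not settled:

var pending = null;

function run_tweets() {
  // check the queue and send anything that's due, as in the shipped script
}

// Reschedule the single pending timer whenever the CRUD side adds or edits a
// tweet, so a sooner tweet_by time wakes the engine up early.
function schedule(nextTime) {
  if (pending) clearTimeout(pending);
  var delay = Math.max(nextTime.getTime() - Date.now(), 0);
  pending = setTimeout(run_tweets, delay);
}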

That’s it for the start. More as it happens.


Trust Me… Things Change.

Trust is complicated. But for some reason, online trust mechanisms assume it is outrageously simple.


For example, firewalls imply that once you’re in the network, you’re trusted. It’s baked into the framing of the problem. Similarly, Trust Frameworks assume that once you are in the Framework, you’re trusted (although you could build a framework that is dynamic). Even a user directed approach like Facebook Connect assumes that once you click “allow”, you trust that website to use your information appropriately, essentially forever… even if you revoke that permission later.

Trust isn’t broad-based and it isn’t static. It is directed and dynamic.

Think about it. We don’t trust our accountant to babysit and we don’t trust our babysitter with our finances. Trust is given for specific purposes and in specific contexts and it changes as quickly as we can fire that babysitter.

We trust the receptionist at the doctor’s office with our written medical histories because he is behind the counter, apparently employed by the doctor who needs that information to do her job.  We trust the bartender with our credit card because she’s behind the bar serving drinks and we accept that it will be kept safe and not used until we close out the bill.  But we wouldn’t give that receptionist our medical history if we met him in a bar later that evening, and we wouldn’t give that bartender our credit card if we met her as a fellow patient in the doctor’s office the next day.

We trust people to do specific things—or not to do certain other things—and that trust is based on the context in which we give it and the state of our relationship with the trusted party.

That means that just like our relationships, trust changes over time. Trust systems need ways to discover that trust should change and allow for that change to be managed. Reagan put it perfectly, “Trust but verify.”

When verification fails, trust changes.

Whether it’s a romantic partner, a subcontractor, a company, or a top-secret agent, trust is granted incrementally. When it is lost, it is often destroyed all at once.

Incremental trust happens all the time. We don’t like logging in just to view a web page, but we don’t mind doing it to see confidential information like order history. We aren’t comfortable handing over our credit card just to enter a store–the relationship isn’t ready for that yet–but we don’t mind once we start the checkout process.

When we lose trust, we sometimes throw the jerks out on the street. Betrayal is an unfortunate fact of life; it also has great significance for how we handle online trust. How do we “break up” with service providers? Revoking consent and demanding that our data be purged is an obvious need, but one that is often obscured or impossible.  As our relationships change, our trust changes. Yet our digital trust models mostly don’t.

Online trust models assume that trust is binary, broad, and stable—that you either have it or you don’t—for one simple reason: because it’s easy to implement.

When we log into a website with Facebook Connect, Facebook verifies that we want to share information with the website. However, there is no way for us to modify the permissions. We can’t say what use is allowed and what isn’t. We can’t pick and choose which data they get. We can’t ask for additional consideration. And we can’t put a time limit on access. Facebook’s interface presumes all-or-nothing and forever, for anything. But what we’d really like is something like this:

“You can write to my wall, but only for messages I explicitly approve. You can have my email address but only for account maintenance, not for “special offers” from you or your associates. You can’t have access to my home address. You can use the photos tagged “public” for one month after I post them, but I want a revenue share from any money you make from them. Ask me another time about reading my inbox.”

In order for our trust model to support transactions like this, it needs to be specific and flexible. It should not only let us direct our trust to specific purposes, it should make it easy to moderate that trust as our relationships evolve.
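
To make the contrast concrete, here’s a purely hypothetical sketch of what a directed, revocable grant might look like as data. None of these field names come from Facebook or any real API; they just restate the wish list above in structured form:

// Hypothetical only: a grant scoped to specific data, purposes, and time,
// rather than an all-or-nothing, forever "allow".
var grant = {
  grantee: "example-site.com",                 // who gets access
  data: ["email", "public_photos"],            // which data, and nothing more
  purposes: ["account maintenance"],           // what it may be used for
  excluded: ["home_address", "inbox"],         // explicitly off the table
  conditions: { public_photos: "revenue share required" },
  expires: "one month after posting",          // trust is granted for a while, not forever
  revocable: true                              // and it can be withdrawn later
};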

Lawrence Lessig famously said “Code is Law”. Trust models like Facebook’s, and the code behind them, make it nearly impossible for sites to allow the kind of user-driven permissions we need. While our relationships evolve, the current platforms are actually too brittle for developers to implement flexible, user-respecting approaches to privacy and permission unless they are willing to jump through hoops and hack around arbitrary technical limitations.  We need a new code base that actually makes it easy for developers to do the right thing, rather than code that enshrines restrictive and disempowering practices as strongly as if the law made them mandatory.

Because the one thing I know is that tomorrow will be different, and the harder it is for developers to support changing relationships, the harder it is for the entire ecosystem to respond to changing needs.

In short:

Stop the monolithic permissioning ceremonies!

Trust evolves.

Deal with it.

Until we do, online trust will remain brittle and untenable for our most important, powerful, and profitable relationships.


Fourth Parties are agents. Third Parties aren’t necessarily.

“Fourth Parties” is a powerful but sometimes confusing term. In fact, I think Doc mischaracterized it in a recent post to the ProjectVRM mailing list.

Normally, I wouldn’t nitpick about this, but there are two key domains where this is vital and I’m knee deep in both: contracts and platforms.

Doc said:

Like, is the customer always the first party and the vendor the second party?

Well, no. So, some clarification.

First and second parties are like the first and second person voices in speech. The person speaking is the first person, and uses the first person voice (I, me, mine, myself). The person being addressed is the second person, and is addressed in the second person voice (you, your, yourself).

And

To sum it up, third parties mostly assist vendors. That is, they show up as helpers to vendors.

The first point is great, and if you continue this further (and make the leap from parties to data providers), you get something like this:

The ownership of “your” and “my” data is usually clear. However, ownership of the different types of “our” data is a challenge at best.  To complicate matters further, every instance of “my data” is somebody else’s “your data”. In every case, there is this mutually reciprocal relationship between us and them. In the VRM case, we usually think of the individual as owning “my data” and the vendor as owning “your data”, but for the vendor, the reverse is true: to them their data is “my data” and the individual’s data is “your data”. Similar dynamics occur when the other party is an individual. I bring my data, you bring your data, and together we’ll engage with “our” data. We need an approach that respects and applies to everyone’s data, you, me, them, everybody.

That’s from my post on data ownership. The trick is that 1st party and 2nd party perspectives are symmetrical.  We are their 2nd party and they are their own 1st party. Whatever solution we come up with in the VRM world needs to work for everyone as their own 1st party. Everyone. Including “them”. Including Vendors.

In fact, that’s the only way we can get out of the client-server, subservient mentality of the web. It’s also the only way to make sure that our solutions work even when the “vendor” is our neighbor, our friend, or our family.

This is particularly clear in the work we are doing at the Kantara Initiative’s Information Sharing Work Group. We are creating a legal framework for protecting information individuals share with service providers. As such, it’s vital that the potential ambiguities of language are anchored in rigorous definitions. And what has emerged is that every transaction is covered by a contract between two parties. Not three. Not four. Not one. Two. And to the extent that third (or fourth) parties are mentioned, they are outsiders and not party to the contract. Since we are building a Trust Framework, there is a suite of contracts covering the different relationships in the system, but the legal obligations assumed in each contract have clear and unambiguous commitments between the first and second parties only.

Platforms

But where I think Doc’s framing most needs a bit of correction is the idea that third parties work for the second party. Historically, third parties have never been presumed to be working for the second party, not in the vernacular and not in any legal context. That presumption only emerges once you add a Fourth Party claiming that it works on behalf of the user. That is, 3rd-party-as-ally-of-the-2nd-party is a corollary of the Fourth Party concept, not a foundation for explaining it.

Take Skype, which I have on my Verizon cell phone. In the contract with Verizon, Skype is a third party application and Skype, Inc. is the third party.  But Skype isn’t working on Verizon’s behalf.

This is true not only for 3rd party applications whose value proposition is clearly at odds with the 2nd party’s; it is even more true when it comes to platforms, and especially when you consider the relevance of VRM as a platform for innovation.

In every platform, there are third parties who create apps that run on the platform. Microsoft built Windows, but Adobe built Photoshop. Apple built the iPhone, but Skype built Skype.  For platforms to be successful, they necessarily bring in 3rd party developers to build on top of the platform. These developers aren’t necessarily working on behalf of the platform provider, and it would be a miscarriage of alignment to claim that they are. They are out for themselves, usually by providing unique value to the end user. Some new widget that makes life better.

This becomes even more true when you are dealing with open platforms, or what I called Level 4 Platforms (building on Marc Andreessen’s The 3 Platforms You Meet on the Internet). In open platforms, you actually have 3rd parties helping contribute to the code base of the platform itself.  Netscape adds tables to HTML. Microsoft adds the <marquee> tag.  But here, it is even crazier to imagine that these 3rd parties are acting on behalf of the platform party… because there really isn’t a platform party. Nobody owns the Internet.

I think the right way to think about 4th Parties is that they have a fiduciary responsibility to the 1st party, while 3rd parties may or may not.

Fourth Parties answer to the 1st party.

3rd Parties might not answer to anyone.
