Netizen Developer

There is a low-grade market war going on in web augmentation services, part of a huge shift in how developers and users perceive the web.

The Web used to be about pages, then applications, followed by mashups. Today the interesting action is in augmentation.

The leading edge of the Web first moved from static pages to database-driven applications, where user interactions dynamically changed the presentation of content. That is rapidly giving way to multi-site mash-ups and interconnections through APIs and webhooks that, for example, allow one to dynamically use Flickr pictures elsewhere, allow Twhirl to post to Twitter, and allow users to log in to new websites with their Yahoo! ID.

The mashup/API culture knows that not every website can be the best at everything. Instead, mash up those that are the best into custom-combined web pages. The shift away from monolithic web services began here, applying multi-source content to a centralized experience. It was still predicated on users visiting a central website, but it was a start at redefining the perspective from which the web should be constructed.

Web augmentation takes that one step further, moving the locus of control into the browser.

It changes the context of value from a website with widgets on webpages, to capability that travels with the user to every web page they visit, improving the user experience no matter where they go. This enables multi-source/multi-destination content with a distributed, yet integrated experience.

Instead of integrating mashups at the point of the “hosting” web page, augmentation integrates at the point of the user, through the browser, while users visit anywhere online. Moving towards truly user driven services, web augmentation gives priority to the user’s experience rather than website owners’ goals.

For example,

  • Ad blockers remove ads from web pages, anywhere
  • Pop-up blockers stop annoying javascript pop-up windows
  • SkypeOut turns any phone number found in a web page into a button that launches Skype to call that number
  • Google Toolbar’s AutoLink button automatically links addresses to online maps, package tracking numbers to delivery status, VIN numbers (US) to vehicle history, and publication ISBN numbers to Amazon.com listings.
  • Adaptive Blue’s Glue uses a “topbar” to augment pages with social context about the content of the page you are on: friends’ reviews, recent visitors’ comments, etc.
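Most of the examples above share one underlying move: recognize a token in the page and rewrite it in place. Here is a minimal sketch of the SkypeOut-style pattern in plain JavaScript; the regex and the callto: markup are my own illustrative assumptions, not Skype's actual implementation:

```javascript
// Sketch of a SkypeOut-style augmenter: find phone numbers in page
// HTML and wrap them in a clickable callto: link. The regex and
// markup are illustrative only.
const PHONE_RE = /(\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b/g;

function augmentPhoneNumbers(html) {
  return html.replace(PHONE_RE, (match) => {
    const digits = match.replace(/[^\d+]/g, "");
    return `<a href="callto:${digits}">${match}</a>`;
  });
}

// A browser extension would run this against the live page; here we
// apply it to a plain string.
const page = "<p>Call us at (555) 123-4567 for details.</p>";
console.log(augmentPhoneNumbers(page));
```

In a real extension this runs against the live DOM rather than a string, which is precisely why multiple augmenters manipulating the same page can collide.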

As I wrote about previously, Kynetx is also getting into this game, as is Azigo with their RemindMe service. I like both Kynetx and Azigo. I know the folks behind those efforts and believe they are “fighting the good fight”, using user-centric identity to provide advanced and improved user experiences. SwitchBook is also a web augmentation service, one which doesn’t rely on modifying web pages as Ad Blockers, SkypeOut, and Kynetx do; we use a toolbar approach more like Adaptive Blue’s.

It’s a matter of being a good netizen.

One great thing about the Internet is that anyone can use it. And once you enable a capability on the Internet, anyone can do it. Then, when successful, everyone will do it.

So the social ethics question we have to ask ourselves as developers is “What if everyone does do it?”

What if everyone adopts Ad Blockers? What if all my desktop apps try to modify phone numbers on web pages? What if everyone writes web augmentation services that amend, inject, and otherwise manipulate the web pages we see online?

It isn’t about legal questions—which is apparently what killed Microsoft’s “SmartTags” initiative.

It comes down to a question of open systems. Open systems that work, work when everyone does it, because that’s where you get game-changing economies of scale. Open systems are all about the network effect, and the network effect only happens if the value of the system increases as more and more people use it.

  • What happens if everyone uses TCP/IP? Whoohoo! Seamless interconnected networks.
  • What if everyone uses SMTP, POP, and IMAP?  Yes! You can email anyone, anywhere, anytime!
  • What if every company, government agency, and organization uses HTML and http to build online services for their users? Mega efficiency. 24 hour engagement. Low-cost quick answers. Happier people and happier organizations.

Those are good open systems.

But what about these?

What if everyone were to use ad-blockers to completely block every ad they see online, banners and text–everything?  Google would go out of business. The New York Times online would have to go back to a paywall. Vast chunks of the online content business would collapse, because those same ads are what pay the bills.

What if every company that wanted to “augment” your web experience started inserting content, buttons, and javascript into web pages? Even assuming the augmentation is only done by services you trust and appreciate—just those companies, organizations, or movements that you want helping you—we still have a veritable cacophony of conflicting augmentations. What happens when your library, Borders.com, and your favorite local bookstore all want to “augment” a listing of 1984 by George Orwell? And I haven’t even started on the list of folks wanting to tell you about movie versions, plays, online videos, derivative works, Wikipedia articles, and discussion groups about the book.

Sure, one or two million people using ad blockers isn’t going to put anyone out of business. Nor will the first few intrusive web augmenters. That’s not the point. The point is, how do we build systems that not only work when everyone uses them, but actually gain in value when that happens?

As netizen developers, we have an obligation not just to do what makes us money, or even what makes users happy, but to build systems that work at Internet scale, when everyone does it. If the systems we build don’t work when everyone tries to get into the game, then we are just being selfish, hoarding value just because we are first-to-market.

Think about pop-up blockers. On the one hand, pop-up blockers break certain websites. Especially, it seems, sites keen on opening windows for editing or sending a message. So pop-up blockers aren’t the ideal solution… there is friction when people choose to use them. Yet ask the netizen developer question and the answer is pretty straightforward: “What if everyone used pop-up blockers?” Sites that currently use pop-up windows would redesign to work within the browser rather than popping out new windows. In fact, most of the economic friction with pop-up blockers is in the middle way, with just some users using them and others not. Arguably, the world would be a better place if everyone were to use pop-up blockers. So pop-up blockers aren’t ethically problematic; rather, they are an incomplete solution to a tricky problem.

Being a good netizen requires thinking about these issues, just as being a good citizen means thinking about how private actions affect the public good. To build out this next generation of identity-enabled web augmentation services, we would all do well to think through what happens when everyone does it.

Finally, although Adaptive Blue and SwitchBook both use a toolbar approach to augmentation—without manipulating the underlying web pages—similar issues challenge us as we aim for Internet scale. There are only so many toolbars users will install. Each borrows screen real estate from the core web experience. This gets even worse when you consider augmenting web experiences on a mobile device. The mind boggles at that challenge.

Ultimately, what we need is an open system that allows all of these types of augmentations from Adaptive Blue, SwitchBook, Kynetx, Azigo, Google, Skype, and others, to mingle smoothly in the same interface.

We (SwitchBook) haven’t begun to solve that problem, but we look forward to working with the rest of the open community to figure out how to make it work. At the end of the day, the collection of interfaces and services that provide the most value to users is going to win. Everything between here and there is just wasted development dollars, even if it generates millions in profits for those fighting the tide.

In an open world, the best solution eventually rises to the top. Let’s see if we can speed that up and stop wasting money on closed, proprietary, unscalable solutions.

Posted in User Driven, User Driven Services, Web Augmentation

One Night Stand worth $300 Million

In the ProjectVRM Standards Committee discussions, we’ve talked quite a bit about a “One Night Stand” use case, where a personal data store is used with an online retailer and all personal data is erased–as much as possible–after the transaction.

The premise is simple: if users know they are safe giving personal data, they will give it more freely. Limits on long term data mining (and its attendant offensive behavior of junk mail, spam, and telemarketing) paradoxically increase data sharing and enhance the ability of vendors to provide more meaningful engagement at the moment of the transaction. Less long term data retention leads to more real-time data provided by users, resulting in better customer experiences, and more profit for vendors.
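To make the premise concrete, here is one way a transaction-scoped personal data store could be sketched. Everything here, the class name, the API, the fields, is hypothetical; it only illustrates the idea of sharing data for one transaction and retaining nothing:

```javascript
// Hypothetical sketch of a "One Night Stand" personal data store: the
// vendor gets the requested fields for the duration of one transaction,
// after which the shared copy is erased.
class OneNightStandStore {
  constructor(profile) {
    this.profile = profile; // rich personal data, held by the user
  }

  // Expose only the requested fields to the vendor's callback, then
  // erase the shared copy regardless of outcome. (In a real service,
  // the vendor-side record would be purged too.)
  transact(fields, vendorCallback) {
    const shared = {};
    for (const f of fields) shared[f] = this.profile[f];
    try {
      return vendorCallback(shared);
    } finally {
      for (const f of fields) delete shared[f]; // nothing retained
    }
  }
}

const store = new OneNightStandStore({
  name: "Alice",
  shirtSize: "M",
  card: "4111-XXXX", // placeholder
});
const receipt = store.transact(["name", "shirtSize"], (data) =>
  `shipped 1 shirt, size ${data.shirtSize}, to ${data.name}`
);
console.log(receipt); // → shipped 1 shirt, size M, to Alice
```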

Until recently, this was a theoretical argument, a belief by those of us promoting VRM. As Doc Searls puts it, “A free customer is more valuable than a captive one.”

Now we have evidence of just how valuable that can be.

Jared Spool shares with us the real-world example of a redesign in the direction of the “One Night Stand” that created $300 million in value in the first year: [excerpt edited for brevity. see full article for details]

Now that’s real money.

Hat tip to iface thoughts.

Posted in Intention Economy, Personal Data Store, ProjectVRM, Vendor Relationship Management

Paper Prototype Rocks

Last week we used a prototyping technique that has changed the way I look at development: paper prototyping.

I had heard a bit about this before and it sounded great… but there’s always such a drive to just start coding! Before I read Carolyn Snyder’s excellent Paper Prototyping, I hadn’t realized what I was missing.

The key difference from what you might be thinking–and definitely what I was thinking–is that paper prototyping isn’t just about doing mockups of the UI and asking users for feedback. It means recreating a subset of the core user experience using a paper-based model, with a human “computer” and the user interacting not with the facilitators or testers but directly with the pieces of paper: pointing for hovering, touching for clicking, and writing for typing.

The results were amazing.

We took two weeks to develop & construct a paper prototype for our upcoming Search Map software, which is the first implementation of the SwitchBook approach to User-driven Search (which is the focus of our work with the VRM community). That was followed by one week doing the actual testing.

To build out a paper prototype like this you have to have a critically specific definition of what it is you are building. Depending on how early you are in the gestation of your software idea, this might be impossible. But if you have a pretty good idea of the basics, the paper prototype will force you to

  • write down a subset of tasks in sufficient detail for the user and the prototype to work through with paper assets, and
  • create the pieces of paper that represent every window, every menu, every dialog box, every state in the interface.

It felt like we went through three revisions of our core UI before we actually had any users. We found there were so many assumptions hidden in our initial requirements documentation that we were missing key parts of the user experience. And then, by mapping out those parts of the experience, we made significant leaps in simplifying and unifying the overall design.

Prior to the actual testing, we did a dry run with a fellow associate as our user. This was eye-opening. Not only did we identify a few assets we were missing, we also realized we had overlooked several key assumptions in our design. These realizations led us to revise the paper prototype significantly, leading to an even better run-through with the live testers.

This pattern repeated itself for every user. Snyder strongly suggests doing just a handful of tests, no more than two per day. So, 4-6 testers over two to three days. After two weeks building the prototype, I was thinking we should have more testers to really get the value out of this thing. But after the trial run, I realized that there is so much opportunity to update & improve between tests that you want to immediately assimilate each session and update the prototype.

That is the real beauty of the paper prototype.  Because it is just paper… and in particular just sketches on paper–not a beautifully designed custom UI–it is trivial to change. We were evolving in leaps and bounds between every test, taking out portions of the interface, creating new icons, adding buttons here and removing them there, even updating dialog boxes in mid-test.

It would be hard to overstate how powerful it was to engage in such rapid evolution based on real user feedback. With ~3 revs in the construction phase, 1 in our trial run, and 4 more from the test sessions, I estimate a good 8 substantial revisions to our design in just three weeks.

Our timing for this exercise was just about perfect. We had just started a six-month dev cycle in January, after over a year of internal development (largely on the server side, with a basic architectural prototype on the client). The core experiential basics had been fairly stable, and we were ready to integrate a lot of conceptual learning into a new rev. So we had a lot of detail about what we wanted to do, and were at a good point to take a moment, document our requirements rigorously, and sit down with users to see what really works. For the record, we spent about three weeks in requirements engineering prior to working on the paper prototype (which was another three weeks of effort). I don’t know if I’ve ever spent a more useful six weeks in any programming project.

We also quickly saw the limitations of this approach. Of course, it was slow. We joked that the McKinney 5000 was operating at about 1/10 Hz. (Sam McKinney acted as our “human” computer for the testing—and did a great job, I must say.) A good portion of our user experience depends on finding “flow” during advanced searches across many different websites and different search providers. Just keeping track of the user’s behavior was a breakneck task… simultaneously updating the screen to indicate the real-time feedback and recommendations from our server was even harder. The result is that although we got to test our core workspace, the technique was too slow to really test the workflow.

So, Carolyn Snyder, thanks so much for the easy and thorough guide to paper prototyping. It was an amazing exercise for our whole team. And for those of you who are working through the details with disruptively innovative software, I encourage you to try it. I think you too will be amazed at your results.

p.s.

My apologies that we didn’t get any pictures of the experience. We were so busy doing it, nobody stopped to capture the look & feel of the interactions.

Posted in Search, User Driven Search, Vendor Relationship Management

Kynetx takes on Structured Browsing

Doc Searls recently brought my attention to a White Paper by Phil Windley about his company, Kynetx. It does a good job explaining the thinking behind their architecture, and raises some questions that, for me, challenge some underlying assumptions and business choices.

Problem Domain

The distributed nature of the web is a big part of its power–nobody needs to ask permission from a central authority to use it or create with it. However, that disaggregation limits the cohesion for sophisticated uses, leaving users to cobble together ad-hoc mash-ups of value from multiple, diverse service providers.

For example, the average travel planner spends 29 days from their first query to their first purchase. No tool I know of facilitates that entire process effectively.

Solving this problem in a general way—while retaining the authority of the individual and the flexibility of open systems—is perhaps the greatest opportunity for VRM. The personal data store and VRM relationship services are two prongs of an architectural shift for enabling this kind of aggregation while remaining open. Once you put the user in the driver’s seat, with coherent controls over the flow and the data, the experience can integrate around the user, even as they drive anywhere on the Internet.

Solution

Kynetx’s solution is built on one primary capability:

A rules engine (and language) for contextual customization based on strong identity-based claims, using user-centric identity in the form of Information Cards.

This puts Kynetx squarely in the web augmentation service business. Adaptive Blue (and their Glue product) is perhaps the most sophisticated approach to this space, but Yahoo’s Toolbar also augments web pages, as does Skype (putting its SkypeOut button on any phone # it recognizes), and the granddaddies of all web-augmentation services are the ad blocker plug-ins that remove banner ads on websites.

I distinguish web augmentation from web media enhancements, like PDF and Flash and Java, in that the latter are embeddable or downloadable extensions to the core HTML/http architecture of the web, while augmentation services provide third-party manipulation of website presentation on behalf of the user. They actually tweak the web page as the user sees it, rather than offering websites a new way to package content or functionality.

Web augmentation isn’t new, but it is gaining adoption and breadth. There is a low-grade market war going on in this space. Browsers define the official battleground of the World Wide Web; augmentation services are the guerrilla warriors of next-generation browsing. The approach that reaches ubiquity first will create significant value throughout the architecture: for users, software vendors, and service providers.

So, the question that comes to my mind is where does Kynetx fit into all of this?

The value proposition of a rules engine for customization is powerful, if that engine makes it easy to leverage strong identity. Every website will, imo, want to take advantage of the unique value of user-centric identity, and Information Cards in particular. However, rewriting your customization to do that will take resources, and that will slow adoption. If Kynetx can simplify how websites plug in to the identity meta-layer, that sounds like real value.

Gaps

There are, however, several gaps that I see in Kynetx’s approach as mapped out in the white paper.

First, who are the target developers: websites or third-party services? Or both?

It’s not clear to me if the primary authors of KRL rulesets (and hence Kynetx’s customers) will be the destination website developers or third-party augmentation services. For example, Adaptive Blue’s Glue augments web pages so that things like movies can be recognized across domains for social commentary, ratings, and sharing. That means that Glue modifies the presentation of web pages at IMDB, Netflix, Amazon, Blockbuster, etc. In this pattern, it is the third party, Glue, that would be running KRL rulesets, not the websites.

Is this the intended architecture for Kynetx? Is the point of the Kynetx Information Card to provide authorization by the user to allow services like Glue to augment their web experience, while the rest of the plug-in handles injection into the web page within the browser?

Or, is the main point that web services themselves would leverage Kynetx’s Information Card approach to manage third party identity for customization? For example, so Hertz could seamlessly provide AAA or AARP discounts if, and only if, the appropriate AAA or AARP information cards (KIX) are presented by the user? In this case, Hertz writes the customization, but doesn’t need to know upfront what the user’s affiliations might be.
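A generic sketch of that second pattern in plain JavaScript (this is not KRL; the rule shapes and discount numbers are invented): rules fire only for whichever cards the user actually presents, so Hertz never needs to know the user's affiliations up front.

```javascript
// Hypothetical identity-based customization rules: each rule fires
// only when the matching card (claim) has been presented by the user.
const rules = [
  { requiresCard: "AAA",  apply: (quote) => quote * 0.90 },
  { requiresCard: "AARP", apply: (quote) => quote * 0.95 },
];

function customizeQuote(baseQuote, presentedCards) {
  let quote = baseQuote;
  for (const rule of rules) {
    if (presentedCards.includes(rule.requiresCard)) {
      quote = rule.apply(quote);
    }
  }
  return quote;
}

// The site writes the customization once and simply reacts to
// whichever cards arrive.
console.log(customizeQuote(100, ["AAA"])); // → 90
console.log(customizeQuote(100, []));      // → 100
```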

If the first case is intended, the white paper doesn’t do a good job explaining how this fits into a larger, open ecosystem, nor does it highlight this unique architectural opportunity. If a user wants Orbitz to help augment their travel planning experience, even when they are at Expedia or Southwest Airlines or Hilton.com, it would be great to do that in a secure, authorized, privacy-sensitive way. But it isn’t quite clear if this is the point of Kynetx’s approach. (Although it is a great opportunity, one that r-buttons and SwitchBook see in the not-so-far future.)

If the second case is the goal, it isn’t clear to me why Kynetx is better than other customization frameworks. With a card selector and cards issued from the right authority, users can already present AAA or AARP credentials to websites, which in turn can integrate that information into their existing CMS or other presentation code (Drupal, PHP, perl, Ruby-on-Rails, etc.). If the value proposition is in speed-to-market for identity-based customization, then the white paper needs to make that case first and foremost. If that’s the goal, then it also suggests a business model, which I talk about in a bit.

It could also be that both of these are part of the approach: allowing both the website developer and third parties to augment the web experience based on strong identity. This is the general idea behind r-buttons and would almost certainly speed deployment. However, the white paper doesn’t address the issues of contention when multiple providers want to augment the same page. Given the open-ended javascript functionality associated with a KIX, this could be a challenge.

Second, isn’t re-aggregation actually about creating a coherent context?

While the Kynetx approach allows users to present a particular relationship at a particular website, that doesn’t seem to solve the stated problem. I don’t see how it actually achieves a cross-web aggregated experience. In fact, it seems that the best aggregated experience should combine many relationship cards at many different services. In the 29-day travel planning scenario, won’t users need to send their AAA and AARP cards to every site they visit? (Or some large subset?) Does the card selector require a ceremony for every website every session? Or just once and then it is a permanent approval, such as confirming once with Expedia that the user is a AAA member? Managing this A x B complexity with A Information Cards and B websites scales poorly if every site has a distinct ceremony–and even worse if each card presented at each site is a distinct ceremony.

This apparent model of KIX-based aggregation seems to miss an opportunity, one that is near to my heart as the core of the Search Map architecture for User-driven Search. It seems to me that for a given web-based task–such as travel planning–what you need is a user-driven personal data store that tracks the user’s progress across the Web. This data store should be 100% transparent, 100% editable, and seamlessly transferable/accessible to authorized vendors under terms controlled by the user. We call our version of this a Search Map, an electronic document that provides the user a concrete way to manage and express their Search intent. It is also a seamless way to manage and express user context.

In the white paper, Phil asserts that “users are freed from managing episode context themselves” as a core benefit. But I don’t think this is actually a benefit. Attempting to achieve that goal could end up being more patronizing than useful, following in the footsteps of “Clippy,” the Microsoft Office help agent, which tried to figure out the context and help users, but failed miserably. “I see you are writing a letter. Would you like assistance?” Ack!

It’s not that users don’t want to manage their context, it’s that they haven’t been given simple, value-producing tools to do so. Consider spreadsheets: it’s not that users want to balance the budget on a computer—doing budgets on a computer isn’t inherently rewarding. It’s that spreadsheets make it easy to get value out of balancing their budget on the computer. Managing KIX across 29 days of travel planning and potentially a hundred+ websites sounds like a chore… unless we have a coherent expression of the context (in something like a Search Map, perhaps) that is easy to use and immediately useful.

Third, over-centralization limits scale.

The Kynetx model, as I understand it, doesn’t scale to the full World Wide Web, because it centralizes two core functions: resolving requests for augmentation and validating injection javascript as safe, private, and secure. Both constrain the growth opportunity for a KRL-based approach to augmenting web services. First, it places the core usage-time server demand on a single service. Given the business model of charging for ruleset evaluations, there is no obvious incentive for Kynetx to release an open source reference implementation to make it easier for alternate KRE service providers. In fact, there is every expectation that Kynetx will be motivated to “win the market share” battle and be the primary KRE service. Which, unfortunately, makes it just another silo, one that will face precisely the same sort of scaling issues that plague Twitter. Second, by making Kynetx the arbiter of “quality,” it places a single entity in control of what constitutes “safe.” Even with good intentions, such centralized moral authority is not just dangerous, it alienates potential innovation. Nobody wants to be forced to seek permission for their new functionality. That was, IMO, the primary reason the World Wide Web dominated AOL so quickly.

The way to reach web scale is to make it absolutely trivial for anyone to play the game. Several open source implementations and open standards enabled anyone who wanted to set up their own web server and try out the World Wide Web as a service provider. And despite that lack of central control, lots of companies made lots of money providing enhanced software to manage those systems. So don’t fall for the illusion that central control is required or desirable for a big financial win.

Signing software is well-understood technology; we can enable signed KIX functionality with a validated identity as a first step towards quality control. Then, by opening up the validation service—and separating it from the distribution/matching of those KIX functions—we can allow software developers and service providers the freedom to innovate and provide their own approaches to what is valid and what isn’t. Some providers will choose to accept ANY signed KIX and simply track reputation. Others will charge developers a fee, but run a quality control check and review. By opening it up, you allow users and developers the freedom to manage KIX quality however they like, without building a presumptive “download at your own risk” ecosystem.

With Kynetx the sole authority on “quality” for KIX functionality, we would have both a technical and a political bottleneck that would retard the adoption of a generalized approach to the disaggregated web experience.

[Btw, it would be great if there were a name for the javascript injected into the browser when a KRL rule fires after evaluating the context and the user identity. This is currently just the “associated KIX functionality”, which is a bit wordy.]

Fourth, what about privacy and data rights management?

On the whole, it isn’t clear to me what data might be sent around in the claims of various Information Cards, but there is no discussion in the white paper about the data rights associated with that information. If I’m telling Hertz that I’m an AARP member, can they use that data to start sending me junk mail or SPAM targeting AARP members? Frankly, this is a hole in the entire user-centric Identity framework. OpenID Attribute Exchange and Information Cards allow users to use a third party service for the management and presentment of minimally sophisticated facets of identity (much better than username & password), but neither inherently enables users to specify a data rights regime for the claims or attributes so provisioned. In effect, we’ve made it easy for users to provide additional data about themselves, but missed the opportunity for users to easily control the use of that data.

Since Kynetx has a goal of seamlessly augmenting users’ web experience, isn’t it incumbent on them to assure that seamlessness both protects users’ right to privacy and prevents unintended over-customization based on supposedly private data? This is another manifestation of the “TiVo thinks I’m gay” problem, where TiVo analyzes viewing behavior and assumes things about the user, with no way for the user to manage their profile. The data rights problem happens because there is nothing to keep TiVo from telling Hertz, GE, or NBC they think the user is gay. The problem in the Kynetx approach happens when service providers start passing presumably private data to third parties—and users lack the means to control that leakage once the service provider knows certain data. This level of data rights control needs to be built in from the start for VRM and user-driven applications.

Business Model

At the core, I think the business model needs rethinking. Although a CPM-based pricing for KRL evaluations seems to align the value proposition directly with costs, it actually presents more risk and less control to potential customers than other models. It also presents greater risk and less stability for Kynetx itself.

What service providers and developers want to see in a technology platform is one with a free entry point (so you can start testing and trying it ASAP, even if a production system would need a for-fee license), a constrained, predictable cost structure, and economies of scale. Charging per evaluation offers none of these.

This model instead creates an artificial scarcity and then charges by the drop. What you want is to create abundance and sell buckets and hoses and pumps. Doc calls this the “because of” effect. Constraining KRL evaluation to support a pay-by-drink business model will artificially constrain adoption. Instead, run to ubiquity and sell the best tools for leveraging the system you’ve helped create.

At the same time, the evaluation of rulesets will have highly variable demand, with great spikes and drops far outside of Kynetx’s control. Tying revenue to that demand volatility means an unpredictable, wild revenue profile, flattening out only with insanely large numbers of users. This works for mega services like Amazon Web Services, but for a start up moving from initial revenue to predictable cash flow, it can be unsettling. In contrast, an IDE sales model or subscription based service with monthly fees bounds developer expenses and stabilizes the revenue curve.

I like the idea of KRL rulesets. Currently, SwitchBook is planning on using JavaScript, RegEx, and XPath for similar evaluations. That approach not only feels ad-hoc, it is. I’d like to see a unified approach that is flexible, cross-platform, and supported by a good development and test environment.
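As a taste of what those ad-hoc evaluations look like, here is a regex-only sketch (no XPath) that maps the current URL to an augmentation context; the patterns and context names are invented:

```javascript
// Ad-hoc context matching of the kind described above: regular
// expressions against the current URL decide which augmentation
// context applies. Patterns and names are illustrative only.
const evaluations = [
  { pattern: /^https?:\/\/([\w-]+\.)?imdb\.com\//,    context: "movie" },
  { pattern: /^https?:\/\/([\w-]+\.)?expedia\.com\//, context: "travel" },
];

function evaluateContext(url) {
  for (const e of evaluations) {
    if (e.pattern.test(url)) return e.context;
  }
  return null; // no augmentation applies
}

console.log(evaluateContext("https://www.imdb.com/title/tt0081505/")); // → movie
console.log(evaluateContext("https://example.org/"));                  // → null
```

Every service hand-rolling lists like this is exactly the ad-hoc quality a shared rules language could replace.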

I think Kynetx could go far by creating an open source platform for KRL rulesets, then providing a robust IDE and testing framework for those who want to manage KRL rules to meet business needs. This is nicely pointed to in the white paper’s mention of A/B testing with different KRLs. This is precisely the kind of sophistication that businesses will need to make the most of KRLs, and it can easily be separated from the core infrastructure that enables KRLs in an open way for everybody. Also, consulting to analyze, customize, and manage KRL rulesets is a huge business opportunity. Doing that well is likely to remain a black art for a long time to come; helping Fortune 1000 companies do it well should be lucrative.

As Dale Olds put it referring to Novell’s Bandit Project: First, enable an open identity metasystem, then sell tools to companies to help them manage it.

Collaborations

I like the value proposition of platform-independent identity-based customization. It fits well with VRM’s r-buttons, MyDex’s Personal Data Store service, and SwitchBook’s Search Maps. I think there’s still some brain work to be done figuring out how we can all support each other and simultaneously build sustainable business models, but I’ve no doubt there’s a way if we all invest in exploring those opportunities. Although I focused on questions and concerns about Kynetx in this post, I have great respect for Phil and hope to work with him as both our companies–and the entire VRM community–build out viable solutions to these kinds of problems.

Posted in Identity, Intention Economy, Personal Data Store, ProjectVRM, User Driven Search, Vendor Relationship Management | 1 Comment

Farewell Google Notebook, Move over SearchWiki, We need a Search Map

Alas, a noble experiment has been slain by the relentless hand of corporate focus. Google has announced that its web-clipping scrapbook, Google Notebook, will no longer be actively developed.

I’ve mentioned Google Notebook briefly in the past, as a tool for helping with user-driven searches (more) — or complex searches as I used to call them. Unfortunately, Google never connected the notebook with Search, despite it being a reasonable solution for keeping track of the kind of discoveries you find when doing advanced searches at a lot of different websites.

Instead, Google suggests you try one of their other products:

If you haven’t used Notebook in the past, we invite you to explore the other Google products that offer Notebook-like functionality. Here are a few examples, all of which are being actively improved and should meet your needs:

  • SearchWiki – We recently launched a feature on Search that will let you re-rank, comment, and personalize your search results. This is useful when you’ve found some results on Google Search that were really perfect for your query. You can read about how to use SearchWiki in this blog post.
  • Google Docs – If you’re trying to jot down some quick notes, or create a document that you can share with others, check out Google Docs.
  • Tasks in Gmail – For a lightweight way to generate a todo list or keep track of things, we recently launched Tasks in Gmail Labs.
  • Google Bookmarks – For a tool that can help you remember web pages that you liked and access them easily, take a look at Google Bookmarks. You can even add labels to your bookmarks to better organize and revisit them.

Sigh.

Google Notebook fit a unique spot in the Google product portfolio, and as you can see in the comments on the announcement, a lot of people will miss it.

It’s too bad we don’t have our Search organizer product ready; I’d love to swoop in and save the day for all those wayward souls stuck without their Google Notebook. The future holds promise… Still, there’s something to be gleaned from Google’s recent developments. As I’ve said before, Search is bigger than query/response. And at least some parts of Google know it. But I wonder how much of the rest of the company gets it.

Take SearchWiki for example. Google rolled this out in November of last year. If you use Google through a Google account, SearchWiki gives you three new icons on every result:

  • promote
  • delete
  • comment

Promoting an item moves it to the top of the result list; Delete, predictably, removes it from the results; and Comment adds a comment to that result. The first two are private–only you see the effect of promotions and deletions–while comments are public.

It is an interesting experiment, if only because it shows how seriously Google takes Wikipedia as competition; the functionality is nearly identical to that of Wikia Search, the search engine from Wikipedia founder Jimmy Wales’s company Wikia. (And many of us have noticed how often Wikipedia entries show up early in Google results.)

It also shows a growing belief that users can help improve Search if they are actively involved. We don’t know what Google is doing with all the user data of who deleted or promoted what, but it will be fun to watch and find out. It will certainly present a different reference frame than PageRank—the core algorithm behind Google—which focuses entirely on the authority of HTML authors and the hyperlinks they put in web pages. If Google shifted the focus of its ranking to the actions of everyday users, it would shift the moral authority behind its results from web page creators to web page visitors–a much more representative population. That’d be pretty cool.

Unfortunately, the problem with SearchWiki is that it pivots on keywords rather than a more durable concept of Search. It turns out that the promotions and deletions apply only to subsequent queries with the exact same keywords.

Seriously.

For example, let’s say you search Google for “travel” and delete Travelocity, Expedia, and CheapTickets, because you’ve already tried those sites and are looking for something new. Then after browsing a bit, you realize you want to see websites for air travel, so you change the query to “air travel”. Surprise! All those results you deleted are back in the list.

This is worse than useless: it makes you feel like all that effort to promote and delete was completely wasted. We know that keywords evolve during advanced Searches. As we explore more of the web, we learn more about what we are looking for and which keywords might work better. And yet, Google’s SearchWiki remains fixated on the keyword query as the central point for tracking user feedback for these kinds of advanced searches—because really, who is going to promote and delete results for one-off, simple searches like finding the phone number for a restaurant? SearchWiki seems like it should be most useful for Searches that take us to dozens and dozens of websites, over days, weeks, even months. And yet, its focus on keywords for tracking promotions, deletions, and comments means SearchWiki is practically useless for anything but the most simplistic queries.
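The flaw is easy to see in code. Here is a minimal sketch (inferred from the behavior described above, not from Google’s actual implementation) of edits keyed by the exact query string, which is apparently how SearchWiki stores them:

```python
# Hypothetical store: exact query string -> set of deleted result URLs.
edits = {}

def delete_result(query, url):
    """Record that the user deleted this result for this exact query."""
    edits.setdefault(query, set()).add(url)

def filter_results(query, results):
    """Apply stored deletions -- but only via an exact-match lookup."""
    deleted = edits.get(query, set())
    return [r for r in results if r not in deleted]
```

Delete Travelocity under “travel”, then query “air travel”: the exact-match lookup finds no edits for the new string, so the deleted site reappears. Keying edits to a durable Search concept, rather than the literal keyword string, would fix this.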

What advanced Searches call for is a tool that helps users track a specific Search across the entire web—one that tracks both explicit and implicit data about the search, lets users organize that data on their own terms, and then lets them share that data with anyone who might be able to help their Search. This combination of keyword queries, clickthroughs, and web captures would be an invaluable representation of their Search Intent. When captured on the user’s behalf, it is a great example of the VRM concept of the user as the point of integration. At SwitchBook, we call the resulting document a Search Map.

Search Maps put the user in charge of all the data related to their Search. They enable true user-driven searches (more), where the individual’s Search intent is effortlessly created, easily managed, and expressed to precisely those who can help the most. A Search Map is co-created with the user, with full transparency and editability. It allows a complete view of a particular search, organized and confirmed by the user. It can be sent to any online service that can explicitly acknowledge the user’s own Terms of Use, specifying exactly how that data can be used. The result is a verified, accurate representation of what the user is looking for right now, ready to be used by any Recommendation Provider capable of respecting the user’s data rights and responding intelligently to the content of that Search Map.

Search Maps are at the core of SwitchBook’s approach to User-driven Search. We’re working with Doc Searls and the VRM community to explore how Search Maps work, how they can meet the needs of users, and how they can appropriately protect users’ privacy and interests when used to manage and express Search intent.

We hope to discuss this more at the Spring 2009 VRM Workshop, tentatively scheduled for March 2nd, 3rd, and 4th, somewhere on the West Coast. If User-driven Search intrigues you, save the date and look for future announcements on the VRM discussion list. We’d love to see you there.

Posted in ProjectVRM, Search, User Driven Search, Vendor Relationship Management | Comments Off on Farewell Google Notebook, Move over SearchWiki, We need a Search Map

Welcoming 2009

Happy New Year!

I am looking forward to 2009. There has been lots of change, both professionally and personally, this year, putting me (and SwitchBook) on a trajectory for amazing things in the year ahead.

One of my New Year’s resolutions is to create–and help to create–the world’s first VRM applications in 2009. There are more than a few in the works, at various stages of development. After a long gestation, I believe several are ripe for coming to market.

It’s fun working with a community of like-minded folks to create something grander and better than one could have alone. That’s a big part of the joy of open source and it is a huge part of what Doc Searls brings to Project VRM. It’s a great crowd. I’m excited about what we’re doing together and I hope you’ll join us in 2009, in some way, large or small.

Posted in Intention Economy, ProjectVRM, Vendor Relationship Management | Comments Off on Welcoming 2009

Notes on User Driven Search

Whether it’s user-generated content like YouTube, user-written and edited knowledgebases like Wikipedia and Freebase, or user-centric identity like OpenID and Information Cards, user-driven thinking is transforming our world. With VRM—Vendor Relationship Management—that revolution reaches the market, creating tools for individuals to get more value out of their relationships with Vendors. The goal is to create a user-driven market, where individuals engage with vendors on their own terms, creating mutually beneficial relationships that generate new value for everyone involved.

So what would it mean to apply user-driven thinking to Search? Traditional search mixes user-driven elements with vendor-centrism. While users can enter any query and be directed to content anywhere on the ’net, we can’t share our search history with the Search Providers of our choice, nor do we have control over how our activities are tracked and utilized. There are few, if any, open standards for the searcher side of the experience and few options for moving beyond traditional query-response Search.

At the VRM Workshop 2008, we fleshed out some ideas, building on the thoughts introduced in my previous post, as well as ideas discussed at VRM 2008 in Munich and IIW2008a in Mountain View. What I love about the conversations at these unconferences is that they are so rich, literally creating value on a moment-to-moment basis. And these were no exception.

Here’s what has emerged so far regarding User-driven Search.

1. User Driven Search is bigger than query/response.

User Driven Search is more than what we type into the query box and the results we get back from Search Engines. It covers an entire set of activities that span the Internet, including searches entered at site-specific Search Providers like Expedia, the USPTO, and Circuit City, and all the web pages we visit in between. It is inherently cross-silo—even non-silo—as it encompasses all of our online efforts around a given Search topic.

A recent Google/Comscore study found that the average Travel searcher takes 29 days from their first query until their first online purchase. These advanced Searches don’t take place all at one Search Provider nor do they usually happen all in one sitting. Users need tools that empower them to manage these advanced, multi-site, multi-session Searches.

2. Users should be able to activate and deactivate Search and tracking easily and at will

With User-driven Search facilitating advanced searches across the entire scope of our online activity, users need to be able to turn it on and off at will. Sometimes we want help and are willing to share to get it. Other times, privacy is preferred. We need to be able to turn off the surveillance and just do our thing. Unfortunately, traditional search and advertising networks don’t let us do that in any reasonable way.

There are ways to disable DoubleClick’s tracking, and we can tell Google to stop personalizing our search results—if we also turn off our Search History. Yet most people have no idea this is possible, and even more aren’t technically comfortable enough to mess with cookies or custom preferences. We shouldn’t have to jump through hoops to disable tracking; when hoops are required, the vast majority of users simply won’t bother, and even those who do will often opt out completely, which means there really isn’t any choice at all. The decision shouldn’t be between using advanced search features and being treated like a digital transient. We should be able to get advanced features just when we want them and turn them off when we don’t. That choice needs to be transparently obvious, easy, and available right in the Search interface.

3. Compartmentalization

When dealing with Searches that span more than a single query, users need to be able to separate them into their natural topical breakdown, in whatever way makes sense. Collecting our entire search history and/or clickstream into a single attention datastore destroys the context that makes the Searches relevant.

Users need a way to collect their Search-related activities into categories that make sense for them. We’d like to keep our summer vacation search activity together, yet separate from our financial planning Search. We’d like to collect our home buying search activity and store that in a different place than the queries and discoveries related to our child’s Search for information about George Washington. User-driven Search must deal with more than query/response and yet not so much that it encompasses our entire attention stream. It must capture the sweet spot of user-defined collections at a scale suitable to each Search individually, as determined by each searcher.

4. Visibility and Editability

For users to drive Search, we need to be able to see and edit all of the information used to provide results. Hidden, unauthorized tracking of our clickstream lets current Search Providers and advertising networks analyze and guesstimate what we are looking for, but it doesn’t provide any way for us to contribute. Not only are they hiding in my virtual closet surveilling me—often without permission—they are missing a great opportunity to simply ask me what I want. By making all Search activity visible, Providers can say “Here’s the data we are using to try to help you.” By making it editable, they add “Can you help us improve it?” User interface challenges aside, there is no reason Search Providers shouldn’t ask for feedback and input. It is guaranteed to improve the quality of their view and ultimately their Search results.

Currently, Google, and its DoubleClick division, track your entire search history and just about anywhere you might go online, yet you have no idea what information they have on you, except for Google’s Search History—and you certainly can’t edit it. So when you track something down on a lark, or someone else uses your machine, irrelevant data gets bundled into your history, only to clog up the machinery that is actually trying to help you. Buy a book on knots for your young cousin and Amazon will be recommending Boy Scout titles for months. This is sometimes referred to as the “Tivo thinks I’m gay” problem. If users have neither visibility nor control over the data used for recommendations, they can’t correct these types of errors. We must have both visibility into the data driving advertising and search results, and we must be able to edit it as well.

5. Selectable disclosure on users’ terms

Having gone to the trouble to coordinate and maintain a collection of data for their Searches, users should be empowered to share that data with any service capable of responding intelligently. Search is a fundamental part of how we navigate the web; it makes no sense to restrict Search activity to any one provider. Just as your Search might take you to dozens of websites, it may also bring you to dozens of Search Providers, from Google and Yahoo! to Amazon and eBay, even to microSearch Providers like Circuit City or Schwab. As users navigate across the web, their Search should go with them, seamlessly disclosed to authorized Search Providers as easily as possible.

Today, Google serves as a locked-in data silo for most people’s search history. There’s no way to send that history to Yahoo! or MSN Live or Amazon or eBay to see what they might be able to do for you. As technologies for personalized search results improve, the value of that search history will continue to increase. We need to be able to send select parts of our search to providers of our choice, and we need to be able to do it trivially. As easily as we go from one website to another, we should be able to send our Search to a new Search Provider.

And yet, if we are to facilitate the easy transfer of this data, we also need to protect users’ rights, even as we expose more secrets to more people.

Schwab, for example, could greatly improve the ease of finding appropriate offerings if they could review the relevant parts of the current Search instead of relying on you entering just the right queries and properly navigating their site architecture. But it is unlikely that users are going to want to give Schwab any information unless there’s an understanding about just exactly how that information will be used (and the ability to select just what information is sent). We generally don’t want companies to start sending us junk mail or calling us with sales offers just because our Search shows that we are in the market for one of their products. But, if we could be assured they would use our Search just to provide better results and perhaps to improve their offerings, we are far more likely to share the part of our Search that could help them help us. We want explicit agreement on data rights before we give them any data, and we want to select what we send so they get just the parts that make sense and none of the personal information we don’t want to share. A User-driven Search solution must not only allow users to send select portions of their Search wherever they want, it must allow them to set the terms for exactly what recipients can do once they get it.
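The mechanics of that selectable disclosure can be sketched in a few lines. Everything here is an assumption for illustration—the Search Map fields, the terms format, and the acceptance check are hypothetical, not a defined protocol: the user picks which parts of their Search Map to send, attaches their own Terms of Use, and nothing is released unless the recipient accepts those terms first.

```python
def disclose(search_map, fields, terms, recipient_accepts_terms):
    """Release only the user-selected fields of a Search Map, wrapped in
    the user's Terms of Use; release nothing if the terms are refused."""
    if not recipient_accepts_terms(terms):
        return None  # no agreement, no data
    return {
        "terms": terms,
        "data": {k: v for k, v in search_map.items() if k in fields},
    }
```

A user shopping for retirement accounts could send Schwab their queries and clickthroughs under a "results only, no marketing contact" term, while their private notes never leave the map; a provider that declines the terms gets nothing at all.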

6. Impulse from the user as a specific statement of Search Intent

Recommendation systems presume that an analysis of your history is the best way to discover what you might want now. The Netflix recommendation challenge and Amazon’s recommendations feature both use this approach. Not only does this place the user in a passive mode, it also gives users no facility to state what they actually want, right now. People have widely varying interests and easily switch between tasks even in the middle of a Search. Our past transactions may paint an interesting picture of who we are, but they rarely describe what we want in any given moment. What we really want from Netflix isn’t the “perfect” movie for someone with our viewing history; we want the movie that’s perfect for the mood or situation we’re in right now.

Search systems, on the other hand, rely on a specific “objet de Search” as a trigger for directing efforts. The objet de Search is a keyword or other statement that explicitly represents the user’s intent in some way. At traditional search engines, the query serves this purpose, with the user essentially asking “what web pages have these words” in the hope that those words might be on the page that has what they are actually looking for. At structured Search Providers, like Expedia or Orbitz, the entries for departure, destination, date, and number of travelers in the combined form data comprise the objet de Search.

For User Driven Searches, we must move beyond keywords and limited structured form fields to allow a more complex, more expressive statement of intent. This statement should include the entire range of Search activity for a given Search, including queries, Search Providers, clickthroughs, captures, and annotations. In short, it should bundle up the entire Search and present it to the Search Provider as an explicit statement of intent. This presentation must be independent of any data silo, unlimited by the offerings of any particular vendor. It should be a proactive statement of “Here’s what I’m looking for: here’s what I’ve found so far and where I’ve been. Got anything that might help?”

Most importantly, Search operates in the foreground, with an explicit impulse from the user. User-driven Search isn’t about background profiling and analysis to try to guess user intent. It requires an explicit means for users to state their intent in ways Search Providers can understand. Instead of predatorial “targeting” of users with particular demographic, psychographic, or behavioral profiles, User-driven Search operates exclusively on that objet de search, as the entire representation of user intent. No more guessing. No more secretive or unauthorized tracking. No more stereotypical clustering based on industrial-era models of consumer behavior. Instead, User-driven Search Providers respond directly to clear, unambiguous representations of confirmed user intent.

Towards an Open Standard

This is the kind of solution we are working on at SwitchBook. At the VRM Workshop 2008, I was excited to learn more about MatchMine from J Trent Adams; they are moving in a similar direction for media-based recommendations. There is currently no service we know of that fully delivers on the promise of User Driven Search, but I’m looking forward to working with Trent and others to develop the open standards and protocols to make it possible.

If you are interested in joining the conversation, send me an email. We’ll be setting up a listserv to talk on a more regular basis. All are welcome.

[Update 5/3/2009: “user-driven Search” to “User Driven Search”]

Posted in ProjectVRM, User Driven Search, Vendor Relationship Management | 7 Comments

Social Graph is Plural

“Social Graph” is not just a singular noun.

“The Social Graph” is a popular misnomer that has plagued the social networking portability conversation ever since Brad Fitzpatrick catalyzed the blogosphere with a vision about the Global Social Graph.

But in fact, “The Social Graph” has little real value outside of computer science elegance. Nobody but Big Brother, the TSA, the CIA, and [insert surveillance agency of your jurisdiction here] actually wants that single, monolithic view of all the relationships in the world. That’s The Social Graph.

In contrast, my social graph is hugely valuable to me. Your social graph matters to you. And it might be interesting to discover where our graph (plural) overlap. But neither of us actually care about The Social Graph.

At the VRM Workshop 2008, here at Harvard’s Berkman Center for Internet and Society, it came out that “social graph” is actually plural.

Like fish.

The Social Graph is a misleading distraction, a handy buzzword we can all slip into our cocktail conversations. But the real value is in the personal, independent social graph we all have. Plural.

If you think about it, that’s the only way you can really make sense of it in our user-centric, user-driven world.

Posted in ProjectVRM, Uncategorized, Vendor Relationship Management | 1 Comment

Towards User Driven Search

It is time to give users more control over Search.

At VRM2008 in Munich and at IIW in Mountain View, I started a conversation about User-Driven Search. The premise: what would it mean for users to truly drive their searches?

User-driven is a new term that came out of the VRM community riffing on the meaning of user-centric development and user-centric identity. User-centric is a nice term, but it could be construed as limiting. For example, user-centric definitely implies that the user is at the center of attention and the focus of the architecture, but it doesn’t necessarily mean the user is in charge of the experience. That’s a key distinction.

Not just tuna salad

Adriana explains this difference between user-centric and user-driven as metaphorically the difference between buying ready-made tuna salad or picking and choosing your own ingredients and making the tuna salad yourself. When I first talked with Doc about user-driven instead of user-centric, Jim Carrey’s The Truman Show immediately sprang to mind: from birth, Truman is the protagonist in a huge reality show revolving around him… only he doesn’t know it. The climax of the show is Truman discovering the rest of the world and confronting his father/producer. Clearly the Truman Show is Truman-centric… but it is most definitely not Truman-driven.

It’s about impetus and authority

For me, user-driven means that the user provides the impetus and is the controlling authority throughout the transaction. Sure, sometimes there is negotiation or collaboration with others… the user isn’t omnipotent, after all. However, the user is in charge of creating his or her own experience. This fits with user-constructed or customized solutions, like the tuna salad recipe. However, it has implications far beyond the limits user-created or user-customizable architectures.

Is the user initiating the experience? Is the user’s moral authority the primary control throughout the system? Is the system transparent to users, enabling them to make their own informed decisions about what will be presented to them and how it is presented? Is it the user who is shaping the input, intermediary results, and final outcome? If so, then it is user-driven. If not, it isn’t.

When it comes to the tuna salad metaphor, this is the equivalent of the tuna salad being made when I ask for it and on my terms. Not before. And although I could choose to make the salad myself–that is definitely user-driven–it could also be made by someone else to my specifications… extra mayo and black pepper, no onions, thank you.

Search as user-driven

Google’s keyword query-response approach to Search is, of course, user-driven to some extent. Nothing happens until the user enters a query, users are free to enter any query, and the system responds with results tailored to exactly what the user queried. The user does shape the experience to a limited degree. And yet, it still provides only a slim façade of user control. There is no way to modulate the algorithm, no way to let Google know which results are good or bad, and no way to refine the search other than keyword guessing games. And, perhaps most importantly, there is no way to manage the search beyond the immediate query. For that, the user is dependent on other techniques: bookmarking, cut & paste, opening multiple windows or tabs, even printing to paper or PDF to keep track of good finds. Evolution in Search History management is starting in the right direction, but the ideas here have been rather uninspiring so far.

User-driven systems create value inherently

The limits on the user-driven aspects of Google are particularly ironic given that it is precisely the element of user control that creates Google’s greatest asset: focused attention. Google’s money-making asset is the collection of user-specified queries, queries that explicitly state words related to the user’s interest and implicitly denote user intent. It is precisely because the individual enters a specific keyword that Google is able to sell targeted ads… at great profit and benefit to advertisers and searchers alike.

The query entered in the Search box gives Google an implicit statement of intent. That intention is the gold Google resells to advertisers. If Google didn’t let users drive that intention, if it looked more like a content site or “Internet portal”, it would have a lot less intention to monetize.

If we can extend that control, if we can make search even more user-driven, if we can enable richer, more explicit, more user-driven expressions of Search intent, I believe we can create even more value for everyone involved: search companies, advertisers, searchers, even non-paying websites showing up in “organic” results.

What does it mean to have User Driven Search?

At SwitchBook, we’ve been doing a lot of thinking about what User Driven Search might mean. I like starting the conversation with a simple example: what would it mean if I could take my search history from one search provider to another? This “dataportability” example is just an initial notion of how Search might become more user driven.

So, what do you think of when you hear (or read) “User Driven Search”?

I’ll be leading a session on this topic at the VRM Workshop next week. I hope you can join us.

This material is based, in part, upon work supported by the National Science Foundation under Grant No. 0740861. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).

[Update 5/3/2009: “user-driven Search” to “User Driven Search”]

Posted in ProjectVRM, User Driven Search, Vendor Relationship Management | 9 Comments

More on Level 4 Platforms

If you’re going to bet your company on a platform, pick the open one.

That was my advice last month at the Caltech/MIT Enterprise Forum on Platforms. It turned into a lively debate (you can check out the audio for the June 8 2008 event), almost inevitably pitting me against Peter Coffee of Salesforce.com, with Marc Canter rallying the forces of small business and openness in his inimitable style: caustic, irreverent, and sourcing no small amount of passion.

One small side effect of the debate was that, at times, it slipped unavoidably into a referendum on Salesforce.com rather than an insightful discussion of the merits of open versus closed systems and how companies can reasonably navigate their choices. Pulling back from that focus on Salesforce left more than a few unanswered issues. Peter Coffee contacted me after the event and asked to keep the conversation going, which sounds like a great idea.

So, here’s a reprise of my presentation on June 14.

Open Platforms and Standards

Level 4 Platforms FTW

This article is about open platforms and what an entrepreneur should think about when choosing a platform for their business.

Two Questions

When choosing a platform, you’ll want to consider two major questions:

  • Will it do what you need?
  • Will it last long enough for you to do it?

The first is a matter of features and functionality, and must be evaluated on a case-by-case basis for each business. Since I don’t know your business, I won’t spend much more time on this question.

The second question is about longevity. Will the platform itself be available as long as your business needs to use it? Will it be stable and robust enough on a moment-to-moment basis? Is there a self-sustaining community of developers and integrators to help your company adjust as business needs change over time? In other words, will the platform continue to provide value for your business in the long term?

What it means to be Open

In talking about open platforms, I want to be clear about what I mean by open. An open platform, for the sake of this article (and as far as I’m concerned, for the purposes of understanding the revolution of open systems), is one that adheres to the principles of N.E.A.:

  • No one owns it
  • Everyone can use it
  • Anyone can improve it

If a single entity or group owns the platform, it isn’t open. If there are barriers preventing users from accessing or developing on the platform, it isn’t open. If you can’t, with reasonable effort, improve the platform itself, it isn’t open.

Every platform is open to some degree. After all, that is the point: platforms open proprietary systems so that third-party developers can innovate and create value beyond the scope, resources, or expertise of the platform creators. But truly open platforms allow anyone to improve the actual platform, not just develop within its constraints. For example, SSL, the secure sockets layer used to secure web access, was developed in the early days of the world wide web when a small group of innovators figured out a way to automatically encrypt information transmitted between a web browser and a website. They didn’t need to negotiate a contract or get permission, they simply implemented their solution and made it available to everyone, changing the platform of the World Wide Web itself.

When you bet on a closed platform, you are betting that the future of your company fits into the future as designed by the platform owner. When betting on an open platform, you bet that someone, somewhere, will be able to evolve the platform to meet your continuing needs.

NEA is a concept from World of Ends: What the Internet Is and How to Stop Mistaking It for Something Else, by Doc Searls and David Weinberger, written October 2003.

Marc Andreessen’s Three Platforms

In September 2007, Marc Andreessen, one of the enablers of the World Wide Web, wrote an article titled The Three Platforms You Meet On the Internet. I’m going to revisit those platforms visually, and introduce a fourth kind of platform that Marc missed. Because the text may be hard to read, here’s a legend of the elements present in all four graphics.

  • Yellow square: User Interface
  • Green circle: Developer Application
  • Blue arrows: Platform Enabler
  • Pink triangle: Platform Service Provider

With that, let’s look at Marc’s three levels of platforms.

Level 1 Access API

Level 1 Platform
Level 1 Platforms allow third party developers to access the platform via a well-defined and documented API, typically using HTTP or SOAP. This allows developers to create their own applications, running anywhere the developer chooses, with access to the data and services running on the platform. Twitter, eBay, PayPal, Flickr, and Del.icio.us all support this type of interaction.
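As a sketch of what Level 1 access looks like in practice, the snippet below builds a request URL in the style of Flickr’s REST API and parses a canned JSON response. The method name and parameters follow Flickr’s documented conventions, but treat the details as illustrative; the provider’s own API documentation is the real contract.

```python
from urllib.parse import urlencode
import json

def build_flickr_search_url(api_key: str, query: str) -> str:
    """Build a request URL for a typical Level 1 REST API call."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,
        "text": query,
        "format": "json",
        "nojsoncallback": 1,
    }
    return "https://api.flickr.com/services/rest/?" + urlencode(params)

# A canned response, shaped like the JSON such an API might return.
sample_response = '{"photos": {"photo": [{"id": "101", "title": "sunset"}]}}'
titles = [p["title"] for p in json.loads(sample_response)["photos"]["photo"]]
print(titles)
```

The developer’s application runs wherever the developer chooses; only the data crosses the wire, over a well-defined API.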

Level 2 Plug-in API

Level 2 Platform Plug-in API

Level 2 Platforms allow third party developers to create their application anywhere, with specific, limited ways to affect the user interface running on the platform. The key shift here is that the platform provider controls the overall user experience, but allows the developer a way to create value within their interface. Photoshop, Facebook, Firefox, Internet Explorer, and Outlook are all applications that act as Level 2 platforms.
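A minimal sketch of that relationship, with hypothetical names: the host application owns the page layout, and third-party plug-ins may only fill the slots the host designates.

```python
# Host application owns the page; plug-ins may only fill designated slots.
class Host:
    def __init__(self):
        self.plugins = {}  # slot name -> render callback

    def register(self, slot: str, render):
        self.plugins[slot] = render

    def render_page(self) -> str:
        # The host controls the overall layout...
        sidebar = self.plugins.get("sidebar", lambda: "")()
        # ...and inserts plug-in output only where it chooses.
        return f"<header>Host UI</header><aside>{sidebar}</aside>"

host = Host()
host.register("sidebar", lambda: "Hello from a third-party plug-in")
print(host.render_page())
```

The plug-in can add value, but the header, the navigation, and the overall experience remain the platform’s alone.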

Level 3 Run-time Environment

Level 3 Platform Run-time Environment

Level 3 Platforms let developers create applications that actually run on the platform. That means the code written by third-parties executes directly in the platform context. The overall user experience is still controlled by the platform, but compared to Level 2 platforms, Level 3 applications typically customize the user interface more extensively. Salesforce, Ning, OpenSocial, Windows, Java, EC2, and Google Apps are all Level 3 platforms.
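The distinction from Level 2 can be sketched like this (all names hypothetical): the third-party code is installed on and executed by the platform itself, with direct access to platform-hosted services.

```python
# The platform executes third-party code directly in its own context,
# exposing its services (here, a simple data store) to that code.
class Platform:
    def __init__(self):
        self.store = {"accounts": 42}   # platform-hosted data
        self.apps = []

    def install(self, app):
        self.apps.append(app)

    def run(self):
        # Each installed app runs *on* the platform,
        # not on the developer's own servers.
        return [app(self.store) for app in self.apps]

def report_app(store):
    # Hypothetical third-party app: reads platform data, renders output.
    return f"{store['accounts']} accounts"

platform = Platform()
platform.install(report_app)
print(platform.run())
```

The developer never stands up their own server; the platform hosts, runs, and constrains the application.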

Common Assumption

All three levels share a common assumption. You might notice it if you consider the three levels together:

common assumption 3 platforms

All three levels assume a single platform service provider: Just one pink triangle. Of course, we can assume the number of developers is unlimited… that’s the point of a platform. However, all of the systems described above are built to enable access to one single platform, a platform owned and controlled by its creator.

When Marc wrote his article, I emailed him and asked “What about the World Wide Web?” It doesn’t fit any of his models, yet it is clearly one of the world’s most widespread, most successful, and most relevant platforms in history. On the World Wide Web, there is no central server, no platform provider. It is a different animal altogether. I call it a Level 4 Platform.

Level 4 Open Protocols

Level 4 Platform Open Protocols

Level 4 platforms allow developers to build applications anywhere–on a website, on your desktop, even on your cell phone–and those applications can talk to any number of platform providers without restriction, using standard open protocols. Many of us have heard of the most successful protocols: SMTP, POP, HTTP, HTML, TCP/IP, RSS, but most users know these by the applications they enable: email, the World Wide Web, the Internet, blogs.
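Email makes the point concretely. The sketch below composes a standard message with Python’s stdlib (the addresses are placeholders); any standards-compliant mail server on any provider can accept it, because the format and the protocol belong to no one.

```python
from email.message import EmailMessage

# Any standards-compliant mail server can accept this message; no single
# platform provider sits between sender and recipient.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"   # a different provider entirely
msg["Subject"] = "Level 4 in action"
msg.set_content("Open protocols let any client talk to any server.")

wire_format = msg.as_string()   # the standard format that travels over SMTP
print(wire_format)
```

Alice’s provider and Bob’s provider need no contract with each other, and neither needs permission from a platform owner. That is what a Level 4 platform looks like.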

Level 4 platforms are truly open, even as each piece of technology is provided by separate for-profit companies. It wasn’t until the World Wide Web opened online services to literally any company with a modicum of technical capability that the world began to enjoy the power of ubiquitous interactive services. As long as CompuServe and AOL and AT&T controlled tightly integrated one-stop-shop subscription services, the number of third-party developers was inherently limited. To launch a new service on AOL, you had to negotiate your own contract, all the while knowing that AOL could already be working on something similar and would never allow you to compete directly.

The web changed all that, allowing literally thousands upon thousands of entrepreneurs to explore their own (perhaps crazy) ideas of how to create value for people. No longer limited or burdened by a central platform provider, the pace of innovation exploded. Certainly, most of those ideas were crazy, but we couldn’t have known which were which without trying them first. The open nature of the web as a platform allowed exactly that.

The whole point of a platform is to encourage third party developers, so that everyone gets more value–value which is inherently beyond the purview of its creators. As long as a platform is closed to some degree, it is limiting the possibility for innovation. Limiting the innovation limits value to users. And limiting value to users limits the success of the platform.

Choose Open When Possible

The only guarantee for longevity is if you have options.

  • Multiple Service Providers allow you to switch if you need to. You never know whether your provider will continue to meet your needs; they may even start to compete with you. The freedom to move to another standards-compliant provider gives you control, just as you have with your web hosting company and email provider.
  • Source Code allows you, or developers working on your behalf, to improve, fix, or simply maintain the platform your business depends on. If an open platform breaks, source code access keeps you from depending on the platform provider for a fix, just as with Apache, Linux, Joomla, or Drupal.
  • Intellectual Property Use Rights assure users and third-party developers that they are free from fear of changing licensing terms and future lawsuits.
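The first point, provider switchability, is really a coding discipline: write your application against the standard interface, never against a vendor. A minimal sketch with hypothetical vendor names:

```python
# Coding to a shared interface, not a vendor, keeps switching cheap.
class Provider:
    """Anything that speaks the standard interface will do."""
    def fetch(self, key: str) -> str:
        raise NotImplementedError

class AcmeHosting(Provider):      # hypothetical vendor A
    def fetch(self, key):
        return f"acme:{key}"

class OpenHost(Provider):         # hypothetical vendor B
    def fetch(self, key):
        return f"open:{key}"

def run_business_logic(provider: Provider) -> str:
    # The application never names a vendor; swap providers freely.
    return provider.fetch("customer-list")

print(run_business_logic(AcmeHosting()))
print(run_business_logic(OpenHost()))
```

If the business logic only ever sees the `Provider` interface, moving from one vendor to another is a configuration change, not a rewrite.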

You can’t always choose an open platform. Sometimes it is worth developing on a proprietary platform because it helps you get to market faster, with more features. But when you can, I recommend choosing an open platform.

Note on VRM

In case folks are wondering why I am talking about Level 4 Platforms, it is because that is precisely what we are working to build over at Project VRM, where I am the Chair of the Standards Committee.

VRM is the conceptual reciprocal of CRM, Customer Relationship Management. Instead of large-scale enterprise software that helps big companies extract more profit from every customer, VRM is about tools for individuals to create more value in their relationships with vendors.

  • Our mission is to enable both buyers and sellers to build mutually beneficial relationships, where everyone benefits from the zero-distance network that is the Internet.
  • Our approach is open standards and open source development.
  • Our goal is a level 4 platform of the kind described above for user-driven online commerce.

If you can make it, you are welcome to join us at the 2008 VRM Workshop next week at Harvard’s Berkman Center for Internet and Society.

Bonus Link: Michael Cote on Platforms As A Service and Lock-in with Force.com
