More on Level 4 Platforms

If you’re going to bet your company on a platform, pick the open one.

That was my advice last month at the Caltech/MIT Enterprise Forum on Platforms. It turned into a lively debate (you can check out the audio from the June 8, 2008 event), almost inevitably pitting me against Peter Coffee of Salesforce.com, with Marc Canter rallying the forces of small business and openness in his inimitable style: caustic, irreverent, and bringing no small amount of passion.

One small side effect of the debate was that, at times, it slipped unavoidably into a referendum on Salesforce.com rather than an insightful discussion of the merits of open versus closed systems and how companies can reasonably navigate their choices. Pulling back from that focus on Salesforce left more than a few unanswered issues. Peter Coffee contacted me after the event and asked to keep the conversation going, which sounds like a great idea.

So, here’s a reprise of my presentation on June 14.

Open Platforms and Standards

Level 4 Platforms FTW

This article is about open platforms and what an entrepreneur should think about when choosing a platform for their business.

Two Questions

When choosing a platform, you’ll want to consider two major questions:

  • Will it do what you need?
  • Will it last long enough for you to do it?

The first is a matter of features and functionality, and must be evaluated on a case-by-case basis for each business. Since I don’t know your business, I won’t spend much more time on this question.

The second question is about longevity. Will the platform itself be available as long as your business needs to use it? Will it be stable and robust enough on a moment-to-moment basis? Is there a self-sustaining community of developers and integrators to help your company adjust as business needs change over time? In other words, will the platform continue to provide value for your business in the long term?

What it means to be Open

In talking about open platforms, I want to be clear what I mean by open. An open platform, for the sake of this article (and as far as I’m concerned, for the purposes of understanding the revolution of open systems), is one that adheres to the principles of N.E.A:

  • No one owns it
  • Everyone can use it
  • Anyone can improve it

If a single entity or group owns the platform, it isn’t open. If there are barriers preventing users from accessing or developing on the platform, it isn’t open. If you can’t, with reasonable effort, improve the platform itself, it isn’t open.

Every platform is open to some degree. After all, that is the point: platforms open proprietary systems so that third-party developers can innovate and create value beyond the scope, resources, or expertise of the platform creators. But truly open platforms allow anyone to improve the actual platform, not just develop within its constraints. For example, SSL, the secure sockets layer used to secure web access, was developed in the early days of the world wide web when a small group of innovators figured out a way to automatically encrypt information transmitted between a web browser and a website. They didn’t need to negotiate a contract or get permission, they simply implemented their solution and made it available to everyone, changing the platform of the World Wide Web itself.
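The kind of improvement SSL represents can be sketched in a few lines. This is a minimal illustration of the layering idea, not the original implementation; `open_secure` is my own name for it, and the host is whatever server you connect to.

```python
import socket
import ssl

# SSL/TLS layers transparently over an ordinary TCP socket: the
# application still reads and writes bytes, but the transport now
# encrypts them in both directions.
def open_secure(host, port=443):
    ctx = ssl.create_default_context()          # verifies the server's cert
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)
```

The application-level protocol (HTTP) never changed; encryption was slipped in underneath it. That is exactly the kind of platform-level improvement a truly open platform permits without anyone's permission.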

When you bet on a closed platform, you are betting that the future of your company fits into the future as designed by the platform owner. When betting on an open platform, you bet that someone, somewhere, will be able to evolve the platform to meet your continuing needs.

N.E.A. is a concept from World of Ends: What the Internet Is and How to Stop Mistaking It for Something Else by Doc Searls and David Weinberger, written October 2003.

Marc Andreessen’s Three Platforms

In September 2007, Marc Andreessen, one of the enablers of the World Wide Web, wrote an article titled The Three Platforms You Meet On the Internet. I’m going to revisit those platforms visually and introduce a fourth kind of platform that Marc missed. Because the text in the graphics may be hard to read, here’s a legend of the elements present in all four.

  • User Interface (yellow square)
  • Developer Application (green circle)
  • Platform Enabler (blue arrows)
  • Platform Service Provider (pink triangle)

With that, let’s look at Marc’s three levels of platforms.

Level 1 Access API

Level 1 Platform
Level 1 Platforms allow third party developers to access the platform via a well-defined and documented API, typically using HTTP or SOAP. This allows developers to create their own applications, running anywhere the developer chooses, with access to the data and services running on the platform. Twitter, eBay, PayPal, Flickr, and Del.icio.us all support this type of interaction.
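In code, a Level 1 interaction is just an HTTP call to the platform's documented API. A hedged sketch, modeled loosely on the 2008-era REST style of services like Flickr and Twitter; the endpoint and parameters here are hypothetical, not any real service's API.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical Level 1 endpoint -- a placeholder, not a real service.
API = "https://api.example.com/v1/photos"

def build_request(user, count=10):
    # The developer's application runs wherever the developer chooses;
    # only this URL touches the platform.
    query = urllib.parse.urlencode({"user": user, "count": count})
    return f"{API}?{query}"

def fetch_photos(user):
    # One HTTP round-trip against the platform's documented API.
    with urllib.request.urlopen(build_request(user)) as resp:
        return json.loads(resp.read())
```

Everything except that one URL is under the developer's control, which is what makes Level 1 the loosest form of coupling to a platform.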

Level 2 Plug-in API

Level 2 Platform Plug-in API

Level 2 Platforms allow third party developers to create their application anywhere, with specific, limited ways to affect the user interface running on the platform. The key shift here is that the platform provider controls the overall user experience, but allows the developer a way to create value within their interface. Photoshop, Facebook, Firefox, Internet Explorer, and Outlook are all applications that act as Level 2 platforms.
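The shape of a Level 2 relationship can be shown with a toy host application. This is purely illustrative (no real plug-in API looks exactly like this): the host owns the UI and exposes one narrow hook through which third parties add value.

```python
# A toy host standing in for a Level 2 platform (names illustrative).
class Host:
    def __init__(self):
        self._menu_items = []

    def register_menu_item(self, label, action):
        # The only surface the platform exposes to third parties.
        self._menu_items.append((label, action))

    def render_menu(self):
        # The platform, not the plug-in, decides how the UI looks.
        return [label for label, _ in self._menu_items]

host = Host()
# A third-party plug-in adds value strictly through the hook:
host.register_menu_item("Export to PDF", lambda doc: doc)
```

The plug-in never draws its own window; it only populates the slots the platform chose to offer.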

Level 3 Run-time Environment

Level 3 Platform Run-time Environment

Level 3 Platforms let developers create applications that actually run on the platform. That means the code written by third-parties executes directly in the platform context. The overall user experience is still controlled by the platform, but compared to Level 2 platforms, Level 3 applications typically customize the user interface more extensively. Salesforce, Ning, OpenSocial, Windows, Java, EC2, and Google Apps are all Level 3 platforms.
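The difference from Level 2 is where the third-party code executes. A toy runtime makes the point; this is purely illustrative, not any real hosting API: developer-supplied code is deployed to, and runs inside, the platform itself.

```python
# A toy Level 3 runtime (illustrative names only).
class Runtime:
    def __init__(self):
        self.apps = {}

    def deploy(self, name, handler):
        # The developer hands their code to the platform.
        self.apps[name] = handler

    def handle(self, name, request):
        # Third-party code executes in the platform's own context,
        # inside the platform's request loop.
        return self.apps[name](request)

runtime = Runtime()
runtime.deploy("hello", lambda req: f"Hello, {req['user']}!")
```

The developer no longer operates their own servers at all; the platform provides the execution environment, which is both the convenience and the lock-in of Level 3.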

Common Assumption

All three levels share a common assumption. You might notice it if you consider the three levels together:

common assumption 3 platforms

All three levels assume a single platform service provider: Just one pink triangle. Of course, we can assume the number of developers is unlimited… that’s the point of a platform. However, all of the systems described above are built to enable access to one single platform, a platform owned and controlled by its creator.

When Marc wrote his article, I emailed him and asked “What about the World Wide Web?” It doesn’t fit any of his models, yet it is clearly one of the most widespread, most successful, and most relevant platforms in history. On the World Wide Web, there is no central server, no platform provider. It is a different animal altogether. I call it a Level 4 Platform.

Level 4 Open Protocols

Level 4 Platform Open Protocols

Level 4 platforms allow developers to build applications anywhere–on a website, on your desktop, even on your cell phone–and those applications can talk to any number of platform providers without restriction, using standard open protocols. Many of us have heard of the most successful protocols: SMTP, POP, HTTP, HTML, TCP/IP, RSS, but most users know these by the applications they enable: email, the World Wide Web, the Internet, blogs.
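Because Level 4 protocols are open standards, the same client code works against any provider. A sketch using Python's standard-library SMTP support; the hostname is a placeholder for whichever provider you happen to choose.

```python
import smtplib
from email.message import EmailMessage

def make_message(sender, recipient, subject, body):
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
    msg.set_content(body)
    return msg

def send(msg, host="smtp.example.com"):
    # Any server speaking the open SMTP protocol will do --
    # switching providers means changing only the hostname.
    with smtplib.SMTP(host, 587) as server:
        server.starttls()
        server.send_message(msg)
```

No contract with the provider, no plug-in API, no hosted runtime: just a shared protocol, which is why anyone can stand up a new provider and every existing client keeps working.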

Level 4 platforms are truly open, even as each piece of technology is provided by separate for-profit companies. It wasn’t until the World Wide Web opened online services to literally any company with a modicum of technical capability that the world began to enjoy the power of ubiquitous interactive services. As long as CompuServe and AOL and AT&T controlled tightly integrated one-stop-shop subscription services, the number of third-party developers was inherently limited. To launch a new service on AOL, you had to negotiate your own contract, all the while knowing that AOL could already be working on something similar and would never allow you to compete directly.

The web changed all that, allowing literally thousands upon thousands of entrepreneurs to explore their own (perhaps crazy) ideas of how to create value for people. No longer limited or burdened by a central platform provider, the pace of innovation exploded. Certainly, most of those ideas were crazy, but we couldn’t have known which were which without trying them first. The open nature of the web as a platform allowed exactly that.

The whole point of a platform is to encourage third party developers, so that everyone gets more value–value which is inherently beyond the purview of its creators. As long as a platform is closed to some degree, it is limiting the possibility for innovation. Limiting the innovation limits value to users. And limiting value to users limits the success of the platform.

Choose Open When Possible

The only guarantee of longevity is having options.

  • Multiple Service Providers allow you to switch if you need to. You never know if your provider will continue to meet your needs; they may even start to compete with you. The freedom to move to another standards-compliant provider gives you control, just as you have with your web hosting company and email provider.
  • Source Code allows you, or developers working on your behalf, to improve, fix, or simply maintain the platform your business depends on. If an open platform breaks, source code access keeps you from depending on the platform provider for a fix, just as with Apache, Linux, Joomla, or Drupal.
  • Intellectual Property Use Rights assure users and third-party developers that they are free from fear of changing licensing terms and future lawsuits.

You can’t always choose an open platform. Sometimes it is worth developing on a proprietary platform because it helps you get to market faster, with more features. Depending on your needs, you might be better off building on a closed platform. But I recommend choosing an open platform whenever possible.

Note on VRM

In case folks are wondering why I am talking about Level 4 Platforms, it is because that is precisely what we are working to build over at Project VRM, where I am the Chair of the Standards Committee.

VRM is the conceptual reciprocal of CRM, Customer Relationship Management. Instead of large-scale enterprise software that helps big companies extract more profit from every customer, VRM is about tools for individuals to create more value in their relationships with vendors.

  • Our mission is to enable both buyers and sellers to build mutually beneficial relationships, where everyone benefits from the zero-distance network that is the Internet.
  • Our approach is open standards and open source development.
  • Our goal is a level 4 platform of the kind described above for user-driven online commerce.

If you can make it, you are welcome to join us at the 2008 VRM Workshop next week at Harvard’s Berkman Center for Internet and Society.

Bonus Link: Michael Cote on Platforms As A Service and Lock-in with Force.com

Posted in ProjectVRM, Vendor Relationship Management | 3 Comments

Answers to a few questions about VRM

Pignerol Antoine recently asked some questions about VRM and I thought I’d answer them publicly.

Is VRM really different from social CRM ?

Yes, although exactly how depends on how you define social CRM. Based on my understanding, I would suggest that VRM is first and foremost about providing value for the user with any vendor, as opposed to using social networking tools with a particular vendor. VRM is vendor-agnostic and silo-averse. The goal is to catalyze the development of tools for individuals through protocols and standards that let them work with any vendor seamlessly, without loss of functionality or services.

Does VRM work with a CRM ?

Sure. A CRM is a company-centric system. Every company should pay attention to its customers, and CRM is currently the best-in-class thinking on the enterprise side for how to do that. Different VRM services act on behalf of the individual, yet still require connecting to enterprise systems. For things to be seamless, VRM services should marry into CRM services for fulfillment.

Can VRM be implemented in all kinds of business?

Yes. Any business can support VRM services and be compliant with general VRM principles. Ultimately, it will be as easy for a small company to be VRM compliant as it is for a small company to run a blog or a wiki today. That takes some level of technical sophistication, but it is within the grasp of any company willing to invest a small amount of effort using freely available open source tools. Eventually, VRM will be available in the same way.

What’s needed for VRM to work ?

We need to work through electronic marketplace issues from customers’ perspectives, with attention to the full power of relationships, finding consistent ways to create new value through the network. For the Standards Committee, that means a public conversation starting with users and requirements. Once that is vetted in an open source manner, we can explore particular implementations. We believe that with a well defined, high quality requirements specification, service providers will emerge to deliver those services.

As customers are looking for lower prices, don’t you think that Personal RFPs are gonna cost more for customers (because they are personalized offers) ?

Two things here. First, I don’t think customers are just looking for lower prices. They are looking for better value.

https://blog.joeandrieu.com/2008/03/07/pricing-markets-and-demand-vrm-style/

One of my favorite examples of this is Shopatron’s business, where everything sells at 100% of the manufacturer’s suggested retail price, with no discounts and no rebates:

https://blog.joeandrieu.com/2007/01/19/shopatron-redefines-vendor-relationships/

Second, the personal RFP is designed to eliminate transaction costs in the marketplace. Currently, product and vendor discovery is slow, expensive, and uncertain. That means buyers waste time and vendors waste advertising and lead generation dollars seeking the right match between needs and solutions. Any time transaction costs are reduced, you have an opportunity for better prices.

At the same time, Vendors will be discovering ways to provide more value to customers and the net result could easily be that customers will end up paying more for enhanced services or products. Ideally, this will mean that commodity products continue to drop in price while value-added customizations are welcomed by buyers and voluntarily paid for at a premium over the commodities.

What do you expect from VRM?

I expect it will take longer and be more work than any of us would prefer. However, I think that the concepts behind VRM, and hopefully our work developing standards and catalyzing working solutions, will enable a fundamental shift in the marketplace. As Doc Searls has said more than once, the industrial revolution is over: industry won. There is an incredibly powerful legacy of using computers and networks to help companies make more money (and create more value as they do so). Unfortunately, companies tend to think for themselves first, often to the detriment of overall economic benefit.

I see a world where every individual is engaged and empowered to get the most out of their relationships with vendors–vendors of all sizes. In that world, not only are individuals and vendors each getting and creating more value directly, the entire economy is operating at a higher efficiency as less money is spent on wasted advertising and product development and more is spent on fulfilling verified demand. This would supercharge Adam Smith’s invisible hand and provide a significant increase in aggregate global wealth for everyone. It takes the benefits of the zero-distance network and extends them efficiently into the domain of user-driven commerce.

Posted in ProjectVRM, Vendor Relationship Management | 2 Comments

R-cards “ah-hah!” at IIW

At last month’s Internet Identity Workshop and the subsequent DataSharing Summit, Markus S and Drummond Reed unpacked several ideas about r-cards, which, to a certain extent, are an evolution of the Information Card at the heart of CardSpace.

Going into IIW, I understood r-cards simply as a hybrid of InfoCard’s managed and personal card models. Managed cards are issued by another party, which controls all of the data associated with and transmitted by the card; personal cards are self-asserted, allowing individuals to serve as their own card provider and control all of the associated data. R-cards, then, allow a managing party to co-control a card with the user, with some data controlled by the managing party and some controlled by the user.

However, during the IIW demo of the r-card, I had an epiphany about how powerful the r-card becomes once we allow the user to manage personal claims through multiple, dereferenceable links.

One issue that came up during the demo was that if the “personal” side of the r-card consists of manually entered claims, such as contact information, then the user is creating a management nightmare: duplicate claims would need to be entered and maintained across many different r-cards. The more r-cards, the worse the problem.

The “obvious” solution discussed at the session was to allow the user to specify claims that are served by other IdPs, such as a Personal Address Manager. For completeness’ sake, note that such claims could be mashed up from multiple IdPs, not just a single one. Thus, any number of claims from a particular IdP could act as a sort of sub-card, combining with other sub-cards at presentation time.
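The sub-card idea can be sketched as a data-structure exercise. This is a hypothetical illustration of claims from several IdPs combining into one card at presentation time; the IdP names, claim keys, and the `assemble_rcard` function are all mine, not any proposed format.

```python
# Combine claims served by several IdPs into one presentation-time card.
def assemble_rcard(sub_cards):
    card = {}
    for idp, claims in sub_cards.items():
        for key, value in claims.items():
            # Each claim remembers which IdP asserted it, so relying
            # parties can judge its provenance.
            card[key] = {"value": value, "source": idp}
    return card

rcard = assemble_rcard({
    "personal-address-manager": {"email": "me@example.com"},
    "passport-idp": {"passport": "X1234567"},
})
```

The user edits each claim once, at its authoritative IdP, and every r-card that dereferences it stays current automatically.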

The net result is the realization that perhaps the most interesting thing about r-cards is their use as dynamic cards, aggregate cards, or mashup identity cards.

That’s pretty cool in itself.

However, it also struck me that this potentially fixes usability problems around authorizing a number of vendors (M) to access identity claims at a variety of identity providers (N). That potentially requires N points of authorization and authentication for each of M vendors (or relying parties). Sub-cards (or r-cards) may combine that task at the point of presentation for much greater user understanding and simplicity.

Since the Card Selector is itself a trusted point of authorization, we should be able to use the “mashup” gesture as explicit authorization for relying parties to access the claims specified in the sub-cards. That is, the UI of creating the r-card/mashup card/dynamic card also explicitly approves access to specific claims from multiple IdPs, since after all, the selector is where you select which claims to present to relying parties.

This adjustment to the Information Card ceremony greatly simplifies the user experience, while retaining all the power of distributed claims at appropriate IdPs. For example, it would allow me to specify my Passport # to United Airlines, as a verifiable claim served by the US Secretary of State IdP (which should be trusted by UA), streamlining any international travel I might do, while retaining my contact info at my Personal Address Manager. All with the same authorization ceremony I use with any information card relying party.

This realization was, for me, the most surprising insight into the power of the r-card. In fact, I’m wondering if the name “r-card” captures it best.

Posted in Identity, Personal Data Store | 2 Comments

Bandit, Higgins, Open Source, Profit and Novell

At EIC2008 last month, Dale Olds of Novell’s Bandit Project gave me a few minutes and some insight into how Novell (and others) are mixing open source with proprietary software to architect a whole new Identity paradigm online.

I’ve been following the user-centric Identity movement ever since Doc Searls talked me into attending IIW2006b, an unconference. EIC is a classic Enterprise technology sales conference on identity management. The two events couldn’t be more different, even though both have excellent content and are focused on Identity. EIC was all about big business selling to each other, while IIW is all about engineers making user-centric Identity work.

Identity? A lot of you are familiar with the term, but for those who might not know what I mean, I’m talking about how people authenticate themselves for access to online systems. Traditionally based on usernames and passwords, online Identity presents a host of problems, not the least of which is that an individual may have dozens or even hundreds of different usernames and passwords, one for each new web service or corporate LAN accessed. This proliferation is itself a security risk, as people reuse passwords despite the best efforts of zealous IT gurus everywhere. It is also an information management nightmare: how are we supposed to remember all of that? That burden reinforces password reuse and, unfortunately, typically insecure password resets. Today’s identity management software provides solutions to this problem, largely through federation and user-centric Identity.

In short, federation is how corporate IT systems rely on other corporate systems–provided by other departments or even other companies–to authenticate your identity and share information about you. It can be used for authentication, or as in the case of Facebook’s Beacon, it can be used to pass on highly sensitive personal data. (Blockbuster is now in a lawsuit over this, which I expect they’ll lose.) As Doc Searls likes to put it, federation is about large companies having safe sex with each other, using your data. You can see how this starts to relate to your offline identity, as bits and pieces of your data trail could be used to build a profile and steal your identity or use it for other nefarious purposes, like spamming you with “targeted” ads.

In contrast, user-centric Identity is an architecture where individuals present the credentials of their choice for authentication at online services. Instead of the vendor-to-vendor systems integration and trust contracts of federation, “Relying Parties” authenticate a visitor by relying on the Identity services of an “Identity Provider” of the visitor’s choice. Relying parties may not accept all ID Providers, but in general, the choice of who authenticates your identity lies with you. Key technologies in this space are OpenID, InfoCards, and a variety of standards from the Liberty Alliance. These are the core of the conversation at IIW.
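The user-centric pattern can be shown in miniature. This is a simplified, hypothetical sketch loosely inspired by OpenID-style redirects; the parameter names and URLs are illustrative, not the actual OpenID wire protocol.

```python
import urllib.parse

def auth_redirect(idp_endpoint, identity, return_to):
    # The relying party sends the visitor to the identity provider
    # of the visitor's choice, then waits for the IdP's answer.
    params = urllib.parse.urlencode({
        "identity": identity,     # the identity being asserted
        "return_to": return_to,   # where the IdP sends the user back
    })
    return f"{idp_endpoint}?{params}"

url = auth_redirect(
    "https://idp.example.com/auth",
    "https://alice.example.com/",
    "https://rp.example.com/done",
)
```

The key inversion is in the first argument: the IdP endpoint comes from the user, not from a contract between companies, which is exactly what distinguishes this from federation.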

Of course, you can do federation with a user-centric Identity architecture; that’s not the point. The point is that in the user-centric world, the user is in charge of their identity. Or, as Doc Searls advocates, in the user-driven world, the user is driving the transaction.

So, when I sat down with Dale at EIC, I had already heard about Bandit—I even have the t-shirt—yet, I was wondering how Bandit fit into the whole mash up of technology behind user-centric Identity. I know that OpenID is a URL-based approach for identity that has generated significant traction because it is easy for relying parties to implement and for tech savvy users to use. I also know that Higgins and CardSpace both implement Information Cards, or InfoCards: one an open source, extendable client and server implementation, the other a polished proprietary client app from Microsoft. I even had some inkling of the various protocols created and under development by the Liberty Alliance, who started life as a federation standards group and has embraced user-centric approaches as it builds out its services stack. And I even knew about Sxipper and Vidoop, the former a client application that helps users manage their identity presentation online, whether the online services are user-centric or not, and the latter an Identity Provider with a unique method for verifying that you are you.

But what I didn’t quite get was how Bandit fit into it all. I know they are supporters of Higgins and Information Cards, but is Bandit a client app like Sxipper? A card selector like CardSpace? Is it a server implementation that could be used by companies like Vidoop? Is it open source and if so, how does it fit into Novell’s business model?

Dale was able to make it fairly clear: Bandit is an open source project supported by Novell. Bandit provided the card selector for the Higgins project and participates in OSIS (Open Source Identity Systems), a working group of the Identity Commons comprised of different Identity technology providers working toward interoperability. They also support the soon-to-be-announced InfoCard Foundation, although there have been no official announcements by anyone yet about that particular project. Novell, as a separate entity, is putting engineering and organizational resources into these open source and interoperability efforts because it sees a bright future in selling Identity management tools once we get the Internet Identity-enabled.

That’s when the light went on. Bandit is about helping create the entire infrastructure of Identity, the Identity Meta-System, as Kim Cameron calls it. Once that infrastructure is in place, Novell will be able to sell companies a number of tools that make it easy to leverage that infrastructure. As Dale put it, the open source part of this is about enabling Identity: assuring that the basic plumbing and services are present and understood. The subsequent business model is helping companies manage identity once the essential plumbing is in place.

Think of it like HTTP and HTML enabling the World Wide Web, while products like ColdFusion, IIS, and Drupal help companies manage web services. The web wouldn’t exist without the open source gift from CERN some fifteen years ago, and without that underlying plumbing of protocols and formats, software providers like Netscape, Microsoft, IBM, Sun, and Novell wouldn’t have made a dollar selling web technologies to anyone. Instead, with a web-enabled world, literally thousands of companies competed to provide web software, making billions of dollars in the process.

Novell sees a similar dynamic with Identity. Clearly, so do Microsoft, Sun, and hundreds of other companies.

So do I. And it looks pretty damn cool from here.

p.s. My apologies for the lack of links and images. I realized I’d better post this before the real-time world overtakes me. I hope to see a bunch of you at IIW.

p.s. bonus link: Doc Searls on vendors bankrolling open source.

Posted in Identity | 1 Comment

Running the Numbers

Bart Stevens recently suggested a breakdown on the potential economic impact of VRM, based largely on a post by Steve Rubel arguing that $1B is wasted in online advertising today.

First, I anticipate the Personal Data Store becoming a design pattern that underlies other VRM services, rather than a service by itself. In fact, a Personal Data Store isn’t really a Personal Data Store unless it enables VRM services explicitly… Personal Data Stores aren’t just online storage like Amazon’s S3.

Second, I think the $1 billion number is far too small. Steve is only estimating the CPM costs for display ads that are literally missed by users during eye-tracking studies. That’s an intriguing number because those ads truly are wasted… there isn’t even any brand exposure because the ads are never seen. It’s like paying for ads in a magazine that no real reader ever opens.

On the other hand, there are still plenty of ads that are seen by the wrong people and CPC ads that are clicked on by the wrong people. Note that for the “right” people, those ads arguably generate useful brand exposure, so they aren’t wasted.

When advertising starts with the advertiser, it inherently wastes money, as it inevitably buys placement in ineffective or misaligned media. By now it is an old chestnut that advertisers waste half their budget; they just don’t know which half. Sometimes advertising is an investment in exploring potential markets… the goal is the data gained in the test marketing, which isn’t entirely a waste. Other times advertising is educational outreach, where the goal isn’t so much to trigger a sale as to introduce people to new products and services. Sometimes this is called demand generation. That still leaves a vast amount of waste: buying media (offline or online) that just doesn’t perform or create any value. The potential savings in these areas is not only missing from Rubel’s analysis, I’d wager it is far more than $1 billion.

The huge potential of VRM is to turn these models inside-out, by providing a scalable pipeline directly into the product development and sales divisions of capable firms. Instead of Vendors guessing what people want, VRM services can cost-effectively tell Vendors what people truly do want. If the product is available, the sales team can enable purchase and delivery. If the product doesn’t exist, the Vendor can create it if demand is sufficient.

This new paradigm is exactly the shift from Attention to Intention that Doc and I have been advocating. The Attention game is the world of traditional advertising, where the industrial manufacturer competes in mass media to get the attention of the right consumers in order to generate demand for their products and services. Given that attention, they seduce, cajole, and entertain in hopes of winning new sales.

The Intention game, on the other hand, starts with explicit requests from the user to fulfill actual demand. Sometimes that intention will be nascent, needing further exploration and discovery. But eventually, for the segment of the population that finds something they want or need, that intention shifts from educating oneself about available options to seeking specific satisfaction, that is, buying a solution. Because intention starts with the user’s commitment to take the relationship to the next level, it immediately takes a vast amount of guesswork and wasted advertising out of the equation.

This guesswork and wasted advertising is probably closer to $100 billion per year, but that’s just my gut feeling. And that number only addresses the loss side of the equation, that is, the money we save by not wasting product development and advertising dollars. It ignores the value of products and services that today languish as innumerable missed opportunities, missed because companies have no way to efficiently gauge true market demand. There are undoubtedly services and products that exist, or could be profitably offered today, which fail to reach customers because we don’t have a suitable mechanism for connecting the right customers with the right companies. The potential to close the gap between possible sales and unmet demand is simply too large to estimate.

The Cost-Per-Action/Pay-for-Performance business model of Affiliate Marketing is likely to continue to transform the ad industry, significantly reducing billions in unnecessary expenses, including the $1B wasted on unseen display ads in Rubel’s analysis.

It won’t be until we transform explicit intent into new offerings and new sales that we unleash the vast potential that is VRM.

Posted in Personal Data Store, ProjectVRM, Vendor Relationship Management | 2 Comments

Zen and Technology

I’m not sure how I found it, but today I discovered a bit of a gem in the blogosphere: ValleyZen.

For a quick taste, check out the interview with Drue Kataoka on View from the Bay. It is amazing how a few simple words can have such a profound visceral impact.

Drue’s suggestions resonate with my user-centric world-view:

  1. SIMPLIFY
    Focus on what’s important. Eliminate what’s not.
  2. IMMEDIACY
    React to the moment — not to your fears and concerns.
  3. BREAK YOUR RHYTHM
    Surprise yourself and those around you.
  4. BE CALM
    Find Tranquility in Action.
  5. GREEN FROM THE INSIDE OUT
    Begin with your own personal ecosystem.

Take time for yourself, reconnect and put things in perspective, and engage the world on your own terms, in the moment, sustainably.

When redefining technology in personal terms, Drue’s take on Zen packs a powerful punch.

Posted in Uncategorized

On VRM and Standards

Phil Whitehouse recently served up some nuggets to stimulate conversations at next week’s VRM2008 in Munich.

I’ve been thinking a lot about VRM lately. Not so much about what it means, but rather the mechanism of how it can work.

If you’re new to VRM, it can be summarised like this: it’s the reciprocal of CRM. Rather than being bombarded with advertising, much of which is irrelevant, and the rest irritating, wouldn’t it be nice if you could just tell vendors what you want, on your terms? Without even going to the trouble of looking for them? If they’re willing and able to respond, they do so. Everyone else goes on their merry way. It’s all about sharing the data you want with the people you want.

Some examples from Doc Searls (Cluetrain Manifesto dude), who heads up the VRM project, include:
I want to:

– Buy a power convertor near St.Paul’s in the next three hours, at any price
– Buy a stroller for twins near Highway 70 in Kansas today for under $300
– Buy an Apple laptop that has a 500GB HDD and weighs under five pounds, as soon as it comes out
– Buy a double decaf cappuccino at the next exit on this highway
(You can see more examples presented by Doc on this photo)

There are a few big problems that need solving. Filtering is one (both on the outbound request and the way back in), targeting is another (how do you choose which vendor to share your data with?), organisation is a third (by what mechanism do customers agree to share their data, and in what form, while retaining control over it?).

I don’t know much about establishing standards. My erstwhile colleague Paul Downey, on the other hand, represents BT at the W3C and thus knows a bit about standards. He sez this will be a hard problem to crack, and he’s probably right. Big question: to what extent would we, the customers, allow brokers to help create this standard?

My view is this problem needs to be overcome before VRM can move forward, regardless of whether brokers are involved.

Good stuff. Since I chair the Standards Committee for Project VRM, it’s probably obvious that I think we need to create some standards.

At the end of the day, interoperability requires either standards or one-to-one interoperability engineering. The user-centric Identity movement has grown like crazy in the last few years largely because a hybrid of these approaches has been used, as OpenID, Higgins, CardSpace, and Liberty (among others) took their 1.0 products and figured out how to make them work together, leveraging standards like WS-* and SAML as they did so. The nice thing about standards is that once they are in place, they reduce an O(n^2) problem, where every software vendor has to coordinate with every other vendor, to an O(n) problem, where each software vendor coordinates with the standard.
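That O(n^2)-to-O(n) reduction is just link counting. A toy sketch (the function names are mine, purely illustrative):

```python
# Integration effort with and without a shared standard (illustrative).
# Without a standard, every vendor must build a one-to-one integration
# with every other vendor: n*(n-1)/2 links. With a standard, each
# vendor implements the standard once: n links.

def pairwise_integrations(n: int) -> int:
    """Number of one-to-one integrations among n vendors: O(n^2)."""
    return n * (n - 1) // 2

def standard_integrations(n: int) -> int:
    """Number of integrations when everyone targets one standard: O(n)."""
    return n

if __name__ == "__main__":
    for n in (5, 20, 100):
        print(f"{n} vendors: {pairwise_integrations(n)} pairwise links "
              f"vs {standard_integrations(n)} standard implementations")
```

At 100 vendors that's 4,950 pairwise integrations against 100 standard implementations, which is why interoperability efforts gravitate toward standards once the ecosystem grows.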

The problem with standards is they are slow to develop. But once you have some apps and some standards at the 1.0 level, the efforts towards interoperability can get serious traction, like they did with the user-centric Identity movement.

I’m hoping we can engender a similar development cycle with VRM. We need both working applications and formal standards and specifications, especially with regard to data formats and communications protocols.

I’ll diplomatically disagree and agree with Bart (read his comment on the original post) regarding leaving standards to others. On the one hand, we should leverage existing work as much as possible. For example, I see Higgins and XRI playing a major enabling role for us. On the other hand, while the Dataportability and Higgins guys are doing great work, they are not necessarily solving the problems VRM has set out to tackle, namely reinventing the marketplace on behalf of individuals while creating more value for vendors.

As an example, the Dataportability movement has framed the problem in terms of Data and Portability. This brings to mind exporting and importing “my” data from vendor to vendor. That’s a start toward liberating users from vendor silos. However, I think the real win is in user-centric services, where the location of the “data” is essentially irrelevant–even as it is hosted under the control of the user–and all user-authorized vendors can access the data through approved services.

That’s the idea behind the Personal Address Manager, which we’ll be discussing in Munich. Your actual postal address isn’t that much of a problem from a dataportability perspective. It’s just a few lines to enter and no real need to “export” it from some vendor’s silo. However, when you change your address, it would be nice for the new address to automatically propagate to those authorized to get it. Or, for more sophisticated vendors, to have the address provided on demand, so that they never send postal mail to the wrong address. Such a service would be automatically discoverable by vendors using the Identity layer to authenticate and authorize exactly who gets it.
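As a sketch of how such a service might behave, here is some purely hypothetical code: the class and method names are invented for illustration, and a real Personal Address Manager would sit behind the Identity layer’s authentication and discovery rather than a simple in-memory set.

```python
# Hypothetical Personal Address Manager: the user keeps one canonical
# postal address and grants or revokes vendor access; vendors pull the
# current address on demand instead of keeping their own stale copies.

class PersonalAddressManager:
    def __init__(self, address: str):
        self._address = address
        self._authorized = set()  # vendor ids the user has approved

    def grant(self, vendor_id: str) -> None:
        """User authorizes a vendor to read the current address."""
        self._authorized.add(vendor_id)

    def revoke(self, vendor_id: str) -> None:
        """User withdraws a vendor's access at any time."""
        self._authorized.discard(vendor_id)

    def update_address(self, new_address: str) -> None:
        """One change here reaches every authorized vendor's next read."""
        self._address = new_address

    def get_address(self, vendor_id: str) -> str:
        """Vendors fetch the address at send time, so it is never stale."""
        if vendor_id not in self._authorized:
            raise PermissionError(f"{vendor_id} is not authorized")
        return self._address
```

The design point is the pull model: because vendors read through the service rather than storing a copy, a single `update_address` call replaces the whole change-of-address notification problem.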

I see the job of VRM as working through these scenarios from the user’s perspective and ensuring the development of enough standards and technology for a complete implementation.

In any case, I’m looking forward to seeing Phil and Bart at VRM2008. There’s plenty of room to continue this conversation. Join us if you can; it should be fun. =)

Posted in ProjectVRM, Vendor Relationship Management

Majority of Americans dislike unauthorized use of behavioral data

From Yahoo News:

Majority Uncomfortable with Websites Customizing Content Based on Visitors’ Personal Profiles

Level of Comfort Increases When Privacy Safeguards Introduced

ROCHESTER, N.Y.–(BUSINESS WIRE)–A majority of U.S. adults are skeptical about the practice of websites using information about a person’s online activity to customize website content. However, after being introduced to four potential recommendations for improving websites’ privacy and security policies, U.S. adults become somewhat more comfortable with websites’ use of personal information.

Good stuff, although one should read closely to understand exactly what users dislike. Customization isn’t the problem… it’s the unauthorized invasion of privacy. The questions asked by Harris were rather leading. It would be interesting to see what people would say to “If asked, would you allow a search engine to provide enhanced results based on your behavior?” My understanding is that most people do opt in to the advanced features of Google Desktop, which asks essentially the same question at install time. People don’t like surreptitious activities, but if you ask up front, it’s much easier for folks to say yes.

Posted in Identity, ProjectVRM, Vendor Relationship Management

Dataportability podcast interview

Here’s yours truly with Trent Adams and Steve Greenberg of Dataportability, talking about VRM. Also in the podcast: dataportability news and Kaliya Hamlin on the Data Sharing Summit.

Posted in Vendor Relationship Management

BT busted for unauthorized tracking of user activity

The title says it all, as reported by the Guardian:

BT admits tracking 18,000 users with Phorm systems in 2006

Bummer. I kinda like BT.

Posted in Identity