The Killer App Proceeds From the User

Alex Iskold of Blue Organizer asks “What is the Killer App?” for the Semantic Web in an article that nicely condenses the current best of class in the major contending promises of what Tim Berners-Lee has recently dubbed the Giant Global Graph:

  • Natural Language Understanding
    • No longer a need for cryptic “Googlese” to get the computer to give you what you want.
  • The Genie in the Bottle
    • The magically perfect assistant who can answer any question or satisfy any need you might have.
  • Semantic Knowledge Bases
    • Structured databases that have deep understanding of the meaning behind the data, rather than just the characters and numbers used to represent the data. Think Freebase and Twine.
  • Semantic Search
    • Natural language understanding driving search results, so you can ask questions like “What clubs does Tiger use?” rather than Googlese keyword queries. Hakia, Powerset, and Cognition are all in this space.
  • Social Graph
  • Shortcuts

It’s a nice walk through the space, and it’s particularly interesting how Alex responds to the current state of the art in each. I’ll summarize here so I can respond in turn (check out the full article for Alex’s actual statements):

  • Natural Language Understanding
    • Huge, hairy problem. No solution in sight.
  • The Genie in the Bottle
    • Even harder. Needs magic that isn’t even conceptually well understood.
  • Semantic Knowledge Bases
    • More detailed data is good, but does it really help users? Not emotionally catalytic enough for people to actually get excited and jump on board.
  • Semantic Search
    • Doesn’t look like the killer app so far, because none of the “semantic” approaches seem to improve much on Google.
  • Social Graph
    • This is just a subset of the semantic web and therefore not its killer app.
  • Shortcuts
    • An up-and-coming category, these embedded shortcuts displace search as the killer navigation online. However, the category is still young, misunderstood, and also lacks emotional oomph.

First, the most intriguing item is that Alex is candid enough to be critical of the category in which he places his own company’s flagship product. Perhaps AdaptiveBlue has turned the corner on its conceptualization of the market and is rapidly, fiercely developing its next innovation, its next rev, the thing that just might become the killer app of the Semantic Web. That makes me curious, indeed.

Second, I like the breakdown, but naturally have some slightly different opinions. SwitchBook is still largely in stealth mode–we have yet to publish much about what we are doing, even though we are relatively open in face-to-face meetings. However, from my posts here you can guess that it involves search, user-centrism, and particularly the principles underlying VRM.

So let’s look at Alex’s breakdown again:

Natural Language Understanding

Definitely a huge problem. Not only do you have to deal with the incredible elasticity of language, once you’ve mapped the natural language into some sort of internal representation, you still have to figure out what the heck you are going to do with it.

In other words, “understanding” is context-specific, not just in terms of words having different meanings in different places–Jaguar could mean a car, a cat, or an operating system depending on whose brochure or website you find it on–but also in terms of what you (as a system, as a service) are going to do with that understanding.

  • Are you going to return web pages that contain Jaguar with the same meaning?
  • Are you going to offer alternatives to the term Jaguar, like a thesaurus?
  • Are you going to translate Jaguar into other languages?
  • Are you going to sell Jaguar compatible products?
  • Are you going to reason over the threats and opportunities of Jaguars?

All of these require fundamentally different internal representations of the “understanding” of the natural language from the user.

As Jaron Lanier will tell you, language is an interface by which people remotely control the world outside their minds. We use it to communicate with others to get what we want and to understand how to respond to others (which is basically figuring out how to eventually get others to give us what we want). As such, its primary use, its raison d’être, is to influence the world around us. So, what we really want isn’t to understand the language, but to understand (1) what a speaker wants and (2) how to influence the world.

It turns out, people are incredibly adaptive at both of these tasks. Language is just one of the interfaces we use and we are capable of learning entirely new tools quickly when they demonstrate a more efficient, more effective way to get what we want. The humble spreadsheet is one of my favorite examples of this. I believe that more people “program” in MS Excel than in any classic programming language: we write mini-programs using functions like sum() and average() and put data in and look at the results. Who would’ve thought that entry-level clerical workers, accountants, and soccer moms around the world would’ve learned to program? And yet, they do. In my opinion, Excel is probably the most widely used programming environment in the corporate world.

Could you imagine trying to replace that with Natural Language? I can only imagine that a natural language version of Excel would be more convoluted and harder to use, but maybe that’s just because I lack imagination.

The Genie in the Bottle

This is more interesting. I agree that this goal is arbitrarily far away–no one will crack this nut entirely until we have both omniscience and omnipotence programmed into our software (and that is essentially never). However, by understanding clearly exactly what the Genie would do if he or she could, then you have a starting point for building innovative solutions.

Consider the development of online virtual worlds. Many people also said that the fictional Star Trek holodeck is arbitrarily far into the future, that, like the Genie, it requires so much advanced technology as to effectively be magic. And yet, Janet Murray’s Hamlet on the Holodeck gave us a realistic assessment of the current state of the art and how we might eventually get there. Sure, we are still arbitrarily far away from the uber virtual experience of the Holodeck. But Second Life, World of Warcraft, and Grand Theft Auto have all broken incredible ground in making a simpler, more feasible version of that experience available today to tens of millions of people.

So, what we can learn from the Genie is how to think about the “perfect” Search service. Imagine for a moment the absolutely perfect search service. Think bigger than natural language search. Think bigger than talking to your computer and getting what you want.

The perfect Search is when you only just barely have to indicate your intention and your search result appears. Somehow, magically, the system just knows what you want, and when you are ready to actually act on that desire, the system has already brought your desire to you. No more running to the vending machine to get a soda from an arbitrarily limited selection in fixed volume and vendor-mandated packaging. The system knows you are getting thirsty, knows what you want (not just from historical information but even from sensing your current blood sugar and taste craving) and how you want it, and the moment you commit to getting that soda, it appears at your desk–perhaps even without you knowing exactly which soda you wanted today. All of this done discreetly, unobtrusively, and privately, so neighbors or co-workers don’t see what you’d rather they didn’t. The action, ultimately, is always driven by your committed intention. Not your attention, not some statistically predicted estimate of your desire, but your actual, expressed commitment to realize a particular desire. Express an intention and, magically, it is fulfilled.

That’s the Genie.

While it isn’t yet available, bits and pieces of it are becoming available, just as online text MUDs and World of Warcraft are bits and pieces of 30 years of work towards the ultimate virtual reality. By placing the committed intention of the user at the core of value creation, at the heart of the system design, I believe the Genie provides an almost ideal model for conceptualizing the Holy Grail of Search.

Semantic Knowledge Bases

Essentially, I agree with Alex: this is a technology looking for a problem. “Better” data and more “powerful” ways to interact with and reason over that data should provide better results and are, therefore, a Good thing–assuming there are no other costs. Unfortunately, the Semantic Web has significant transitional and ongoing costs: turning the free-form, anyone-can-post-anything World Wide Web into a system where participating as a first-class citizen requires using RDF or microformats or some other arcane technology to transform formerly arbitrary scribblings–and marketing and online stores and customer service and media outlets and whatever–into semantically structured information. It requires an imposition of structure that is inherently limiting and counter to the user-centric architecture of the open web.

Nobody wants to pay that cost unless the immediate value to them is obviously much greater. And so far, the value is uncertain and far into the future.
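That transitional cost is easiest to see side by side. Here is a minimal sketch–the vocabulary terms (“ex:manufacturer” and friends) are invented for illustration, not drawn from any real ontology–of the same fact expressed as free-form text and as the explicit subject–predicate–object triples the Semantic Web asks publishers to produce:

```python
# The same fact, free-form vs. semantically structured. The vocabulary
# terms ("ex:", "ex:manufacturer", etc.) are invented for illustration.
free_form = "The XK150 is a lovely classic Jaguar – about $80k at auction."

# Semantic-web style: every assertion becomes an explicit
# (subject, predicate, object) triple in an agreed vocabulary.
triples = [
    ("ex:XK150", "rdf:type",        "ex:Automobile"),
    ("ex:XK150", "ex:manufacturer", "ex:Jaguar"),
    ("ex:XK150", "ex:auctionPrice", "80000"),
]

# Structured data answers precise queries the prose can't...
cars = [s for (s, p, o) in triples
        if p == "rdf:type" and o == "ex:Automobile"]
# ...but only if every publisher does this markup work up front.
```

The triples answer precise queries that the prose cannot, but only if every publisher adopts the vocabulary and does the markup work up front–which is exactly the cost in question.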

Semantic Search

Alex suggests that because none of the semantic search companies is better than Google, semantic search isn’t the killer app. Well, Google uses a lot of semantics in its Search; most users just don’t know it. Google has used Latent Semantic Indexing for years, and AdSense is all about wicked-smart semantic analysis of web page content for matching ads from the Google ad universe. In fact, one of the more interesting semantic tricks Google does is one you can see for yourself: try typing “jaguar” (or some other ambiguous term) into Google’s query box.

You’ll find that alternative meanings of “jaguar” all show up in the early results: Jaguar the car, jaguar the cat, even the Jaguar quantum chemistry package from Schrodinger, which otherwise has no business being in the top ten at Google. Google does this because it knows that, from the limited query box, it can’t figure out which Jaguar you really mean. But it also knows that users will filter out the misses and get excited about the hits. They design for the “Ah-hah” moment. As long as one in ten (or so) results matches the user’s intended meaning of Jaguar, Google gets credit for finding the “right” jaguar. Brilliant.
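That design can be approximated with simple result diversification: interleave the top-ranked hits for each sense of an ambiguous query so every meaning surfaces early. A hypothetical sketch (the function and the example URLs are invented for illustration, not Google’s actual algorithm):

```python
from itertools import chain, zip_longest

def diversify(results_by_sense):
    """Interleave per-sense rankings, best-first, so each meaning of an
    ambiguous query appears before any meaning's second result."""
    rounds = zip_longest(*results_by_sense.values())
    return [r for r in chain.from_iterable(rounds) if r is not None]

mixed = diversify({
    "car":      ["jaguar.com", "jaguarusa.com"],
    "cat":      ["en.wikipedia.org/wiki/Jaguar"],
    "software": ["schrodinger.com/jaguar"],
})
```

Whichever Jaguar the user meant, a hit lands in the first handful of results–the “Ah-hah” moment by construction.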

So, I argue that any search that isn’t semantic is a dinosaur waiting for the undertaker. Maybe it isn’t a killer app as a distinct service, but it is already an integral part of the #1 killer app of the Web, Search.

Social Graph

On this one, Alex fails to explain clearly enough why he doesn’t like it. Any killer app is going to be a “subset” of the entire market. Email isn’t the totality of the Internet, but it is the killer app that first broke down the isolated IT networks and marched like Sherman all the way through to the consumer market to give the sexier World Wide Web a fighting chance at establishing the Internet as much a fundamental part of the civilized world as electricity, running water, and paved roads.

Actually, I think the social graph might be the killer app of the Semantic Web. It doesn’t deliver the full value of the Semantic Web, but it provides such immediate, obvious value for so many people that once the privacy controls are worked out, many, many people are going to be surfing the Semantic Web without knowing it as they seamlessly mingle across their social internetwork through the former silos of Facebook, MySpace, Plaxo, and others. If it can be a killer app without people giving it credit, then the Social Graph is definitely a contender.

Shortcuts

This is absolutely illuminating. I like AdaptiveBlue’s product a lot, and others in this category have potential. However, I usually find the disjoint interactions confusing. Shortcuts, by nature, interfere with the “normal” web experience and are inherently intrusive. I happen to have Snap installed on my machine, and I’m still surprised and often annoyed when it pops up “previews” of links I’m doodling my cursor over.

I do that… I doodle-mouse and doodle-click. I have the same problem at the New York Times’ website, actually. They let you look up the meaning of any word on a page just by double-clicking it. Problem is, I doodle-click meaninglessly, a sort of virtual twiddling of my thumbs as I browse. And–whoops–I just triggered a new page download I don’t really want. It is a mess.

So, shortcuts have a long way to go to be less intrusive and to find the right “intuitive” connection with the user. Ultimately, I am a huge fan of augmenting the traditional “browse”-based experience of the web, rather than replacing it wholesale. People like the web. They like their services. They like the freedom of going anywhere that supports http and html. And yet, many of those websites don’t have the technical wherewithal to get “semantic”.

BlueOrganizer does a nice job, for example, of connecting IMDB listings of movies with NetFlix so it is easy for you to go from the Internet’s unofficial authority on movies to the leading movie-on-demand service. All without NetFlix or IMDB needing to do anything. That sort of user centrism is critical to the next evolution of the web and it’s the semantics of what is already on the web pages that make that possible. Shortcuts are just one effort to do something with that semantic data. Perhaps as they grow up, they will become more useful to more people.

Closing

Again, despite my initial hopes, I have written WAY too much, which is a pathological flaw I seem to have. Thanks for hanging in there.

My point in responding to Alex’s post is simply this: any killer app needs to start and end with the User. This is so true it has become a software development truism that everybody knows is important, but few know how to translate into their feature development schedule. Technology alone–like Natural Language Understanding–will never be a killer app. Only when someone figures out how to make it electric for users–exciting and immediate and so obviously valuable–can any innovation become a killer app.

With all due respect to the folks who love this term, the Semantic Web is one of those bundling concepts that is about as useful as the term “Electric Appliance.” It is useful in describing a category of product, but completely useless in helping retailers make decisions about what products people are going to buy this season. Until companies move beyond that catch-all descriptor into product discussions that connect with what users already understand and want, none of the “semantic” offerings can possibly break through to being a true killer app.

Posted in Vendor Relationship Management | Comments Off on The Killer App Proceeds From the User

The VRM Vector

The core of VRM, Vendor Relationship Management, is the vector of activity.

Remember vectors? Vectors are multi-dimensional; scalars are one-dimensional. In high school they explained it by saying velocity is a vector: it contains both the direction of travel and the magnitude. Speed, on the other hand, is a scalar. It has only the magnitude; direction isn’t included.

VRM isn’t just about magnitude, it is also about direction.
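The distinction fits in a few lines of code:

```python
import math

# Velocity is a vector: direction and magnitude together.
velocity = (3.0, 4.0)          # 3 units/s east, 4 units/s north
# Speed is the scalar magnitude of that vector: 5.0, direction discarded.
speed = math.hypot(*velocity)
# A completely different direction can have the very same speed:
same_speed = math.hypot(-3.0, -4.0)
```

Two travelers with identical speed can be headed in opposite directions–which is the whole point about VRM below.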

Bart Stevens, a new contributor to the VRM conversation, asked this in his post VRM, APML, and Semantics:

I have been doing quite a lot of reading, like JP Rangaswami’s post on data portability among various sites.

1. My remark had to do with creating some sort of bank for your data. Maybe owned by the community themselves.

Secondly, I have been following the APML/data portability discussion of Chris Saad at Google Groups

2. My remark is that this is moving in the direction of VRM, we should become an active member in this group

Thirdly, I read this interesting post from Yihong Ding.

3. My remark, should we look into semantics as part of the VRM standardization exercise?

The short answer is clearly “Yes”. Semantics matter, as does the work of the data portability group. Having a better understanding of all the data on the Giant Global Graph, as Sir Tim humorously calls it, is A Good Thing™. It frees the user from Vendor data silos and provides a more comprehensive, understandable foundation for creating user-centric value. That is, it might let you do cool stuff for the user.

The more complex answer suggests a grain of salt is in order, but with appropriate care, all of the above can contribute to a VRM future.

Both APML and the GGG, formerly known as the semantic web, suffer from what I consider a misdirection in attention, despite creating real value in the world. That is, they are doing great work, but at a level that is necessarily abstracted away from where the user gets value.

Think of it like this. Consider all the advances that made biochemistry the amazing science it is today. Electrons. Protons. Molecules. Chemical reactions. Organic chemistry. Enzymes. DNA. Biological pathways. Literally dozens and dozens of Nobel prizes underpin the concrete understanding of our world that lets us apply modern biochemistry. And that modern biochemistry is solving many of the world’s greatest problems. Absolutely brilliant, powerful, important work.

And yet, it won’t tell you a thing about what makes a person fall in love.

Or what color sweaters are going to sell well this season.

Or what the person entering a search query is really looking for.

The semantic web is based on a model that once all the data is properly interrelated, we can do smart stuff with it. That’s certainly true. That’s essentially what forensics departments do. They analyze all the data available to produce clues that can hopefully solve a crime and convict the guilty. Automating and extending that Giant Global Graph would allow an incredible level of forensic analysis to attempt to figure out how companies can create value. Such a graph would nicely align with the CRM and MIS systems of Fortune 500 companies and direct marketers and charity fundraising campaigns, right alongside the Department of Commerce, the IRS, and the CIA. There’s no doubt in my mind that the graph can be used to create value in new and amazing ways for those entities with the wherewithal to understand it and build systems to leverage it.

But it isn’t about helping individuals.

APML on the other hand has greater proximity to users, which is good. However, it still requires forensics to tease out the value. APML is a storage format for keeping track of clickstream, lifestream, and other attention data. This data is created on the user’s machine at the same time we leave our data trails around the web. Since it collects all of a user’s activity, no matter where they go, it has a much greater reach than even the new Google/Doubleclick database of user activity. And because the user owns this file, the user has the power to control how that data is used by vendors who might want to use it. On the whole, this is excellent. A classic Personal Data Store approach (minus the user-centric Identity access control, but that’s a different issue).

But what APML doesn’t do is explain what real-world value is being created for the user. Like the GGG, somebody somewhere has to do the forensics to make sense of it. Is APML just a more thorough version of the same data that Google/Doubleclick already tracks? If so, what good is that to the user? Will it mean they get more appropriate spam? Will it improve search results? Will it improve the ad banners that show up in the Doubleclick ad network? In other words, while APML certainly starts near the user, it isn’t clear if the direction of value is truly towards the user. I can clearly see how it helps advertisers and investigators, but I have yet to see a credible, compelling case for user value.
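For concreteness, APML is an XML format. Here is a rough sketch of the kind of attention profile it carries–the element and attribute names are my approximation of the 0.6 draft, so treat them as illustrative rather than normative:

```python
import xml.etree.ElementTree as ET

# Sketch of an APML-style attention profile. Element and attribute names
# approximate the APML 0.6 draft and are illustrative, not normative.
apml = ET.Element("APML", version="0.6")
body = ET.SubElement(apml, "Body", defaultprofile="Everything")
profile = ET.SubElement(body, "Profile", name="Everything")
implicit = ET.SubElement(profile, "ImplicitData")
concepts = ET.SubElement(implicit, "Concepts")

# Each Concept pairs a topic with a weight inferred from the user's
# clickstream – the raw material somebody still has to do "forensics" on.
for key, value in [("vrm", "0.92"), ("semantic web", "0.71")]:
    ET.SubElement(concepts, "Concept", key=key, value=value)

document = ET.tostring(apml, encoding="unicode")
```

Note what the file contains: weighted topics, not intentions. It records what caught my attention, leaving someone else to infer what I actually want.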

VRM, in contrast, is about starting with the user and creating value on their behalf, first. We do that specifically by focusing on commercial transactions and by enabling mutually beneficial relationships. It isn’t about moving the power from Vendors to Individuals, it is about creating new efficiencies and new value points across the ecosystem and marketplace that improve the situation for everyone.

With VRM, the value begins with the individual. The rest is implementation.

By focusing directly on the point of value for the user, I believe we can create more value, more quickly than trying a forensics approach on deeper, larger data sets. The user is the natural point of integration for any number of services. Even many in the data portability group have shifted their language in this area. Initially Brad Fitzpatrick catalyzed the Social Network Portability movement by imagining a Global Social Graph. But many have come to realize that it isn’t the abstract, six-degrees-linking-everyone Global Graph that matters, it’s the slice of that graph that defines our own, individual social connections. What I care about is my social graph, my friends, my coworkers. That’s where the value is created.

Similarly, consider the user-centric answers to the problems above. Instead of looking at the biochemistry or forensic data,

  • Try looking at people falling in love if you want to learn about what makes a person fall in love.
  • Try looking at what color sweaters sold well last year and how other color trends are changing if you want to predict what color sweaters are going to sell well this season.
  • Try looking where other people with similar searches actually visited if you want to find out what the person making that search query is really looking for.

Start with the user. Identify the real value being created and build out from there.

The net results of the GGG and APML are definitely useful in realizing VRM. In fact, the data portability movement is huge. There are sea changes that must occur to fully realize the power of VRM, and I think the Scoble Facebook fiasco and the subsequent joining of the Data Portability movement by representatives from Google, Facebook, and Plaxo make January 2008 a watershed month for opening up the web and giving users more control over their own data.

What this means for VRM standards is that there is a lot of work going on in the real world that is all headed in the same direction, and everyone can leverage the accomplishments of the other teams. VRM isn’t going to build everything, we’re just going to put a stake in the ground on behalf of the user and start figuring out how best to create value, for users, in today’s zero-distance world.

As Doc said at Le Web3, the user is the platform of the future. VRM is about figuring out what that means, not just conceptually, but in concrete pragmatic terms so that real companies can build real technologies and services that enable new, more efficient, more flexible, and more powerful relationships between buyers and sellers. And all of that starts with the user.

The VRM Vector starts with the user and points straight towards real value.

Posted in Personal Data Store, ProjectVRM, Vendor Relationship Management | 1 Comment

iPhone, David Lynch, my little LG, and Users of the Future

Happy 2008 everyone.

A quick thanks to Damien Mulley for this lovely bit on legendary director David Lynch hating on the iPhone:


I just got a brand new LG from Verizon with their VCast service and I have to say, while its full keyboard is a treat and I’ve had fun playing with the apps I can get on the bigger screen, the on-demand video sucks, especially compared to the beautiful iPhone. Maybe Lynch is just upset about the writers’ strike…

Clearly, it is just a matter of time until the user is fully in charge… everywhere.

It’s not so much that information wants to be free. It’s that convenience is one of the most compelling drives in the human condition. When you give people a way to get what they want, more conveniently than they could get it before, you make money and make people happy. That’s why 7-11s exist. That’s why there’s a Starbucks on every corner. And that’s why Tesco’s Fresh & Easy invasion of the U.S. is likely to be a rampaging success. Those brick & mortar stores aren’t typically thought of in the context of the user-centric movement, VRM, or Digital Rights Management/piracy contexts, but they should be, because at the end of the day, the companies that win are the ones that make it easier, faster, and more convenient for people to get what they want. Not what Vendors want them to have, but what they really want, even–or perhaps especially–when Vendors can’t really know what the market wants until they get out there and start selling stuff.

I love Lynch. He’s a fabulous artist in his medium. But I love that iPhone too. And isn’t it ironic to watch that clip? Power to the users.

Posted in Vendor Relationship Management | Comments Off on iPhone, David Lynch, my little LG, and Users of the Future

Intro video to OpenID

Here’s a nice, clear introduction to OpenID that Phil Windley blogged recently. If you are curious about all this user-centric identity stuff or have taken on the role of explaining OpenID to others, I highly recommend it.

Posted in Identity | Comments Off on Intro video to OpenID

The user is the platform of the future… Doc Searls @ LeWeb3

I love Doc Searls. Few people inspire the future as well as Doc, especially when he is on a tear. Here’s a delightful short (<5 min) romp in an interview at LeWeb3 in Paris about the future of the web and the critical importance of making user-centric open systems the core of a ubiquitously connected future. (Think VRM and The User As the Point of Integration)

A few gems:

What is meta about life transcends what is meta about electronics.

We have to look to solve problems for ourselves.

What really matters is our independence, our freedom, our ability to act on our own.

Enjoy!

Posted in Identity, Personal Data Store, Vendor Relationship Management | 2 Comments

The hard stuff – VRM Use Cases

Last week was the Internet Identity Workshop in Mountain View, California, which is, without reservation, the most productive technical gathering I know of. An “unconference” (facilitated by the incomparable Kaliya Hamlin), it dumps the talking heads in favor of interactive discussions so that folks can get real work done. The culture and focus enable a truly impressive amount of collaboration and co-creation as people dig in and work on the hard stuff of Internet Identity.

And there is a TON of hard stuff. Just ask Microsoft and Kim Cameron, whose CardSpace is making up for the failures of Passport. Or anyone who thought PKI (public key infrastructure) would solve the problems of Internet Identity. Or David Recordon and the folks of OpenID, who brilliantly solved the challenge of user-centric Single Sign-On only to find it was just the first of many challenges of Identity, then answered in part with OpenID 2.0 and OAuth, and continue to answer collaboratively with the rest of the IIW community.

One of the hardest problems discussed at IIW is how we apply the open approach of the Internet to traditional buyer/seller relationships. When customers can come from anywhere and leave at any time, the silo-based world of proprietary lock-in is rapidly becoming outdated. It is not just unsavory for customers, it is actually damaging to vendors who doggedly defend their CRM systems and industrial era mindset. Fixing this problem is what VRM is all about.

In a planning workshop before IIW, a few of the early contributors to VRM met and started to map out the simplest use case we could think of: changing your primary postal address. We’ve all had to do it and it rarely goes smoothly. In the US, it often starts with a Change of Address card sent to the USPS, plus updates to magazine subscriptions, credit card companies, the IRS and the Department of Motor Vehicles, utility companies, ad nauseam… and eventually emails or letters to those friends we want to inform. It is structurally ideal for the user as the point of integration, since ultimately only the user knows for sure when and where they are moving.

What we found was that even this seemingly “simple” use case required a lot of baselining, normalization of language, and formal abstraction to fully clarify the details of what should happen when designed for the users’ needs rather than the vendors’.

In the end (with about 80% completion) it boiled down to a three-step abstract use case narrative, five requirements, and five supporting use cases (six with the base case) for two actors, the AddressChanger and the AddressUser:

Use Case Narrative

  1. AddressChanger decides to move
  2. AddressChanger expresses new address to system (optionally including scheduling information)
  3. AddressUsers get the new address when they need it

Requirements

  1. Address stored independently of any particular vendor
  2. Owner of address can choose who stores canonical source (self-storage ok)
  3. Data should be in an open format and portable without data or service loss
  4. Data transfer and use is always under user control
  5. Vendors can discover the appropriate service for each user

Supporting Use Cases

  1. AddressChanger Changes Address (base case)
  2. AddressChanger Manages Address Holder Permissions (and data subsets)
  3. AddressChanger Accesses Audit Report
  4. AddressChanger Reviews Address
  5. AddressUser Gets Current Address (pull)
  6. AddressUser Subscribes to Address Changes (push)

This level of definition specifically leaves the design and implementation details up to subsequent engineering. The first step for VRM is to formally define the requirements of the system in the individual’s terms. Once we agree on that, we can move to specifics of how the requirements can be met. For instance, in the above definition, the user may or may not directly interface with an “Address Service”. The expression of a new address and the authorization management could all–theoretically–happen at standards-compliant vendor websites (which are, in effect, acting as the Address Service). For example, when I tell Amazon I have a new address, they could automagically update the cloud so that other authorized vendors get that address, and those “authorized” vendors could have been set up at the same time I signed up for their service. Use cases 5 and 6 are alternative design choices, but the consensus was that a standard system should let users make that choice rather than restricting it at this stage.
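To make the pull/push alternatives of use cases 5 and 6 concrete, here is a toy sketch of an Address Service. The class and method names are invented for illustration–this is not part of the IIW output, just one way the use model could shake out:

```python
class AddressService:
    """Toy model of the canonical address store (requirement 2)."""

    def __init__(self, owner):
        self.owner = owner
        self._address = None
        self._authorized = set()   # requirement 4: user controls all access
        self._subscribers = []

    def authorize(self, vendor):
        """AddressChanger manages holder permissions (supporting use case 2)."""
        self._authorized.add(vendor)

    def change_address(self, new_address):
        """Base case: AddressChanger expresses a new address."""
        self._address = new_address
        for vendor, callback in self._subscribers:
            if vendor in self._authorized:
                callback(new_address)          # push (use case 6)

    def current_address(self, vendor):
        """AddressUser gets the current address on demand (pull, use case 5)."""
        if vendor not in self._authorized:
            raise PermissionError(f"{vendor} not authorized by {self.owner}")
        return self._address

    def subscribe(self, vendor, callback):
        """AddressUser subscribes to future changes (push, use case 6)."""
        self._subscribers.append((vendor, callback))


# One AddressChanger moves; one authorized AddressUser ("BA") is notified.
svc = AddressService("alice")
svc.authorize("BA")
updates = []
svc.subscribe("BA", updates.append)
svc.change_address("1 New Street, Anytown")
```

Note that both delivery styles coexist behind the same permission check, which is exactly the choice the consensus wanted to leave to the user.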

As we make design choices, we can clarify the challenges and additional requirements those design choices imply. Those design choices will in turn suggest further use cases, which, in the case of VRM, can be considered for further development and standardization if merited. For example, Amazon currently lets me maintain any number of addresses of record… how would we update the Use Model for the Address Service to allow multiple addresses, including the additional authorization complexities? Plaxo, in contrast, allows two addresses, a personal and a business address, which syncs nicely with their permissions framework for who gets updates to which address. What do these usage situations imply for the entire use model? How should/might we expand the use model to standardize how this advanced functionality is supported in a cross-platform user-centric way?

We then took the simple use case as outlined above and worked through a Gap Analysis, led by Brett McDowell with Paul Madsen, Paul Trevithick, Andy Dale, Doc Searls, and Drummond Reed contributing, among others. That conversation morphed from the AddressChanger use case to the AddressUser to map out how current technologies implement this use case, including any overlap and missing pieces. Here’s what we came up with:
VRM Address Change Gap Analysis Diagram

It is more than a bit technical, and without a good discussion it is hard to understand the details (I will point out that the BA in the cloud is British Airways, one of many AddressUsers in the system). Yet this is the magic of IIW: this is the hard stuff, collaboratively worked out by folks who are intimately involved with all of the competing technologies… because at the end of the day, without interoperability, Identity (and VRM) are just more proprietary data services.

Here’s Paul Madsen’s post-IIW reflection and continuation on the Address Change use case. Good stuff. You can clearly see how the “simple” use case outlined above starts to address sophisticated real-world situations. You can also see how this is just the beginning of a conversation that both explores and defines some critical uses of both Identity and VRM. Paul outlined specifically how the WSF framework could implement the use case with particular design choices along the way, such as when & where the user interfaces with the Address Service. The same scenario could also be implemented with Liberty’s framework or OpenID+OAuth. And as Paul states, just about any aspect of Identity could be managed this way. And for some use cases, we need even further specifications, such as how we manage Personal Health Care records or how a Personal RFP is structured, propagated, and responded to.

I’m looking forward to moving this forward and transforming more of these “simple” use cases into consensus requirements for reinventing the modern marketplace, or as we like to call it, VRM.

Bonus link: a great post by Joel Spolsky of Joel on Software on the value of solving the hard problems.

Posted in ProjectVRM, Vendor Relationship Management | Tagged , , , , , , | 2 Comments

APML, OpenAuth, OpenSocial, VRM, Attention and Intention

Over at On the Pod, Duncan Riley hosts a rambling but intriguing chat about APML and its relationship with the latest Social Network developments:

Ashley Angell, Chris Saad (Faraday Media) and Jon Cianciullo (Cluztr) join Duncan Riley to discuss APML, OpenAuth, OpenSocial and how we are moving towards open data sharing.

Even with the audio problems, it is worth listening to the whole bit. There is a great, concise analysis of the real value and limitations of OpenSocial, but my ears perked up especially when Chris endorses VRM and, more importantly, connects the success of APML to its role in providing user control over Attention data.

I have said several times that Attention efforts are missing the big win, digerati conversations about the “Attention Economy” notwithstanding. Getting your attention without providing value is annoying. We will soon see the impact of that as people begin to react to Facebook’s latest innovations in advertising.

It sounds redundant to say it, but the real goal for each of us is to realize our Intentions. That’s what “intention” means. We put our energy and will into realizing our intentions. Attention just happens to be how we filter the signal from the noise. It does not inherently translate to value for anyone. In fact, distracting your attention is a key skill of politicians, magicians, and con-men everywhere. So, it isn’t about “Attention.” It’s about “Intention.”

The opportunity, then, for service providers and software vendors is to provide tools for individuals to manage their Intention. Solve that while facilitating vendors’ goals–because many, but not all, Intention activities are directly monetizable in a transaction–and you have a service or product that can generate serious value for everyone involved.

That’s the promise of VRM.

As I’ve mentioned before, VRM–or Vendor Relationship Management–is at the core of SwitchBook’s approach to tools for Complex Search. Our involvement in that effort has transformed how we think about Search, advertising, and online marketplaces.

VRM’s mandate is straightforward: Enable buyers and sellers to build mutually beneficial relationships. The vast majority of online buyer/seller relationships include Complex Searches prior to a transaction, and the bigger the transaction amount, the more effort goes into the Search, and therefore the more important and useful the tools provided to individuals become. We see a direct link between providing people control over their online Searches and enabling them to have richer, more rewarding relationships with vendors. To us that means simplifying how people realize their Intentions online by connecting them with the right resources more efficiently and more credibly.

What is intriguing about the podcast is the endorsement of VRM and the related commitment to empowering user choice through APML tools. Whatever words we use to describe it–“Attention” or “Intention”–increased user control is definitely part of the solution.

The more Attention is shaped to be responsive to user choice rather than simply a smart database of people’s online behavior, and the more it empowers both explicit expressions of interest and the implicit meaning we can glean from people’s clickstreams, the closer Attention comes to Intention.

To be fair, much of what we are doing with Intention incorporates a lot of Attention data. So, while there are still some key distinctions, it is good to see APML folks talking about VRM in a way that suggests a fruitful convergence is not so far away.

Posted in ProjectVRM, Vendor Relationship Management | Comments Off on APML, OpenAuth, OpenSocial, VRM, Attention and Intention

ICANN punts… and scores!

Responding to ICANN‘s call for comments, I submitted a comment asking ICANN to reject the proposed changes to WHOIS. And they listened.

Actually, I doubt that it was my comment that made the difference, but kudos to everyone who took the time to give their input, even those who disagreed with me. (Note that the link only includes those who commented through the online system rather than by email or other means.) Internet governance doesn’t always work well–and maybe this is an odd example of it working–but at least this time, public input seemed to prevent a change I definitely thought would be for the worse.

Tip of the hat to Doc Searls for the initial call to arms.

Posted in Uncategorized | Comments Off on ICANN punts… and scores!

The future

Future

But the past was much too cramped!

That about sums it up sometimes, doesn’t it?

Posted in Uncategorized | Comments Off on The future

Open comment to ICANN on WhoIs changes

If you haven’t already, you might consider reviewing the currently proposed ICANN changes to Whois and sending in your comments. Mine follow.

In short, the proposed changes are more than morally questionable: they undermine the core infrastructure that keeps the Internet working.

(Many thanks to Doc Searls for pointing me to this issue.)

Dear ICANN,

I am writing to oppose the proposed changes to WhoIs.

ICANN has always been a technically driven overseer of the DNS and IP infrastructure, shrewdly navigating sometimes contentious waters with reliable continuation of Internet services as its guiding principle. If an action might (or would) reduce the stability of core Internet services such as DNS or the services relying on DNS, such as email and the World Wide Web, then that action was rejected until such stability could be assured. This principle is the reason ICANN deserves its quasi-independent regulator status; decisions made contrary to this interest negate ICANN’s moral authority to administer Internet resources on behalf of the general welfare.

For example, by strictly focusing on this guiding principle, ICANN has managed to isolate the legal issues of trademark disputes from imprudent termination or transfer of domain control. Similarly, ICANN maintains rigorous policies and procedures that all domain registrars must follow at the termination of a registrant’s contract, specifically designed to assure that the current domain owner has every reasonable opportunity to assert their control and maintain a working domain that links to their Internet service.

The move to a limited-disclosure official point of contact is a move in the right direction, but a closer reading of the proposed recommendation suggests it is flawed in its details. The point of WhoIs is to allow for resolution of service quality issues, that is, to allow for a reliable continuation of services. The current recommendation instead creates a route for undesired intervention by interested parties, which can only reduce the quality of services.

Allowing access to unpublished information on the minimal criteria of “reasonable evidence of actionable harm” does nothing to ensure the future stability of Internet services and instead acts as a starting point for several players–whether private or public entities–to begin processes which would seek to interfere with such services. Giving litigants or law enforcement further means to pursue registrants in no way increases the stability of the services offered by the registrant; rather, it increases the likelihood that such services will be–rightly or wrongly–moderated or even terminated. In short, the clear and obvious natural result of the recommendation would be to decrease the stability of Internet services.

Not all services of course, just those that afford intervention because of “reasonable evidence of actionable harm.” However, the judgment of evidence is neither ICANN’s purpose nor its expertise. Most jurisdictions in the world provide appropriate mechanisms for judging evidence against the public welfare. In the United States, that means the courts. Should a private or public entity seek the unpublished information for any registrant, the appropriate route for discovery–assuming the point of contact refuses–is to demonstrate a legally justifiable reason to a judge and thereby secure a subpoena. This process both assures suitable access to otherwise private information /and/ provides appropriate protections against unwarranted searches and seizures. It would be a complete abandonment of its moral authority and a wild assumption of unwarranted power should ICANN seek to enable itself, or its registrars, to act in judgment on evidence of the need for disclosure in the public welfare.

Finally, the potential hope that this system will ultimately make it easier to root out the bad guys fails in the situation where it is most required: the truly bad actors can easily bypass the presentation of their information in the database using any number of shell games, private corporations, and attorneys. By providing streamlined access to unpublished information, ICANN will not be assisting in the prosecution of justice against the worst terrorists and criminals, because such bad actors will avail themselves of one or more of the available workarounds. Instead, ICANN will be assisting public and private entities in the harassment and persecution of domain owners whose interests or activities have become a target of attention, all without suitable due process for those actors to prove in the appropriate venue that such owners should be revealed.

We already see this disparity today, with registrars charging a premium for “anonymous” registrations, which means additional fees for those who want to protect their identity and personal property from would-be attackers. Clearly, sophisticated criminals already avail themselves of these services. Therefore we can reasonably assume that the bulk of the information in Whois belongs not to the world’s most dangerous terrorists, but rather to everyday folks… and in the case of criminals, those small-time operators who don’t have the wherewithal to protect their identity through one or more layers of anonymous services. While the idea of a limited-disclosure official point of contact seems to help with this problem, recommendation 2 proactively provides a loophole for the most tenacious and well-funded attackers to pursue their actions against domain name owners. In the end, this can only destabilize those services which come under attack. It will not improve the services for anyone.

Ultimately, it is beyond the purpose and capability of ICANN and its registrars to make judgment on such cases, and, even more importantly, it is beyond your moral authority to support a scheme of offensive intervention against existing Internet services. Your role is to act steadfastly in protecting the technical infrastructure underlying the functioning Internet. Anything contrary to that can only be considered an abandonment of your very reason to exist.

As such, I implore you in the strongest possible terms to reject the recommended changes and to retain your fundamental focus on assuring the reliable operation of the infrastructure underlying the Internet.

Sent by email October 25, 2007

Posted in Uncategorized | 1 Comment