A world of claims, not facts

On the Social Network Interoperability list, Danny Ayers recently pointed to a great post, “The World is Now Closed” by Dan Brickley, with the following quote:

[From Dan Brickley:] So what am I getting at here? I guess it’s just that we need these big social sites to move away from making teen-talk claims about how the world is – “Sally (now) loves John” – and instead become reflectors for the things people are saying: “Sally announces that she’s in love with John”; “John says that he used to work for Microsoft” versus “John worked for Microsoft 2004-2006”; “Stanford University says Sally was awarded a PhD in 2008”. Today’s young internet users are growing up fast, and the Web around them needs also to mature.

This is fascinating. It betrays an underlying hubris in much of the thinking in both AI and the semantic web. We often imagine that it is somehow possible to map out, understand, or process some sort of “objective” set of facts. Computer Science practically conspires to force this world view on its practitioners. When programming, we not only start with assumptions about data, we must concretize those assumptions so our algorithms have something to transform from input to output. “Fuzzy logic” and neural nets embrace ambiguity, but computer science on the whole lives in a world of clearly defined inputs and outputs. It all but forces one to think in terms of objective data.

But in the real world, nothing is that simple. Was Princess Diana murdered? Is OJ guilty? Is DNA evidence conclusive? These are legal examples, where ambiguity is argued to death in court so contestants can eventually move on with the rest of their lives, but what about love, betrayal, politics, or discrimination? Does he really love her? Did your business partner always plan to stab you in the back or is he actually still acting in what he believes is in the best interest of the company? Were there weapons of mass destruction? Did race or gender influence your hiring decision?

Answers to these kinds of questions can’t be reduced to facts. They can only be reduced to “good enough” approximations of facts.

This is particularly apparent, for example, in Freebase, a socially maintained, structured, “factual” semantic database that came out of Applied Minds and, at least in part, from the brilliant mind of Danny Hillis. Freebase is like Wikipedia on crack. Delightfully ambitious, it has set out to leverage the social editing power of wikis to construct a semantically and computationally accessible knowledge base of everything worth talking about.

If we ignore for a minute that Wikipedia–and all similar social constructs–can never be perfectly accurate and instead accept that they can be exceptionally useful, then we can begin to see the allure of a socially edited and maintained database of facts such that a computer could query or reason over the embedded topics. It’s a great idea, and hopefully it will solve enough of the problem to create real value.

And yet, one can see in its “factual” hubris the beginning of its fundamental limitations. Take, for example, the “type” associated with living people. There is a separate, distinct type for deceased people. There was a fair amount of discussion about this, but apparently rather than allow “people” to be either living or dead, it made more sense to separate the two types. OK. It’s often easy to tell if people are really dead. But what if it isn’t? What if someone, like Steve Fossett, is lost and presumed dead? (That’s my presumption, anyway.) What about Amelia Earhart? What if an individual is brain-dead but still breathing? Do you wait for a definitive statement from a coroner? What if there is no body? The “factual” paradigm requires someone–or the collective someone of social editing–to make the call about whether a person is categorized as living or deceased.

And I have barely scraped the surface on religious “facts”. Both Freebase and Wikipedia (which is often used as a source in Freebase) address this in part by shifting from “fact” mode into contextualized statements or claims. See the Jesus and Mohammad entries in Freebase. Coincidentally, at the time of this writing the Wikipedia entry on Mohammad is locked to editing because of disputes. It is the nature of the most interesting topics to generate disputes, and yet these same disputes prevent us from asserting any sort of singular claim with any honesty.

The solution used in Wikipedia is to state that such-and-such a religion claims certain things about, for instance, Jesus or Mohammad, and to cite a source for those claims (with the editing history implicitly identifying the editor who entered them). It is not yet clear how much of these semantics will be captured in the underlying data structure at Freebase.

Generally, these factual databases and modeling systems (such as certain unified schemas proposed by some proponents of the semantic web) implicitly require someone to distinguish what is fact from what is not, and often do so without clarifying that the asserted “fact” is really a “claim”, although the editing history at least allows you to know who made the claim. The systemic requirement that somebody decides what is “true” is patriarchal, Apollonian, and unrealistic. It enforces a top-down view on the world, even though we know as a matter of practical experience that there are many, many viable, interesting, and rewarding competing world views. The architectural assumptions of Wikipedia are clearly making it difficult to come to terms with appropriate language for presenting “facts” about Mohammad.

Whether or not there is a classic objective reality in the Ayn Rand sense is irrelevant from a systems development perspective. What’s important is that there are numberless different and competing views of the world, stored in people’s heads, in corporate data silos, and soon coordinated in individual personal data stores. No one system can ever assimilate, aggregate, and accommodate all of those distinct datasets into a unified whole. Trying to do so is a fool’s errand, and designing your systems to count on it is a recipe for an unscalable system.

Instead, what is important, in my not so humble opinion, is that the interfaces between as many sources as possible allow for fluid, low-transaction-cost, accurate engagement across the network, no matter who you are or who they are, moderated by appropriate rights management and identity-based access control. That way, each of us can seamlessly access the datasphere as broadly as we have the right to, as easily as if each data store were our own. Consider how most web browsers can access (nearly) all web pages. That ubiquitous access to different data fuels Wikipedia’s editorial preference for citing accessible web pages whenever making claims. That’s a profoundly simple and powerful model for engaging the world’s diverse data and communications needs. We just need to upgrade to sharable semantic interfaces and proper access mechanisms. Brickley’s comment on claims versus facts highlights a critical system requirement: the acceptance of ambiguity.

Clearly this is the kind of thinking that fuels much of my interest in VRM. Vendor Relationship Management still requires much gestation and care before it can truly be judged as a widely useful effort. But in this crazy world where each data silo has divergent data and every vendor wants to own it all, what VRM does is redefine the working context so that we can focus on what each individual actually knows and needs, which, at least for that individual, that customer, that “monetizable opportunity,” is quite likely to be “right.” And since the dataset closest to the individual is the one most likely to be “right,” it can create value both for whoever can respond to those needs and for the person whose needs get addressed. We are working on the interface between these distributed systems, on the protocols that make networked, semi-automated vendor-customer relationships work, not on any presumptions of fact or a globally rigorous index or model of all the world’s information.

Hence the incredible resonance of Dan Brickley’s observation about the relative value of “claims” versus “facts”. We can’t really know if a fact is true, generally, but we can convince ourselves that a given person or company or entity has asserted a claim. And by connecting the claim to a particular person or company, anyone relying on that claim can decide on their own whether or not to trust that entity or keep checking the facts. For most of us, most of the time, a handle for consistent claims is enough to weave together a shared set of expectations and understandings, which we can use in the face of a philosophically intractable inability to discern the “objective” truth.
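To make that distinction concrete, here is a minimal sketch, entirely my own, of the difference between recording a bare fact and recording an attributed claim. The field names are invented for illustration; nothing here reflects how Freebase or any existing system actually models its data.

```python
from dataclasses import dataclass
from datetime import date

# A bare "fact" asserts how the world is, with no provenance:
#   ("John", "worked_for", "Microsoft")

@dataclass
class Claim:
    """A statement attributed to whoever asserted it."""
    asserter: str      # who is making the claim
    subject: str
    predicate: str
    obj: str
    asserted_on: date  # when the claim was made

# The same information, recorded as claims rather than facts:
claims = [
    Claim("John", "John", "worked_for", "Microsoft 2004-2006", date(2008, 1, 15)),
    Claim("Stanford University", "Sally", "awarded_phd", "2008", date(2008, 6, 20)),
]

# A consumer of the data decides for itself which asserters to trust.
trusted = {"Stanford University"}
for c in claims:
    status = "(trusted)" if c.asserter in trusted else "(unverified)"
    print(c.subject, c.predicate, c.obj, status)
```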

Some of this is, of course, old hat to folks coming from the Identity world, where they already speak of “claims” and “assertions” rather than facts. And as such, VRM gladly claims that heritage and common sensibility. If you think about it, it makes sense in a vendor relationship. Who really cares what the “factual” price of an item is when you can find a credible vendor willing to offer that same item at a better price? That’s all about claims at the interface between the buyer and seller, and all about how we, as individuals, relate to vendors.

The upshot: systems that represent claims of fact made by specific entities will be more robust and more useful than systems that simply represent facts. And that’s something you can design on.

Posted in Identity, Personal Data Store, ProjectVRM, Vendor Relationship Management | Comments Off on A world of claims, not facts

Netflix going VRM

Blogging from GnomeDex, Dave Winer says Netflix is looking to offer VRM-style data portability:

I had some interesting hallway talks, but none more interesting than the one with Kevin McEntee of Netflix about providing a way for users to take their movie ratings from Netflix to other services. This could turn Netflix into the hub for movie ratings (the first place that exports becomes the default UI), and could enable all kinds of interesting combos, such as checking a box on Match.com to be introduced to dates who like the same kinds of movies.

Turning Netflix into a hub for movie ratings doesn’t sound like much of an improvement to me, but creating a way for any authorized service to access all of my movie ratings is music to my ears.

Although Personal Data Stores are “owned” by the individual, there is no reason they can’t be implemented in a completely distributed way. I imagine we’ll have a VRM world where every individual has numerous Personal Data Store services providing identity-based access to their personal data, across Vendors.

XRI and XDI enable this sort of service discovery, although I’m only just beginning to get a glimpse of how it works. I believe the Netflix use case can be addressed through service discovery provided by the user’s Identity Provider (which need not be Netflix). So, for Netflix, the win would be to become my “movie ratings data store” service. Seems reasonable to me, as long as I can actually control how that data is propagated and used by Netflix and others.

In the near term, I expect Netflix to implement their own semi-open data silo, retaining both data ownership and control over identity. Not because they don’t get it, but because it will be the easiest and fastest way to offer an API for users to use Netflix as their movie ratings platform. But will Amazon and Blockbuster want to play in Netflix’s data store? Hard to say.

However, once the XDI/XRI protocols are in widespread use, the “third party” architecture makes it a straightforward proposition for any movie provider (or any service, for that matter) to access the user’s data store. Standard protocols and access rights will insulate vendors from the vagaries of independent providers, making it possible for them to trust data outside their own silos.

Consider this scenario, which starts with the assumption that the user has a suitable Identity Provider (IDP) to resolve service discovery requests and authentication for their i-name:

First, creating the data store.

  1. User signs up at a movie-ratings data store, registering his or her i-name. For this scenario, let’s use Netflix as the data store service.
  2. User confirms/registers Netflix as their movie-ratings data store service with their IDP.
  3. (Optional) User uploads or inputs initial ratings into the data store. As a data store service, Netflix would start with the ratings already stored in their system.

Second, accessing that data store.

  1. User registers i-name with movie provider service, such as Amazon or Blockbuster (let’s pick Blockbuster for this example). Eventually, this will be an integral part of registration for most web services, replacing usernames and email addresses.
  2. Using the IDP responsible for that i-name, the user authorizes Blockbuster to access his or her movie ratings data store, specifying whatever access rights are appropriate. Again, this will eventually be a standard part of registration, where users authorize access privileges to their Personal Data Stores.
  3. Blockbuster queries the IDP for the movie ratings data store, confirms access rights terms, and is directed to Netflix. (Note that the ordering of 2 and 3 is implementation dependent; the authorization could be triggered by Blockbuster’s query.)
  4. Blockbuster queries Netflix for movie ratings using the VRM standard protocol for movie ratings data sharing.
  5. Netflix authenticates Blockbuster via IDP, verifying that the user authorized access to the data store.
  6. Netflix opens a communication channel to Blockbuster for appropriate read/write access to the movie ratings database, based on IDP authentication.
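To make the sequence easier to follow, here is a toy sketch of that flow in code. None of these classes, methods, or names correspond to a real XRI/XDI interface or to any actual Netflix or Blockbuster API; they exist only to make the handshake explicit.

```python
# Hypothetical sketch of the scenario above; every name here is invented.

class IdentityProvider:
    def __init__(self):
        self.services = {}   # i-name -> {service type: provider}
        self.grants = set()  # (i-name, service type, relying party)

    def register_service(self, i_name, service_type, provider):
        self.services.setdefault(i_name, {})[service_type] = provider

    def authorize(self, i_name, service_type, relying_party):
        self.grants.add((i_name, service_type, relying_party))

    def discover(self, i_name, service_type, relying_party):
        # Only authorized relying parties learn where the data store lives.
        if (i_name, service_type, relying_party) not in self.grants:
            raise PermissionError("user has not authorized this relying party")
        return self.services[i_name][service_type]

class RatingsStore:
    def __init__(self, idp):
        self.idp = idp
        self.ratings = {}    # i-name -> {title: stars}

    def read(self, i_name, relying_party):
        # Step 5: the store re-checks with the IDP that access was granted.
        self.idp.discover(i_name, "movie-ratings", relying_party)
        return dict(self.ratings.get(i_name, {}))

idp = IdentityProvider()
netflix = RatingsStore(idp)
netflix.ratings["=joe"] = {"Brazil": 5, "Gigli": 1}

idp.register_service("=joe", "movie-ratings", netflix)        # steps 1-2: store registered
idp.authorize("=joe", "movie-ratings", "blockbuster")         # step 2: user grants access
store = idp.discover("=joe", "movie-ratings", "blockbuster")  # step 3: discovery
print(store.read("=joe", "blockbuster"))                      # steps 4-6: authenticated read
```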

The point with this architecture is that individuals can use any data store provider, any identity provider, and any service provider. Today, all three of these functions are bundled into monolithic proprietary services. You log into Netflix with your Netflix ID, they keep track of your ratings, and only they can provide recommendations or services based on those ratings.

Most limiting for Netflix, they can only “see” the ratings you enter on their system, with no way to know what you have at home or have entered at Amazon or Blockbuster. With reciprocity-based access rights, we should be able to get all of our service providers to both store and access our data from a shared Personal Data store, seamlessly automating the integration of data across multiple vendors.
And for the first time, services can be built that integrate data outside the “specialty” of the offering service, such as Dave suggested with Match.com using movie ratings for romantic matching. For users, that’s more useful, easier, and delightfully empowering…

Clearly, Netflix sees the benefit of opening up the silo. Here’s hoping they will join the VRM movement and go all the way to full VRM Personal Data Store interoperability with other vendors.

Posted in Personal Data Store, ProjectVRM, Vendor Relationship Management | Comments Off on Netflix going VRM

Google and Microsoft and Personal Health Records

The NYT reports interesting things afoot with major players Google and Microsoft pushing their own visions of the future of health care:

If the efforts of the two big companies gain momentum over time, that promises to accelerate a shift in power to consumers in health care, just as Internet technology has done in other industries.

Today, about 20 percent of the nation’s patient population have computerized records — rather than paper ones — and the Bush administration has pushed the health care industry to speed up the switch to electronic formats. But these records still tend to be controlled by doctors, hospitals or insurers. A patient moves to another state, for example, but the record usually stays.

The Google and Microsoft initiatives would give much more control to individuals, a trend many health experts see as inevitable. “Patients will ultimately be the stewards of their own information,” said John D. Halamka, a doctor and the chief information officer of the Harvard Medical School.

Indeed. Users are often (always?) the best stewards of their own data, no matter the use case or application. There is so much inherent value in managing user data from the user’s perspective that this architectural shift is not only useful, it is inevitable.

Vendor Relationship Management is creating tools for individuals to manage their relationships with Vendors. In health care, that means giving people control over their Personal Health Records and over which Vendors get access, and how. Think Personal Data Stores as applied to health records.

The article continues:

It is common these days, Dr. Halamka said, for a patient to come in carrying a pile of Web page printouts. “The doctor is becoming a knowledge navigator,” he said. “In the future, health care will be a much more collaborative process between patients and doctors.”

Microsoft and Google are hoping this will lead people to seek more control over their own health records, using tools the companies will provide. Neither company will discuss their plans in detail. But Microsoft’s consumer-oriented effort is scheduled to be announced this fall, while Google’s has been delayed and will probably not be introduced until next year, according to people who have been briefed on the companies’ plans.

A prototype of Google Health, which the company has shown to health professionals and advisers, makes the consumer focus clear. The welcome page reads, “At Google, we feel patients should be in charge of their health information, and they should be able to grant their health care providers, family members, or whomever they choose, access to this information. Google Health was developed to meet this need.”

Just as markets are conversations, so is health care. It’ll be interesting to see what the major players do with this opportunity. It is not only one of the most complex and challenging VRM scenarios–with lots of regulatory and technical challenges–it is also one of the most promising.

Posted in Personal Data Store, ProjectVRM, Vendor Relationship Management | Comments Off on Google and Microsoft and Personal Health Records

VRM Banking

Kudos to Wesabe for launching a VRM approach to banking as described by Chief Product Officer Marc Hedlund:

Last week, my company, Wesabe (which makes a personal finance community site), launched a REST API that allows anyone to get their bank or credit card data in XML, Excel, CSV, or a bunch of standard financial formats. Tonight, we launched an open source Firefox extension that allows anyone to automatically extract data from their bank every night, and upload it to Wesabe, regardless of whether their bank provides automatic download or not. Both of these features work for any bank in any country, as long as they support one of our export formats (OFX, QFX, QIF, OFC, Quicken, Money, and a few others coming soon). Data in, data out, free and easy.

There’s a basic Web 2.0 story here, which is simply that opening up APIs and embracing the web as a platform is a great way to empower the people using your service. It’s been amazing to me to see one developer after another approach us about using the API, even in its early form. But, while obviously I’m totally biased, I think there’s a deeper story here, too, and I thought it would be worth calling out some of the things that make the Wesabe API and Firefox extension releases different and interesting.

Read the entire post. It’s worth it. Marc also points to a few other services with similar VRM efforts:

It’s fun for me to see other startups going down a similar path in other industries. For instance, Get Satisfaction seems to be taking a related approach with customer service, another industry with a SuperMax approach to data. Free the data and flip the model, and you can make even the stodgiest industry into a web platform participant — whether they like it or not.

Great progress. And definitely one of the most developed Personal Data Store Services on the market.

Colin Henderson at Bankwatch also has a nice post discussing Wesabe’s new API.

Best of luck with the venture, Wesabe. We hope to see you at a future VRM event.

Posted in Personal Data Store, Vendor Relationship Management | Comments Off on VRM Banking

VRM and Personal Data Stores

In my previous post on VRM‘s Personal Data Stores, I discussed how we can decentralize information services by focusing on the user as the point of integration. Not only would that give the user direct control over their personal data–to the cheers of privacy advocates everywhere–it would provide a more robust, reliable, and scalable approach for important VRM use cases, including personal health care data, media consumption histories (and licenses), personal RFPs, and more.

Three replies sum up the curious or critical responses I’ve had:

  • Matthias Gutfeldt: “But how do we do it?”
  • William Hayes: “Any idea when someone will step forward to provide the user information mgmt service?”
  • Dave Weinberger: “How does VRM (or Joe’s vision of it) differ from federated identity schemes in which the user has control over her personal info? “

I’ll generalize these into two focused questions:

  1. What is different with VRM and Personal Data Stores?
  2. What will it take to implement them?

VRM’s Personal Data Stores (PDSs) are a new inflection of the familiar paradigm shift of decentralization

The answer to the first question lies in the recent advances in user-centric identity and the upcoming access-rights infrastructure built into XRI and XDI.

Limited versions of user-centric data stores have been around for decades. The PC revolution followed the same paradigm shift: put the applications and data on the user side instead of a central mainframe. The Internet itself echoes that user-centric view of the world, especially when you consider businesses as users and online services as vendors. Any architecture that moves data control from a centralized vendor to the decentralized user resonates with the user-centrism of Personal Data Stores.

What decentralized systems don’t necessarily have–and what PDSs add–is structured third-party read access to that data store.

Internet Email as Personal Data Store

POP, IMAP, SMTP–essentially all store-and-forward email architectures–allow independent third parties to input data into a simple user data store. That’s the point. An Internet-based email service that can’t accept email from anyone on the net isn’t really Internet email. However, with email, there isn’t a way for outside third parties, such as vendors, to access that data store. The privacy reasons behind this are self-evident. Most people don’t want neighbors or “vendors” reading their email.

Blogs as Personal Data Store

Blogs, on the other hand, offer both input and output of personal data and move a little bit further along the spectrum towards a Personal Data store. Blogs are primarily output mechanisms; users write posts and those posts are published to the world. Comments provide an input mechanism from the “cloud” of arbitrary Internet users, giving blogs a limited input and output capability for what is essentially a publicly accessible personal data store.

The access rights management on blogs, however, leaves much to be desired–and is far from enabling many core VRM scenarios.

Access Limitations

Most blogs are simply available to the public–or occasionally to a limited “internal” audience by restricting access to the web page. A VRM data store should have extremely fine-grained access privileges, including by “identity class” so that, for example, all legitimate Travel Agencies could access a personal RFP for travel, or certified medical doctors who have registered an emergency medical warrant could access a personal medical history. These sorts of restricted rights mechanisms require not only the emerging user-centric identity technology, they require an institutional infrastructure capable of reliably authenticating “travel agencies” and “certified medical doctors” who have “registered a warrant”. Ultimately, a Personal Data Store must not only store the requisite data, it must provide secure and effective access to the right vendors and individuals, and refuse access to all others.
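A minimal sketch of what “identity class”-based access could look like follows. The class names, credentialing authorities, and records are all invented for illustration; a real system would lean on the credentialing infrastructure described above rather than a hard-coded table.

```python
# Hypothetical sketch: access to a record is granted by identity class, where
# class membership is vouched for by a credentialing authority the user trusts.

TRUSTED_CREDENTIALERS = {
    "travel-agency": {"ASTA"},           # e.g. a travel-agent trade association
    "physician": {"StateMedicalBoard"},
}

records = {
    "travel-rfp-2007-10": {"allowed_classes": {"travel-agency"}, "data": "SBA->BOS, 2 adults"},
    "medical-history":    {"allowed_classes": {"physician"},     "data": "..."},
}

def can_access(record_id, claimed_class, credentialer):
    """Allow access only if the requester's class is permitted for this record
    and the class claim is certified by an authority trusted for that class."""
    record = records[record_id]
    return (claimed_class in record["allowed_classes"]
            and credentialer in TRUSTED_CREDENTIALERS.get(claimed_class, set()))

print(can_access("travel-rfp-2007-10", "travel-agency", "ASTA"))   # True
print(can_access("medical-history", "travel-agency", "ASTA"))      # False
```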

Input Limitations

Second, the ability to “input” into a blog data store exists in the form of comments, but it is limited. Sometimes this privilege is restricted by identity (using TypeKey, OpenID, or an InfoCard, for example), but not always, and access is usually restricted in a simple way: for example, any InfoCard user can post a comment on Kim Cameron’s blog on any post. This is a good start towards identity-based access rights management, but most sites make minimal distinctions between different classes of users and different data sets. Of course, blogs are for blogging, so they don’t need sophisticated access functionality. However, when we treat markets as conversations, VRM needs to enable conversations between users and vendors. That implies that users can, for example:

  • Input data to their Data Store
    • RFPs (requests for proposals)
    • Customer Interaction Data
      • service calls
      • RMA requests
      • bug reports
      • reviews
    • Personal Health Updates
      • Symptom Reports
      • Doctor Visits
      • Medication Log
      • Exercise Log
  • Amend/Revise Data
  • Reply to Vendors (securely, privately)
  • Manage data access
    • publish subsets of the data to specific vendors
    • publish to sets of vendors

It also implies that Vendors can:

  • Access subsets of the data
  • Add new data to the Data Store
    • Prescriptions
    • Proposals in response to RFPs
  • Update subsets of data (securely and with reciprocal privacy)
    • RMAs
    • Customer Service History
    • Revised/Updated Proposals

This also implies that data stored by the vendor should be protected from edits by other vendors and even users (although outright deletion must remain an option). It would be a mess if users could edit responses to RFPs or prescriptions directly. Rather, the integrity of the system requires a mechanism to assure that the data is what the original author intended it to be.
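One way to picture that integrity requirement: each vendor-authored record could carry a digest over its author and content, so edits by anyone other than the author become detectable while deletion stays in the user’s hands. This is only a sketch of the idea; a real system would use proper digital signatures rather than the bare hash shown here, and the names are invented.

```python
import hashlib

# Hypothetical sketch: vendor-authored records carry a digest computed over the
# author and content, so tampering by other parties is detectable.

def seal(author, content):
    digest = hashlib.sha256(f"{author}|{content}".encode()).hexdigest()
    return {"author": author, "content": content, "digest": digest}

def is_intact(record):
    expected = hashlib.sha256(f"{record['author']}|{record['content']}".encode()).hexdigest()
    return record["digest"] == expected

rx = seal("DrSmith", "amoxicillin 500mg, 3x daily, 10 days")
print(is_intact(rx))        # True: the record is as the author wrote it

rx["content"] = "amoxicillin 500mg, 3x daily, 90 days"   # someone else edits it
print(is_intact(rx))        # False: the edit is detectable

# Outright deletion stays under the user's control, e.g. removing the record entirely.
```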

This level of access functionality is essentially non-existent in blogs. Other online markets provide some elements of these features, but none that I know of place complete control in the hands of the user, enabling any vendor (approved by the user) to participate. eBay is a good start; it definitely democratizes the vendor/buyer marketplace. However, you don’t control the types of buyers/vendors who can access your listings, and you still need to use eBay to complete the transaction. It will probably be a challenge for eBay to find ways to make money while putting the transaction context in the Personal Data Store.

However, it is worth noting that CompuServe and AOL faced the same conundrum with the Internet, struggling to learn how to profitably transition from a closed-loop system to an Internet that let anyone access and publish anything. Ultimately separate business models worked quite well with access providers (Earthlink, AT&T, etc.) and service providers (Google, Amazon, etc.). Meanwhile, AOL is still struggling to define its business.

VRM and Personal Data Stores will likely create a similar segmentation of silo & service businesses into specialized vendors of Personal Data Store Services and those focused services that leverage PDSs to deliver new value.

Blogs + ID access = Personal Data Store?

Blogs with Identity-based access privileges start looking like a workable Personal Data Store for some VRM use cases. Consider posting a Personal RFP (request for proposals) to your blog, with appropriate tags (travel, ready-to-buy, hotel, airfare, car), pinging a pingmarket service like Technorati, and receiving offers via comments to that post. If access to the post–and the ability to reply–were seamlessly moderated by a credentialing service (so only authentic “travel agents” could respond), then we start to have a system that could work. Vendors who subscribe to RSS feeds from the pingmarket see sales opportunities right on their desktop, not unlike Shopatron‘s manufacturer-to-retail online distribution service.
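As a toy illustration of the vendor side of that scenario, the sketch below shows a travel agent filtering a pingmarket feed of personal RFPs by tag and by the credential required to respond. The feed entries, field names, and credential labels are all made up; no real Technorati or Shopatron interface is implied.

```python
# Hypothetical sketch of a vendor watching a pingmarket feed of personal RFPs.

feed = [
    {"author": "=joe", "tags": {"travel", "ready-to-buy", "hotel", "airfare"},
     "respond_requires": "travel-agency"},
    {"author": "=sally", "tags": {"ready-to-buy", "laptop"},
     "respond_requires": "retailer"},
]

class TravelAgent:
    credentials = {"travel-agency"}            # certified by some credentialing service
    interests = {"travel", "hotel", "airfare"}

    def matching_rfps(self, entries):
        # Keep only RFPs this vendor cares about and is credentialed to answer.
        for entry in entries:
            if entry["tags"] & self.interests and entry["respond_requires"] in self.credentials:
                yield entry

agent = TravelAgent()
for rfp in agent.matching_rfps(feed):
    print("Respond to", rfp["author"], "about", sorted(rfp["tags"] & agent.interests))
```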

This architecture highlights two additional requirements: first, how can we trust the claims of the user? Second, how can we (automatically) understand the requests (and claims) of the user?

Validating User Claims

VRM relies upon users making claims of various types:

  • intention and interest
    • In the market for a new car
    • Buying a plane ticket
    • Looking for a home
  • affiliations
    • AAA Member
    • Retired military
    • US Citizen
    • Member of the California Bar
    • credit card #
    • employment
  • facts
    • Address
    • Age
    • Gender
    • Income
  • certificates
    • Licensed to drive by California DMV
    • Insured to drive by BBB
    • Security clearance by US Federal Government
    • Credit rating by Equifax

Validated claims let vendors avoid wasting time with unqualified leads, including competitors and window shoppers, as well as individuals who can’t legally purchase the product because they are underage or excluded by export control laws. In addition, the reputational history of the user enables Vendors to focus resources on the most promising buyers. Buyers with no history or with negative history don’t warrant the same VIP treatment that proven, reliable buyers do. We see this on eBay, but have no clear way to leverage our eBay reputation with other vendors. A VRM system would allow reputations of this nature to arise explicitly from multiple reputation vendors and to incorporate our transaction history across multiple marketplaces.

It will be a while before our institutions implement these kinds of authentication services, but it is already happening with the earliest “adopters” (apologies to Geoffrey Moore; in his terminology, these guys are all “hobbyists” even when they are multinational corporations). Sun Microsystems, for example, now validates employee claims so that third-party Vendors can rely on that validation when providing services. With Microsoft’s CardSpace technology built into .Net and Vista and soon Active Directory, plus OpenID and the Higgins open source solutions, the Internet identity infrastructure is somewhere akin to where the World Wide Web was in 1993 or 1994. Which is to say, about to seriously explode into corporate and mainstream consciousness.
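As a rough illustration of what such a validated claim might look like in code, the sketch below has an issuer countersign a user’s claim so a relying vendor can verify it came from the issuer rather than from the user alone. The key handling is deliberately naive and the names are invented; real deployments would use SAML-style signed assertions, InfoCards, or similar.

```python
import hmac, hashlib

# Hypothetical sketch: a user's claim ("employed by Sun") is countersigned by the
# employer, so a relying vendor can check that the claim came from the issuer.
# A toy shared key stands in for real signing infrastructure.

ISSUER_KEYS = {"SunMicrosystems": b"issuer-secret-demo-key"}

def issue_claim(issuer, subject, attribute, value):
    payload = f"{issuer}|{subject}|{attribute}|{value}"
    sig = hmac.new(ISSUER_KEYS[issuer], payload.encode(), hashlib.sha256).hexdigest()
    return {"issuer": issuer, "subject": subject, "attribute": attribute,
            "value": value, "sig": sig}

def verify_claim(claim):
    payload = f"{claim['issuer']}|{claim['subject']}|{claim['attribute']}|{claim['value']}"
    expected = hmac.new(ISSUER_KEYS[claim["issuer"]], payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(claim["sig"], expected)

employment = issue_claim("SunMicrosystems", "=joe", "employment", "current employee")
print(verify_claim(employment))   # True: the vendor can rely on the issuer's validation

employment["value"] = "CEO"       # the user can't quietly upgrade the claim
print(verify_claim(employment))   # False
```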

So, to answer the first question: what is different about VRM’s PDSs is incredibly fine-grained control over both the data and who accesses it. Today’s federation systems actually move in the opposite direction, allowing a wider and wider network of vendors to access personal data with absolutely no control by the user. The result is, frankly, a culture where people hesitate to provide full or accurate information because of fears of what vendors will do with it.

Standardized VRM Data Types and Protocols

For VRM to scale beyond human-moderated interactions–the kind enabled by Shopatron, where retailers personally check the Shopatron website to select orders to “bid” on–we need a solution for automated understanding of user data. No, I don’t mean some mammoth Artificial Intelligence, natural-language-processing, all-knowing, all-dancing automated salesman. What we need is a standardized way for people to make claims such that Vendors can understand them. This means a cross-Vendor open standard for structuring VRM data, including claims, RFPs, personal health records, etc. In some ways, this parallels the work being done by many, many folks building the semantic web. If the original data is presented in a structured, commonly understandable format, then programs can have a reasonable expectation of “understanding” it in a useful way.

Currently, Vendors typically have their own internal data structures and formats. This makes it hard to move data from one system to another. Yet that is exactly the power of the Personal Data Store, serving as the point of integration between multiple vendors, no matter who is sourcing the original data. So, if Amazon, Blockbuster, and Netflix all want read/write access to my PDS to better understand my media consumption history–and provide better recommendations based on that understanding–then all three need to be able to store the data in a mutually understandable way.

This is a huge problem. We are essentially talking about reversing the damage done at the Tower of Babel, of integrating a formal representation of all possible data.

HUGE.

PROBLEM.

Except if we look at it a bit differently. Taken at 30,000 feet, VRM’s PDSs seem to offer a secure, universally accessible, and universally understandable read-write data store. That sounds great. It also sounds like an insurmountable problem. However, by breaking the data types down into cohesive use cases–at 1,000 feet–we can start to package the PDS in a way that is implementable, scales with use, and provides high-quality, understandable data to individuals and Vendors.

First, think of the PDS as a fungible store of any kind of data. Build smart read/write access to that data using user-centric identity systems with third-party credentialing for “identity class”-based usage. VRM is about vendor-customer relationship data, but once the infrastructure is in place, truly any structured data makes sense. (Unstructured data just acts like another FTP or web repository.)

Second, take real-world integration problems, solve them with relatively small, focused data formats, and get Vendors to support those formats. For example, a standard media-history record that any vendor can read and write into our PDS. Or a standardized RFP format, potentially with an extensible RFP type so that custom, structured information can be embedded in airfare RFPs, retail-goods RFPs, or service RFPs. By tackling real-world problems and working with a handful of real-world vendors, shared data formats that provide immediate value can be developed in a realistic timeframe. By solving each of these in an open-standard, open-community fashion, a library of VRM data formats will start to emerge hand-in-hand with the VRM protocols that manage the creation, distribution, and consumption of that data.
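To give one possible flavor of such a focused format, here is a sketch of what a shared media-history record could look like. The schema, version string, and field names are invented for illustration; no such VRM standard exists yet, and the point is only the narrowness and shareability of the format.

```python
import json

# Hypothetical sketch of a small, focused media-history record that Amazon,
# Blockbuster, and Netflix could all read and write. The schema is made up.

media_history_entry = {
    "format": "vrm-media-history/0.1",   # version the schema so it can evolve
    "subject": "=joe",                    # the user's i-name
    "work": {"title": "Brazil", "year": 1985, "medium": "dvd"},
    "event": "rented",
    "rating": 5,                          # 1-5, optional
    "source": "netflix.com",              # which vendor wrote the entry
    "date": "2007-08-14",
}

print(json.dumps(media_history_entry, indent=2))
```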

You might think of it like MIME for vendor-consumer interactions. MIME, the Multipurpose Internet Mail Extensions, was designed to allow email attachments of files like Word documents and images. It also allows web servers to specify the type of a file being downloaded by the browser. In both of these cases, the underlying access protocols of SMTP/POP and HTTP don’t need to know anything about what is in the MIME attachment. Instead, applications use the MIME type to do the right thing once the data arrives. In the same way, a PDS should provide an identity-based, fungible data store where rich data formats of different types can be intelligently stored, accessed, and managed.
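Pushing the MIME analogy one step further, here is a toy sketch of a data store that stays ignorant of content and simply files payloads under a declared type, letting each application ask only for the types it understands. The type strings and the API are invented; they stand in for whatever a real PDS interface would provide.

```python
# Hypothetical sketch: the store treats payloads as opaque, just as SMTP and HTTP
# carry MIME attachments they don't understand; applications dispatch on the type.

class PersonalDataStore:
    def __init__(self):
        self.items = []

    def put(self, media_type, payload):
        self.items.append({"type": media_type, "payload": payload})

    def get(self, media_type):
        return [i["payload"] for i in self.items if i["type"] == media_type]

pds = PersonalDataStore()
pds.put("application/x-vrm-media-history+json", '{"work": "Brazil", "rating": 5}')
pds.put("application/x-vrm-rfp+json", '{"tags": ["travel", "hotel"]}')

# A movie service only asks for the types it understands:
print(pds.get("application/x-vrm-media-history+json"))
```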

The result is a system that scales by adding new open standard data-types to the open data store, just like email and the World Wide Web scaled to support images, Flash, audio, and movies.

Access Rights and Responsibilities

Now that we have a technological infrastructure–and conceptually an institutional infrastructure for validating Identities and claims–we are still missing perhaps the most critical piece of the puzzle: the legal infrastructure.

In today’s internetworked world, there is astonishingly little control over data other than denying access. If a Vendor knows who I am because they bought my name and information from some mailing list company, they can–and do–bombard me with junk mail. They share it with other divisions or sell it to third parties. Some Vendors do have reasonable privacy policies, and I would be remiss not to give a tip of the hat to eTrust, which has done much to advocate in this area.

However, with the Personal Data Store we are talking about a massive restructuring of the scale and type of information that will be made available to vendors, and making it available at incredibly low marginal cost. Not only will Vendors need a viable system for appropriate use of that data, users will need to be assured that the data they put in their PDSs is protected in a rigorous way, minimizing exposure to spam, unwanted solicitations, fraud, stalking, and identity theft. Having personal data–such as your address–in your PDS must feel, and be, at least as secure as entering that same information at the culmination of an online purchase.

Interfaces and Phishing

There are two parts to this problem. The first is the user interface through which a user securely manages their private or semi-private personal data. Largely this has to do with minimizing phishing attacks while assuring users that they are dealing with the correct vendors. Kim Cameron often discusses this topic, and it remains one of the biggest security risks for user-centric Identity systems. However, VRM and the PDS don’t address this problem, nor do I see them ever doing so. As with the rest of the user-centric Identity movement, VRM will build upon the work of others.

Access Rights Management

The second problem is controlling what happens to the data after it leaves the PDS. Or, to put it another way, providing restricted-use licenses to Vendors who access your data.

I’ve consistently attacked the language of the AttentionTrust and others when they discuss users’ rights in regard to our “attention data”. Many people assert that we, as individuals, own the Attention Data sprinkled around the Internet as we “spend” our attention at various places. I have yet to receive a satisfactory answer to my queries about what it means to “own” that attention data, as it seems ludicrous to me to assert ownership over things like website access logs at YouTube or our transaction history at Amazon. Clearly, the Vendors own that data at least as much as we do.

However, when we store data in our Personal Data Store, we do own it. What the AttentionTrust and APML get right is that by collating our Attention Data in a data store on our computers–or on computers under our control–we are creating a data resource that we do in fact own and control. It doesn’t make sense to then give up that control just to get a better ad from Amazon, does it?

It might. If Amazon were to legally commit to using that data only for presenting that ad. In general, it isn’t usually the immediate use of personal data that we find annoying. What annoys us is the indiscriminate use, propagation, or application of that data out of context and for unexpected uses. I don’t mind telling the bartender what beer I’d like–otherwise she’d have a hard time serving me–but it would be annoying if that choice were broadcast over a loudspeaker (“Joe Andrieu orders a Guinness”) and posted to the bar’s blog the next morning. Rules of etiquette reinforce these sorts of expectations in real-world society. We call it “discretion”. But until there is something formally restricting Vendors who access a Personal Data Store, we can expect them to use all information as widely and as creatively as they can profitably do so. The consequence is that many users will refuse to expose authentic data, undermining the whole system.

At the same time, we can’t expect every vendor to read, evaluate, and agree to a custom twenty-page licensing agreement for each Personal Data Store they want to access. Instead, what we need is a handful of simple, standard access rights contracts or terms that can legally bind Vendors who access our PDSs. Fortunately, XRI and XDI have this sort of access rights architecture built in. However, the actual rights contracts that would use those access protocols remain to be written.

Here are a few rights that users might want to be able to secure for their data, as well as some privileges they could provide to vendors:

  1. Reciprocity–That vendors who access a particular type of data also agree to reciprocally provide updates to that data. For example, I might let Amazon access my media history records if they agree to update it with my past and future media purchases at Amazon.
  2. Non-propagation–No further distribution of the data beyond the specific services authorized. No reselling to third-parties. No re-use by other divisions.
  3. Non-persistence–No retention of the data beyond the session of the current transaction. For example, an emergency room physician can access my personal medical history while I’m under his or her care, but he or she can’t store that data on any internal systems.
  4. Anonymous Persistence–Data can be retained, but only if it is suitably anonymized and disassociated from the individual user.
  5. Editable Persistence–Data may be retained by the vendor, but it must be editable and deletable by the user.
  6. Anonymized Analytic Rights–Vendor has the right to query the PDS at a later point for business or operational analysis, as long as that analysis ensures anonymity after the fact.*

*One of the main reasons companies retain detailed customer data is to analyze it for business improvement. Perhaps the product is doing particularly well in certain areas or with particular demographics. Perhaps certain customers are having a particularly hard time with certain product features. I expect that many of the largest Vendors will be unable to support non-persistence or anonymous persistence unless they are allowed some way to incorporate rich user data at analysis time. One benefit of the PDS as a source for Vendor analysis is that, if performed with non-propagation and non-persistence, it can provide secure, private access to a much broader pool of customer data than customers are willing to give and Vendors are able to capture. By shifting this data mining from Vendor silos to Personal Data Stores, not only would Vendors get richer, more timely, and more accurate information, individuals would gain explicit control over the use of personal data that is currently entirely under Vendor control in private silos.
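Here is a sketch of how terms like these might travel with a grant in machine-readable form, so a vendor’s systems can check up front whether they can honor what the user requires. The vocabulary simply mirrors the list above; it is not any real XDI link-contract syntax, and the legal agreement itself would of course live outside the code.

```python
# Hypothetical sketch: an access grant carries the standard terms the vendor is
# bound to when it reads the data store. The term names mirror the list above.

grant_to_amazon = {
    "grantee": "amazon.com",
    "data_type": "media-history",
    "access": ["read", "write"],
    "terms": {
        "reciprocity": True,           # Amazon must write its purchase data back
        "non_propagation": True,       # no resale or sharing with other divisions
        "persistence": "editable",     # Amazon may retain, but user can edit/delete
        "anonymized_analytics": True,  # aggregate analysis allowed if anonymized
    },
}

def vendor_accepts(grant, vendor_policy):
    """A vendor only gets access if it can commit to every required term."""
    return all(vendor_policy.get(term) == required
               for term, required in grant["terms"].items())

amazon_policy = {"reciprocity": True, "non_propagation": True,
                 "persistence": "editable", "anonymized_analytics": True}
print(vendor_accepts(grant_to_amazon, amazon_policy))   # True
```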

We are already seeing quite a bit of activity by the large search Vendors to modify their own data retention policies to be more user friendly:

http://www.mercurynews.com/business/ci_6449050?nclick_check=1


What makes VRM and Personal Data Stores different?

In summary, VRM is applying user-centrism to vendor/customer relationships in a way not possible (or not worth the effort) before user-centric identity platforms emerged:

  1. Fine-grained user control
    • the data
    • who can access the data (and how)
    • access rights and responsibilities for those who do access the data
  2. Third-party validation of claims
  3. Standardized meta-data schema for sharing data with multiple vendors

What will it take to implement VRM and Personal Data Stores?

So, knowing what is different with VRM makes it pretty clear what we’ll need to achieve it:

  1. A service and/or software infrastructure capable of offering users control over fungible, identity-accessed data stores. On the open source software side, look to OpenID, the Higgins Project, and Sun Microsystems (among many others). I’m not yet aware of anyone offering PDS “hosting” services, but I believe we should see PDS-capable products within a year or two.
  2. Institutional infrastructure of organizations providing claim verification services using the user-centric Identity Provider (IDP) architecture. The Department of Motor Vehicles, AAA, Equifax, eBay, National Association of Realtors. All are capable of providing added value for their licensed drivers, members, consumers, users, and Realtors, respectively, by being an IDP and enabling Relying Parties (RPs) to provide new services or capabilities based on validated user claims.
  3. Legally binding and tractable access rights agreements. Before users are comfortable sharing personal data automagically, we’ll need to wrap clear and enforceable rights agreements around our data stores.
  4. Open standards and protocols for the exchange of use-case specific data. Within communities of interest, we need to forge common schemas so Vendors can easily work with data provided by others. There isn’t really a general solution to this problem; however, with focused “vertical” approaches, we can define how we share specific personal information across the Internet.

Closing

This is a long post. If you’ve made it this far, thanks for staying with me. =) VRM is an incredibly rich vein for innovation, and I hope I have succeeded in exploring it a bit with you. As a focal point for new services, software, and initiatives, VRM provides both a clear moral framework–it’s about empowering the user in their engagement with vendors–and clear bounds on our scope: reinvent vendor-customer relationships.

There is a lot of fertile ground for VRM… and a lot of ways that the same underlying technology can be used and applied to use cases outside the vendor-relationship context. Those applications will emerge and be delightful surprises as Doc Searls and the VRM crew rally and clarify the vision we see for a VRM future–and create the technology to bring VRM to market.

Hopefully this article has given some depth and concreteness to a few VRM concepts that have come up in recent discussions, especially the Personal Data Store. The entire movement is still early in its gestation. No doubt the ideas presented here will evolve into something different from what we started with, and I encourage and invite you to join us in evolving VRM. This is an open community effort and we welcome your input.

Posted in Personal Data Store, ProjectVRM, Vendor Relationship Management | 15 Comments

Two explosions hit San Francisco on Market St near Van Ness

While in a conference call just now, my colleague was summarily ushered out of his building in San Francisco as two small explosions caused white smoke to billow out from manholes on Market Street near Van Ness.  Safety crews are on the scene and it seems to be clearing up.

Thought you might want to know.

Posted in Uncategorized | Comments Off on Two explosions hit San Francisco on Market St near Van Ness

VRM: The user as point of integration

On yesterday’s Project VRM conference call, a piece of the Vendor Relationship Management puzzle snapped into alignment in a flash of insight.

It wasn’t something new to the movement; rather, it was a realization about the primacy and criticality of what we are doing and how to communicate it. It has always been a part of the conversation, just one that we often took a bit of time to get around to. And yet, it is perhaps the most important piece of all:

When we put the user at the center, and make them the point of integration, the entire system becomes simpler, more robust, more scalable, and more useful.

This is a profound shift that has some interesting parallels with a concept in AI called “stigmergy” and, with a bit of classic Einstein, becomes a totally new way to think about next-generation systems design. In other words, VRM changes the landscape in a way that not only makes life better for individuals, it profoundly improves the information architecture that modern society depends on.

If you’ll indulge me, I’ll try to explain.

User Centrism

VRM has its roots in the user-centric Identity movement and has user-centrism at the core of its DNA. The first, and perhaps most obvious, interpretation of user-centrism is user control. That, unfortunately, is part of the problem.

User Control

User control is critically important. It resonates with the core of the modern social contract. Freedom. Liberty. Capitalism. The Age of Reason. Liberalism. These systems and ideologies all assume that the individual, and only the individual, has legitimate moral authority over his or her life, assets, and the disposition of both. These are powerful concepts. So powerful that when you build systems that provide individual control, you energize vast personal resources that in turn become real economic power, measured in trillions of dollars if you consider US GDP alone. Contrast this with fascism, communism, and socialism, which place the state above the individual and in varying forms take control away. There’s some powerful mojo supporting the whole capitalistic freedom-loving democracy thing.

Regulated Freedoms

Of course, unfettered freedom isn’t the ultimate answer. We’ve learned how unregulated markets fail in various ways, often recreating abuses of power that eventually lead back to a loss of individual control. Think Standard Oil. Southern Pacific Railroad. AT&T. All monopolies that abused their power. And they all owe their break-up to the Progressive political movement which itself was an exercise in user control that started in the early 1900s and arguably ended in the 1990s when Clinton “reformed” welfare.

The Political Siren Call

The problem with user control is that it is so powerful as a political concept. “Putting users back in control” is a seductive rallying cry. In fact, it echoes John Edwards’s current populist campaign against poverty as well as Marx’s call to “workers of the world, unite!” The echoes also show up in user-centric efforts like the Attention Trust, on which I’ve commented before. Much of the Attention Trust work is important, powerful work. But I still don’t know what they mean when they say that people own their attention data. Does that mean we somehow have the right to “nationalize” private data silos in the name of the people? Without debating the politics of this question–there are good points on both sides–it is clear that this line of thinking about VRM, user control, and user rights is deeply political and therefore controversial.

The Conflict of Control

It is also challenging to the graceful and speedy realization of our goals. Starting the conversation by asserting user control implies a loss of control somewhere else. Usually that means conflict, as few entities have ever given up control without a fight. So, thinking and talking about putting users in control resonates with users. But it scares the crap out of vendors. Too bad, some people say. Even yesterday, on that same VRM conference call: “We’ll punish them and get them to be more open and more transparent.” We may be able to do just that, but it is unlikely to be the easiest route to realize our goals.

Especially not if there is a way to reframe the conversation, a way to redefine the important matters, such that the debate is not about user control, but rather about the inherent efficiencies and power of user-centrism. If we can do that, then user control gets built into the system automatically and those who would be giving up some of the control they have today–in their precious vendor data silos–do so not out of punishment, but out of honest, natural desires to improve their bottom line. Along that route lies the embrace of vendors and, I believe, more fruitful relationships for everyone.

User Centrism as System Architecture

Doc Searls shared a story about his experience getting medical care while at Harvard recently. As a fellow at the Berkman center, he just gave them his Harvard ID card and was immediately ushered into a doctor’s office–minimal paperwork, maximal service. They even called him a cab to go to Mass General and gave him a voucher for the ride. At the hospital, they needed a bit more paperwork, but as everything was in order, they immediately fixed him up. It was excellent service.

But what Doc noticed was that at every point where some sort of paperwork was done, there were errors. His name was spelled wrong. They got the wrong birthdate. Wrong employer. Something. As he shuffled from Berkman to the clinic to the cabbie to the hospital to the pharmacy, a paper (and digital) trail followed him through archaic legacy systems, with errors accumulating as he went. What became immediately clear to Doc was that for the files at the clinic, the voucher, the systems at the hospital–for all of these systems–he was the natural point of data integration… he was the only component guaranteed to contact each of these service providers. And yet, his physical person was essentially incidental to the entire data trail being created on his behalf.

User as Point of Integration

But what if those systems were replaced with a VRM approach? What if, instead of individual, isolated IT departments and infrastructure, Doc, the user, was the integrating agent in the system? That would not only assure that Doc had control over the propagation of his medical history, it would assure all of the service providers in the loop that, in fact, they had access to all of Doc’s medical history. All of his medications. All of his allergies. All of his past surgeries or treatments. His (potentially apocryphal) visits to new age homeopathic healers. His chiropractic treatments. His crazy new diet. All of these things could affect the judgment of the medical professionals charged with his care. And yet, trying to integrate all of those systems from the top down is not only a nightmare, it is a nightmare that apparently continues to fail despite massive federal efforts to re-invent medical care.

(See The Emergence of National Electronic Health Record Architectures in the United States and Australia: Models, Costs, and Questions and Difficulties Implementing an Electronic Medical Record for Diverse Healthcare Service Providers for excellent reviews of what is going on in this area, both pro and con.)

Profoundly Different

Doc’s insight–and that of user-centric systems–isn’t new. What’s new is the possibility to utilize the user-centric Identity meta-system to securely and efficiently provide seamless access to user-managed data stores. With that critical piece coming into place, we have the opportunity to completely re-think what it means to build out our IT infrastructure.

What clicked on the conference call was first, that this approach actually has some intriguing resonance with a field of AI called “swarm intelligence” and the concept of stigmergy. And second, as a result, the user as the point of integration has the potential to be profoundly different and profoundly more efficient than current practices.

Swarm Intelligence and Stigmergy

Swarm Intelligence looks to the world of insects as inspiration for building AI systems that are collectively smart but built from individually dumb, active components. For example, how do wasps build nests? Or how do ants find paths to food? It turns out that a lot of these insect behaviors have common properties that can be used to build computer algorithms. One concept that is particularly useful is “stigmergy”, which means marking the environment as communal signaling in a larger, emergent algorithm.

Ants, for example, mark their trails with pheromones. As other ants explore for food, they sometimes follow existing trails, other times not. As more and more ants find success along one particular trail, it gets reinforced, and even improved as some ants’ explorations discover a slightly better route. This natural feedback loop uses the environment in a simple way to allow a bunch of ants to find food in an incredibly efficient way. The last time I looked into it, the Ant Algorithm was in fact the best known algorithm for a particular version of the “Traveling Salesman” problem. Amazing. All without any “active” part of the algorithm actually knowing or thinking about the entire area being mapped–which is what other mapping algorithms basically do.
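For the curious, here is a toy version of that pheromone feedback loop, stripped down to two fixed trails. Real ant colony optimization involves full tour construction, heuristics, and tuned evaporation schedules; this sketch only shows how reinforcement proportional to trail quality lets the better route win.

```python
import random

# Toy sketch of the stigmergic feedback loop: ants pick a trail in proportion to
# the pheromone already on it; each traversal deposits pheromone inversely
# proportional to trail length, so the shorter route is reinforced faster.

pheromone = {"short_trail": 1.0, "long_trail": 1.0}
lengths = {"short_trail": 5, "long_trail": 9}

random.seed(1)
for _ in range(2000):
    trails = list(pheromone)
    trail = random.choices(trails, weights=[pheromone[t] for t in trails])[0]
    pheromone[trail] += 1.0 / lengths[trail]   # better (shorter) trails get more reinforcement
    for t in trails:
        pheromone[t] *= 0.999                  # slow evaporation keeps some exploration alive

print({t: round(p, 1) for t, p in pheromone.items()})   # short_trail ends up far stronger
```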

For an excellent discussion of Swarm Intelligence see Eric Bonabeau’s Swarm Intelligence: From Natural to Artificial Systems.

Einstein, Ants, and User Centrism

So what the heck do a bunch of ants have to do with VRM? With a bit of a solipsistic twist and topological imagination, quite a bit.

Albert Einstein helped the world understand that all velocity is relative: my running at 15 mph towards a stationary car is the same as the car traveling at 15 mph towards me. The important thing is the relationship between the parties, not which one is standing still.

Now apply that sense of relativity to “stigmergy” and invert the ant and the environment. (And don’t hurt your brain!)

Instead of thinking of humans as the active element, think of humans as the environment and Vendors as the ants. Instead of humans visiting a bunch of isolated data silos, invert it so that vendors are visiting stationary users–or their stationary data stores.

Now, instead of a bunch of individuals running around leaving a disparate data trail which is hard to keep track of, the individual represents the digital environment where data is stored by vendors. When the next vendor comes along, the data is there, available for use, without the need for complex integration, processing, or systems maintenance, just as the environment is there for the next ant to come along, allowing that ant to do what it does without a complicated brain or a sophisticated map of the territory.
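As a toy illustration of that inversion, imagine the user’s data store as the environment that vendors visit and mark. The PersonalDataStore class below, and all of its field names, are hypothetical; it simply shows how one provider can pick up where another left off without any vendor-to-vendor integration.

```python
from datetime import date

class PersonalDataStore:
    """Toy model of a user-held record that vendors visit, rather than the
    user leaving fragments behind in each vendor's silo. All names here are
    invented for illustration."""

    def __init__(self, owner):
        self.owner = owner
        self.records = []   # one trail of data, held by the user

    def append(self, vendor, kind, detail):
        # The vendor "marks the environment": it writes into the user's
        # store instead of its own isolated database.
        self.records.append(
            {"vendor": vendor, "kind": kind, "detail": detail, "on": date.today()}
        )

    def query(self, kind):
        # The next vendor to come along reads what earlier vendors left,
        # with no point-to-point integration between the vendors.
        return [r for r in self.records if r["kind"] == kind]


doc = PersonalDataStore("Doc")
doc.append("Family Clinic", "medication", "blood thinner, 5mg daily")
doc.append("Allergy Center", "allergy", "penicillin")

# A new provider needs no interface to the clinic or the allergy center;
# it only needs to know which data store belongs to this patient.
print(doc.query("medication"))
print(doc.query("allergy"))
```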

It doesn’t matter that Doc was physically moving around in his example. From Doc’s perspective, he was always right there. “No matter where I go… there I am.” This is more than just a solipsistic view of the universe, it is perhaps the most critical insight of the VRM user-centric gestalt. When you put the user at the center, it makes it trivially easy to manage and integrate the entire digital experience of the user. Because it is all right there, all the time.

It is hard for me to judge if that makes any sense to the average person, but when it clicked in my brain yesterday, it was like a mega-watt flash bulb going off. This is a profoundly different way to think about systems architecture. Just like the ant algorithm, it shifts the problem from one of a complicated system that has to know and integrate everything, to one where all the vendor needs to know is which data store goes with which user. The rest follows.

Sure, there is still a lot of work to be done. We have to figure out the protocols and technologies for what data vendors actually share in that data-store and how we ensure reliable, always-on access in a secure and privacy-protected manner. Fortunately, as I mentioned earlier, the user-centric Identity meta-system is addressing a huge portion of that. In short, we are building on the shoulders of giants, who stand on the mountains of Moore and Gates and Postel and Berners-Lee and Andreessen. Sounds like fun to me.

Posted in Personal Data Store, ProjectVRM, Vendor Relationship Management | 18 Comments

New YouTube surfing interface = crack

I thought YouTube was enticing before. But their new super fast, groovy surfing bar takes it to a whole other level…

Check it out in this PSA about posting stuff online:

Talk about sucking my attention away…

Posted in Uncategorized | Comments Off on New YouTube surfing interface = crack

Photosynth redefines “visualizing” the web

Photosynth from Microsoft appears to be a true game changer.

Assuming they can deal with scaling at a Google or YouTube pace (and it looks like they probably can), their approach to multi-resolution experiences and automated image linking, extraction, and assimilation truly changes what images mean on the web.

You might think of what they are doing as creating the visual Semantic Web.

Powerful stuff.

Many thanks to ScuttleMonkey for his post to /.

Posted in Uncategorized | Comments Off on Photosynth redefines “visualizing” the web

Reputation as case-based Identity

Michael O’Connor Clarke on Web 3.0 and Personal Reputation Management:

I feel the need for some secure, personal repository that would hold all of my connections and “whuffie” together. I want to keep my whuffie in my wallet – but not in a Microsoft Passport/Hailstorm kind of way. Ack, no.

It should include most elements of OpenID, a lot of FOAF, and maybe some of the stuff being worked on by the Attention Trust people.

I want it in XML, of course, and I want it to be incredibly easy to implement and use, as secure as it possibly can be, and extensible without being completely unmanageable.

Naturally, I’d want everyone to adopt it – from eBay to Amazon, Facebook to Flickr, Google to Microsoft to Yahoo.

This is a VRM perspective on reputation and it makes perfect sense (and the rest of the post is worth reading as well).

This immediately triggered an insight: identity seems to be inseparably bifurcated between assertions and reputation, between the direct and the indirect, or, in legalese, between the statutory and the case-based. The latter two terms, I think, are particularly useful.

Reputation is a critical missing piece in the Identity meta-system. The meta-system enables reputation–as infrastructure, you can build reputation with it–but I have yet to see good, concrete thinking about how to capture, build, leverage, and work with reputation in a general way. It’s still fairly fuzzy, despite its criticality. It’s a bit like the World Wide Web as a commerce platform in 1992. Sure, you could see how HTTP and HTML could enable widespread e-commerce, but few grokked a future made of SSL, shopping carts, pay-per-click, and affiliate marketing.

Yet I think that figuring out reputation is required to completely resolve the issues of Identity. Michael’s post focused on the isolated reputation silos at places like eBay and Equifax. A personal data-store containing our transaction history, feedback, and ratings is a great start for decentralizing identity, but it doesn’t address what makes reputation distinct from other aspects of Identity.

Think about this missing piece as the distinction between statutory and case-based Identity.

I like this reference because it draws on a useful distinction in the U.S. legal system. Statutory law is law the government explicitly enacts, typically through legislative bills signed by governors or the President; it also includes local city and county ordinances and the like. These are explicit rules, formally enforceable in court.

Case-based law, on the other hand, rests on how the courts have decided to interpret the law, drawing on all existing applicable statutes and prior case law. It is essentially a case-by-case distillation of the entire history of the jurisdiction in the matter at hand. It requires analysis and evaluation of the entire set of applicable laws and prior judgments, and it is the ultimate arbiter when statutory laws are in conflict, such as when state and federal law disagree.

Think of Identity as a combination of statutory and case-based claims. Since identity, in the Identity meta-system, is the sum of all claims about an entity or individual, I think it behooves us to understand more clearly the distinction between statutory and case-based claims.

So, I’d like to introduce two new terms into the Identity conversation: “Statutory Identity” and “Case-based Identity”.

Statutory Identity

Statutory Identity is based on the explicit assertions of fact made about me by Identity Providers (IDPs) as to my true nature, e.g., that I am a Sun Microsystems employee (I’m not, btw), of a certain age, or a US citizen. These easily fit into the “claims” architecture of the emerging Identity infrastructure, and Relying Parties can readily judge the validity of a particular claim based on the authority ascribed to the IDP. For example, the Department of Motor Vehicles is arguably definitive regarding my right to drive and authoritative for my age, but not authoritative for my current employment status.
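A rough sketch of how a Relying Party might filter statutory claims follows. The claim records and the authority table are invented for illustration and are not drawn from any actual Identity meta-system specification.

```python
# "Statutory" claims: explicit assertions by an Identity Provider, which a
# Relying Party accepts or rejects based on whether it treats that provider
# as authoritative for the attribute in question. All data here is made up.

claims = [
    {"issuer": "DMV", "subject": "joe", "attribute": "age_over_21", "value": True},
    {"issuer": "DMV", "subject": "joe", "attribute": "employer", "value": "Sun Microsystems"},
    {"issuer": "Employer HR", "subject": "joe", "attribute": "employer", "value": "Acme Corp"},
]

# Which issuers this particular Relying Party considers authoritative, and
# for which attributes. A different Relying Party could choose differently.
authoritative_for = {
    "DMV": {"age_over_21", "right_to_drive"},
    "Employer HR": {"employer"},
}

def accepted_claims(claims, authoritative_for):
    """Keep only claims whose issuer is authoritative for that attribute."""
    return [
        c for c in claims
        if c["attribute"] in authoritative_for.get(c["issuer"], set())
    ]

for c in accepted_claims(claims, authoritative_for):
    print(f'{c["issuer"]} asserts {c["attribute"]} = {c["value"]}')
# The DMV's age claim is accepted; its employment claim is not.
```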

Case-based Identity

In contrast, Case-based Identity is built from the accumulation of transactions (historical facts) or assertions of opinion and judgment by others. It is emergent or generative, and it is more a matter of judgment than fact. It is our reputation, as rendered by a particular method or a particular service based on a knowable and refutable set of data. For example, your credit rating is a construct of one of three credit bureaus; it represents their judgment about your creditworthiness. Rarely do these three sources agree, often because they base their judgments on differing data. Similarly, eBay generates its own reputation ranking based on feedback from transactions at its service. Both of these reputation architectures are (1) based on real transactions and (2) refutable through some appeals process.

The good news is that these underlying data points can readily be communicated via the Identity infrastructure. The bad news is that there is as yet no clear agreement about how to convert those facts into a reputation. Different folks have ideas, but we lack even a clear conceptual framework.

And yet, my identity is clearly both the factual statutory claims about me and the emergent reputation based on my history. While we have developed an architecture for the former, I think we are only beginning to establish a framework for the latter. Perhaps by considering reputation as case-based identity, we can start to outline the components required for such case-based systems to work:

  • transaction data (potentially including opinions of others)
  • algorithmic evaluation
  • refutation process

These may not be the definitive requirements for a reputation system, but they seem to be present in the working systems I know of and are perhaps a good starting point.
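To make those three ingredients concrete, here is a bare-bones sketch of a case-based reputation calculation. The scoring rule and the dispute handling are invented purely to show the shape of such a system; no real credit bureau or marketplace works this way.

```python
# Transaction data (the raw history), an algorithmic evaluation (the score),
# and a refutation process (an appeal that sets a record aside). All of the
# records and the scoring rule below are hypothetical.

transactions = [
    {"id": 1, "counterparty": "buyer_a", "outcome": "positive", "disputed": False},
    {"id": 2, "counterparty": "buyer_b", "outcome": "negative", "disputed": False},
    {"id": 3, "counterparty": "buyer_c", "outcome": "negative", "disputed": True},
]

def refute(transactions, transaction_id):
    """Refutation process: a successful appeal flags a record so the
    evaluation step ignores it, rather than silently deleting history."""
    for t in transactions:
        if t["id"] == transaction_id:
            t["disputed"] = True

def reputation(transactions):
    """Algorithmic evaluation: reduce the undisputed history to a score.
    Here, simply the share of positive outcomes."""
    usable = [t for t in transactions if not t["disputed"]]
    if not usable:
        return None  # no basis for judgment yet
    positives = sum(1 for t in usable if t["outcome"] == "positive")
    return positives / len(usable)

print(reputation(transactions))   # 0.5 before any further appeal
refute(transactions, 2)
print(reputation(transactions))   # 1.0 once the negative record is set aside
```

Note that the individual never edits the score directly; they only get a path to contest the underlying records, which is exactly the limited-control property discussed below.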

For the record, I think it is an even bet whether personal opinions can be effectively integrated as “transaction history” in a case-based identity system, given the challenges of emotions, grudges, slander, and the non-provability of opinions.

It is also a near certainty that, for certain types of case-based identity, the user will never be able to fully control the data-set. For example, I could significantly improve my credit score if I had read-write control over that data-set. Unfortunately, that would render the current system completely ineffective. Perhaps a new one could emerge, but there are other domains, such as criminal records, where an authoritative reputation requires a data-set with limited or heavily moderated user control–otherwise everyone would erase those pesky traffic violations.

Any suggestions for other elements in a good case-based identity system?

Posted in Identity, ProjectVRM, Vendor Relationship Management | 4 Comments