Answers to a few questions about VRM

Pignerol Antoine recently asked some questions about VRM and I thought I’d answer them publicly.

Is VRM really different from social CRM?

Yes, although exactly how depends on how you define social CRM. Based on my understanding, I would suggest that VRM is first and foremost about providing value for the user with any vendor, as opposed to using social networking tools with a particular vendor. VRM is vendor agnostic and silo-averse. The goal is to catalyze the development of tools for individuals through protocols and standards that let them work with any vendor seamlessly, without loss of functionality or services.

Does VRM work with a CRM?

Sure. A CRM is a company-centric system. Every company should pay attention to its customers and CRM is currently the best-of-class thinking on the enterprise-side for how to do that. Different VRM services act on behalf of the individual, yet still require connecting to enterprise systems. For things to be seamless, VRM services should marry into CRM services for fulfillment.

Can VRM be implemented in all kinds of business?

Yes. Any business can support VRM services and be compliant with general VRM principles. Ultimately, it will be as easy for a small company to be VRM compliant as it is for a small company to run a blog or a wiki today. That takes some level of technical sophistication, but it is within the grasp of any company that wants to invest a small amount of effort using freely available open source tools. Eventually, VRM will be available in the same way.

What’s needed for VRM to work?

We need to work through electronic marketplace issues from customers’ perspectives, with attention to the full power of relationships, finding consistent ways to create new value through the network. For the Standards Committee, that means a public conversation starting with users and requirements. Once that is vetted in an open source manner, we can explore particular implementations. We believe that with a well defined, high quality requirements specification, service providers will emerge to deliver those services.

As customers are looking for lower prices, don’t you think that Personal RFPs are gonna cost more for customers (because they are personalized offers)?

Two things here. First, I don’t think customers are just looking for lower prices. They are looking for better value.

https://blog.joeandrieu.com/2008/03/07/pricing-markets-and-demand-vrm-style/

One of my favorite examples of this is Shopatron’s business where they sell everything at 100% manufacturer suggested retail price, no discounts, no rebates:

https://blog.joeandrieu.com/2007/01/19/shopatron-redefines-vendor-relationships/

Second, the personal RFP is designed to eliminate transaction costs in the marketplace. Currently, product and vendor discovery is slow, expensive, and uncertain. That means buyers waste time and vendors waste advertising and lead generation dollars seeking the right match between needs and solutions. Any time transaction costs are reduced, you have an opportunity for better prices.

At the same time, Vendors will be discovering ways to provide more value to customers and the net result could easily be that customers will end up paying more for enhanced services or products. Ideally, this will mean that commodity products continue to drop in price while value-added customizations are welcomed by buyers and voluntarily paid for at a premium over the commodities.

What do you expect from VRM?

I expect it will take longer and be more work than any of us would prefer. However, I think that the concepts behind VRM, and hopefully our work developing standards and catalyzing working solutions, will enable a fundamental shift in the marketplace. As Doc Searls has said more than once, the industrial revolution is over: industry won. There is an incredibly powerful legacy of using computers and networks to help companies make more money (and create more value as they do so). Unfortunately, companies tend to think of themselves first, often to the detriment of overall economic benefit.

I see a world where every individual is engaged and empowered to get the most out of their relationships with vendors–vendors of all sizes. In that world, not only are individuals and vendors each getting and creating more value directly, the entire economy is operating at a higher efficiency as less money is spent on wasted advertising and product development and more is spent on fulfilling verified demand. This would supercharge Adam Smith’s invisible hand and provide a significant increase in aggregate global wealth for everyone. It takes the benefits of the zero-distance network and extends them efficiently into the domain of user-driven commerce.


R-cards “ah-hah!” at IIW

At last month’s Internet Identity Workshop and the subsequent Data Sharing Summit, Markus S and Drummond Reed unpacked several ideas about r-cards, which, to a certain extent, are an evolution of the Information Card at the heart of CardSpace.

Going into IIW, I understood r-cards simply as a hybrid of InfoCard’s managed and personal card models. Managed cards are issued by another party–all the data associated/transmitted with that card is controlled by that managing party–while personal cards are self-asserted, allowing individuals to serve as their own card provider, controlling all of the associated data. R-cards, then, allow a managing party to co-control a card with the user–with some data controlled by the managing party and some controlled by the user.

However, during the IIW demo of the r-card, I had an epiphany about how powerful the r-card is once we actually allow the user to manage the personal claims through multiple, dereferenceable links.

One issue that came up during the demo was that if the “personal” side of the r-card is manually entered claims, such as contact information, then the user is creating a management nightmare: duplicate claims would need to be entered and maintained across many different r-cards. The more r-cards, the worse the problem.

The “obvious” solution discussed at the session was to allow the user to designate specific claims that are served by other IdPs, such as a Personal Address Manager. And for completeness’ sake, let’s note that such claims could be mashed up from multiple other IdPs, not just a single one. Thus, any number of claims from a particular IdP could act as a sort of sub-card, combining with other sub-cards at presentation time.
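
To make the idea concrete, here is a minimal Python sketch of that presentation-time mashup. Everything here is illustrative–the registry, function names, and claim names are my own assumptions, not part of any Information Card specification:

```python
# Hypothetical sketch of an r-card whose "personal" claims are links to
# other identity providers (IdPs), dereferenced and merged at presentation
# time. All names here are illustrative, not drawn from any spec.

# A registry standing in for remote IdPs that serve claims on demand.
IDP_REGISTRY = {
    "personal-address-manager": {"postal_address": "123 Main St, Santa Barbara, CA"},
    "employer-idp": {"job_title": "Engineer", "employer": "Example Corp"},
}

def dereference(idp_id, claim_name):
    """Fetch a single claim from the IdP that serves it (stubbed here)."""
    return IDP_REGISTRY[idp_id][claim_name]

def present_r_card(managed_claims, linked_claims):
    """Combine managing-party claims with user-controlled, linked claims.

    managed_claims: claims asserted by the card's managing party.
    linked_claims: claim name -> IdP that serves it; each acts as a
    "sub-card" combined with the others at presentation time.
    """
    presentation = dict(managed_claims)  # the managing party's side of the card
    for claim_name, idp_id in linked_claims.items():
        presentation[claim_name] = dereference(idp_id, claim_name)
    return presentation

card = present_r_card(
    managed_claims={"customer_id": "UA-12345"},        # vendor-controlled
    linked_claims={
        "postal_address": "personal-address-manager",  # user-controlled
        "job_title": "employer-idp",
    },
)
```

Note that the user maintains the address in exactly one place; every r-card that links to it stays current automatically, which is what dissolves the duplicate-claims management nightmare.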

The net result is the realization that perhaps the most interesting thing about r-cards is their use as dynamic cards, aggregate cards, or mashup identity cards.

That’s pretty cool in itself.

However, it also struck me that this potentially fixes usability problems around authorizing a bunch of vendors (M) to access identity claims at a variety of different identity providers (N). Without aggregation, that requires N points of authorization and authentication for each of the M vendors (or relying parties). Sub-cards (or r-cards) may combine that task at the point of presentation for much greater user understanding and simplicity.

Since the Card Selector is itself a trusted point of authorization, we should be able to use the “mashup” gesture as explicit authorization for relying parties to access the claims specified in the sub-cards. That is, the UI of creating the r-card/mashup card/dynamic card also explicitly approves access to specific claims from multiple IdPs, since after all, the selector is where you select which claims to present to relying parties.

This adjustment to the Information Card ceremony greatly simplifies the user experience, while retaining all the power of distributed claims at appropriate IdPs. For example, it would allow me to specify my Passport # to United Airlines, as a verifiable claim served by the US Secretary of State IdP (which should be trusted by UA), streamlining any international travel I might do, while retaining my contact info at my Personal Address Manager. All with the same authorization ceremony I use with any information card relying party.
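
The ceremony might be sketched like this. The grant model and every name in it are hypothetical; the point is only to show how a single selection gesture could authorize each (relying party, IdP, claim) triple:

```python
# Illustrative sketch (not any real CardSpace/Higgins API) of the selector
# gesture doubling as authorization: presenting a mashup card records an
# access grant for each (relying party, IdP, claim) triple in one step.

GRANTS = set()

def select_card(relying_party, linked_claims):
    """Simulate the selector ceremony: one user gesture authorizes the
    relying party to fetch each linked claim from its serving IdP."""
    for claim_name, idp_id in linked_claims.items():
        GRANTS.add((relying_party, idp_id, claim_name))

def idp_allows(relying_party, idp_id, claim_name):
    """An IdP checks for a grant before releasing a claim to a relying party."""
    return (relying_party, idp_id, claim_name) in GRANTS

# Presenting one card to United authorizes exactly these two claim releases.
select_card("united.example.com", {
    "passport_number": "state-dept-idp",
    "postal_address": "personal-address-manager",
})
```

The user performs one gesture instead of N separate authorizations, yet each IdP still releases claims only to the relying party named in the grant.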

This realization was, for me, the most surprising insight into the power of the r-card. In fact, I’m wondering if the name “r-card” captures it best.


Bandit, Higgins, Open Source, Profit and Novell

At EIC2008 last month, Dale Olds of Novell’s Bandit Project gave me a few minutes and some insight into how Novell (and others) are mixing open source with proprietary software to architect a whole new Identity paradigm online.

I’ve been following the user-centric Identity movement ever since Doc Searls talked me into attending IIW2006b, an unconference. EIC is a classic Enterprise technology sales conference on identity management. The two events couldn’t be more different, even though both have excellent content and are focused on Identity. EIC was all about big business selling to each other, while IIW is all about engineers making user-centric Identity work.

Identity? A lot of you are familiar with the term, but for those who might not know what I mean, I’m talking about how people authenticate themselves for access to online systems. Traditionally based on usernames and passwords, online Identity presents a host of problems, not the least of which is that an individual may have dozens or even hundreds of different usernames and passwords, one for each new web service or corporate LAN accessed. This proliferation is itself a security risk, as people reuse passwords despite the best efforts of zealous IT gurus everywhere. It is also an information management nightmare: how are we supposed to remember all of that? That burden reinforces both the reuse of passwords and the typically insecure password-reset process. Today’s identity management software provides solutions to this problem, largely through federation and user-centric Identity.

In short, federation is how corporate IT systems rely on other corporate systems–provided by other departments or even other companies–to authenticate your identity and share information about you. It can be used for authentication, or as in the case of Facebook’s Beacon, it can be used to pass on highly sensitive personal data. (Blockbuster is now in a lawsuit over this, which I expect they’ll lose.) As Doc Searls likes to put it, federation is about large companies having safe sex with each other, using your data. You can see how this starts to relate to your offline identity, as bits and pieces of your data trail could be used to build a profile and steal your identity or use it for other nefarious purposes, like spamming you with “targeted” ads.

In contrast, user-centric Identity is an architecture where individuals present the credentials of their choice for authentication at online services. Instead of the vendor-to-vendor systems integration and trust contracts of federation, “Relying Parties” authenticate a visitor by relying on the Identity services of an “Identity Provider” of the visitor’s choice. Relying parties may not accept all ID Providers, but in general, the choice of who authenticates your identity lies with you. Key technologies in this space are OpenID, InfoCards, and a variety of standards from the Liberty Alliance. These are the core of the conversation at IIW.

Of course, you can do federation with a user-centric Identity architecture; that’s not the point. The point is that in the user-centric world, the user is in charge of their identity. Or, as Doc Searls advocates, in the user-driven world, the user is driving the transaction.

So, when I sat down with Dale at EIC, I had already heard about Bandit—I even have the t-shirt—yet, I was wondering how Bandit fit into the whole mash up of technology behind user-centric Identity. I know that OpenID is a URL-based approach for identity that has generated significant traction because it is easy for relying parties to implement and for tech-savvy users to use. I also know that Higgins and CardSpace both implement Information Cards, or InfoCards: one an open source, extendable client and server implementation, the other a polished proprietary client app from Microsoft. I even had some inkling of the various protocols created and under development by the Liberty Alliance, which started life as a federation standards group and has embraced user-centric approaches as it builds out its services stack. And I even knew about Sxipper and Vidoop, the first a client application that helps users manage their identity presentation online, whether the online services are user-centric or not, and the latter an Identity Provider with a unique method for verifying that you are you.

But what I didn’t quite get was how Bandit fit into it all. I know they are supporters of Higgins and Information Cards, but is Bandit a client app like Sxipper? A card selector like CardSpace? Is it a server implementation that could be used by companies like Vidoop? Is it open source and if so, how does it fit into Novell’s business model?

Dale was able to make it fairly clear: Bandit is an open source project supported by Novell. Bandit provided the card selector for the Higgins project and participates in OSIS (Open Source Identity Systems), a working group of the Identity Commons made up of different Identity technology providers working towards interoperability. They also support the soon-to-be-announced InfoCard Foundation, although there have been no official announcements by anyone yet about that particular project. Novell, as a separate entity, is putting engineering and organizational resources into these open source and interoperability efforts because they see a bright future in selling Identity management tools once the Internet is Identity-enabled.

That’s when the light went on. Bandit is about helping create the entire infrastructure of Identity, the Identity Meta-System, as Kim Cameron calls it. Once that infrastructure is in place, Novell will be able to sell companies a number of tools that make it easy to leverage that infrastructure. As Dale put it, the open source part of this is about enabling Identity: assuring that the basic plumbing and services are present and understood. The subsequent business model is helping companies manage identity, once we have the essential plumbing in place.

Think of it like HTTP and HTML enabling the World Wide Web, while products like ColdFusion, IIS, and Drupal help companies manage web services. The web wouldn’t exist without the open source gift from CERN some fifteen years ago, and without that underlying plumbing of protocols and formats, software providers like Netscape, Microsoft, IBM, Sun, and Novell wouldn’t have made a dollar selling web technologies to anyone. Instead, with a web-enabled world, literally thousands of companies competed to provide web software, making billions of dollars in the process.

Novell sees a similar dynamic with Identity. Clearly, so do Microsoft, Sun, and hundreds of other companies.

So do I. And it looks pretty damn cool from here.

p.s. My apologies for the lack of links and images. I realized I’d better post this before the real-time world overtakes me. I hope to see a bunch of you at IIW.

p.s. bonus link: Doc Searls on vendors bankrolling open source.


Running the Numbers

Bart Stevens recently suggested a breakdown on the potential economic impact of VRM, based largely on a post by Steve Rubel arguing that $1B is wasted in online advertising today.

First, I anticipate that the Personal Data Store will become a design pattern that underlies other VRM services, rather than a service by itself. In fact, a PDS isn’t really a PDS unless it enables VRM services explicitly… Personal Data Stores aren’t just online storage like Amazon’s S3.

Second, I think the $1 billion number is far too small. Steve is only estimating the CPM costs for display ads that are literally missed by users during eye tracking studies. That’s an intriguing number because those ads truly are wasted… there isn’t even any brand exposure because the ads are not even seen. It’s like paying for ads in a magazine that is never opened by a real reader.

On the other hand, there are still plenty of ads that are seen by the wrong people and CPC ads that are clicked on by the wrong people. Note that for the “right” people, those ads arguably generate useful brand exposure, so they aren’t wasted.

When advertising starts with the advertiser, it inherently wastes money, as it inevitably buys placement in ineffective or misaligned media. By now it is an old chestnut that advertisers waste half their budget–they just don’t know which half. Sometimes advertising is an investment in exploring potential markets… the goal is the data gained in the test marketing, which isn’t entirely a waste. Other times advertising is educational outreach where the goal isn’t so much to trigger a sale, but instead to introduce people to new products and services. Sometimes this is called demand generation. And that still leaves a vast amount of waste, buying media (offline or online) that just doesn’t perform or create any value. The potential savings in these areas is not only missing from Rubel’s analysis, I’d wager it is far more than $1 billion.

The huge potential of VRM is to turn these models inside-out, by providing a scalable pipeline directly into the product development and sales divisions of capable firms. Instead of Vendors guessing what people want, VRM services can cost-effectively tell Vendors what people truly do want. If the product is available, the sales team can enable purchase and delivery. If the product doesn’t exist, the Vendor can create it if demand is sufficient.

This new paradigm is exactly the shift from Attention to Intention that Doc and I have been advocating. The Attention game is the world of traditional advertising, where the industrial manufacturer competes in mass media to get the attention of the right consumers in order to generate demand for their products and services. Given that attention, they seduce, cajole, and entertain in hopes of winning new sales.

The Intention game, on the other hand, starts with explicit requests from the user to fulfill actual demand. Sometimes that intention will be nascent, needing further exploration and discovery. But eventually, for the segment of the population that finds something they want or need, that intention shifts from educating oneself about available options to seeking specific satisfaction, that is, buying a solution. Because intention starts with the user’s commitment to take the relationship to the next level, it immediately takes a vast amount of guesswork and wasted advertising out of the equation.

This guesswork and wasted advertising is probably closer to $100 billion/year, but that’s just my gut feeling. And that number only addresses the loss side of the equation, that is, the money we save by not wasting product development and advertising dollars. It ignores the value of products and services that today languish as innumerable missed opportunities–missed because companies have no way to efficiently gauge true market demand. There are undoubtedly services and products that exist–or could be profitably offered today–which fail to reach customers because we don’t have a suitable mechanism for connecting the right customers with the right companies. This potential to close the gap between possible sales and unmet demand is simply too large to estimate.

The Cost-Per-Action/Pay-for-Performance business model of Affiliate Marketing is likely to continue to transform the ad industry, significantly reducing billions in unnecessary expenses, including the $1B wasted on unseen display ads in Rubel’s analysis.

It won’t be until we transform explicit intent into new offerings and new sales that we unleash the vast potential that is VRM.


Zen and Technology

I’m not sure how I found it, but today I discovered a bit of a gem in the blogosphere: ValleyZen.

For a quick taste, check out the interview with Drue Kataoka on View from the Bay. It is amazing how a few simple words can have such a profound visceral impact.

Drue’s suggestions resonate with my user-centric world-view:

  1. SIMPLIFY
    Focus on what’s important. Eliminate what’s not.
  2. IMMEDIACY
    React to the moment — not to your fears and concerns.
  3. BREAK YOUR RHYTHM
    Surprise yourself and those around you.
  4. BE CALM
    Find Tranquility in Action.
  5. GREEN FROM THE INSIDE OUT
    Begin with your own personal ecosystem.

Take time for yourself, reconnect and put things in perspective, and engage the world on your own terms, in the moment, sustainably.

When redefining technology in personal terms, Drue’s take on Zen packs a powerful punch.


On VRM and Standards

Phil Whitehouse recently served up some nuggets to stimulate conversations at next week’s VRM2008 in Munich.

I’ve been thinking a lot about VRM lately. Not so much about what it means, but rather the mechanism of how it can work.

If you’re new to VRM, it can be summarised like this: it’s the reciprocal of CRM. Rather than being bombarded with advertising, much of which is irrelevant, and the rest irritating, wouldn’t it be nice if you could just tell vendors what you want, on your terms? Without even going to the trouble of looking for them? If they’re willing and able to respond, they do so. Everyone else goes on their merry way. It’s all about sharing the data you want with the people you want.

Some examples from Doc Searls (Cluetrain Manifesto dude), who heads up the VRM project, include:
I want to:

– Buy a power converter near St. Paul’s in the next three hours, at any price
– Buy a stroller for twins near Highway 70 in Kansas today for under $300
– Buy an Apple laptop with a 500GB HDD that weighs under five pounds, as soon as it comes out
– Buy a double decaf cappuccino at the next exit on this highway
(You can see more examples presented by Doc on this photo)
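
Intents like those above could be expressed as structured personal RFPs that vendors filter on their own side. This is a speculative sketch; the schema and field names are mine, and no VRM standard defines any of this yet:

```python
# Hypothetical personal RFPs: structured, user-originated demand that
# vendors can match against. Field names are illustrative only.

personal_rfps = [
    {
        "want": "power converter",
        "where": "near St. Paul's",
        "deadline_hours": 3,
        "max_price": None,  # "at any price"
    },
    {
        "want": "stroller for twins",
        "where": "near Highway 70, Kansas",
        "deadline_hours": 24,
        "max_price": 300,
    },
]

def matches(rfp, offer):
    """A vendor-side filter: respond only if willing and able.

    Vendors that can't satisfy the request stay silent and go on
    their merry way; the user never sees irrelevant offers.
    """
    if rfp["max_price"] is not None and offer["price"] > rfp["max_price"]:
        return False
    return offer["item"] == rfp["want"]

offer = {"item": "stroller for twins", "price": 250}
```

The key inversion is that the filter runs against the customer’s stated intent rather than against a demographic guess about who might be persuadable.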

There are a few big problems that need solving. Filtering is one (both on the outbound request and the way back in), targeting is another (how do you choose which vendor to share your data with?), organisation is a third (by what mechanism do customers agree to share their data, and in what form, while retaining control over it?).

I don’t know much about establishing standards. My erstwhile colleague Paul Downey, on the other hand, represents BT at the W3C and thus knows a bit about standards. He sez this will be a hard problem to crack, and he’s probably right. Big question: to what extent would we, the customers, allow brokers to help create this standard?

My view is this problem needs to be overcome before VRM can move forward, regardless of whether brokers are involved.

Good stuff. As the chair of the Standards Committee for Project VRM, it might be obvious that I think we need to create some standards.

At the end of the day, interoperability requires either standards or one-to-one interoperability engineering. The user-centric Identity movement has grown like crazy in the last few years largely because a hybrid of these approaches has been used, as OpenID, Higgins, CardSpace, and Liberty (among others) took their 1.0 products and figured out how to make them work together, leveraging standards like WS-* and SAML as they did so. The nice thing about standards is that once they are in place, they reduce an O(n^2) problem, where every software vendor has to coordinate with every other vendor, to an O(n) problem where each software vendor coordinates to the standard.
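
The arithmetic behind that claim is easy to make explicit: with n vendors, pairwise engineering needs n(n-1)/2 integrations, while a shared standard needs one adapter per vendor.

```python
# Why a standard turns an O(n^2) coordination problem into an O(n) one.

def pairwise_integrations(n):
    """Every vendor engineers an integration with every other vendor."""
    return n * (n - 1) // 2

def standard_integrations(n):
    """Every vendor engineers one adapter, against the shared standard."""
    return n

# With 20 vendors: 190 pairwise integrations versus 20 against a standard.
```

At 20 vendors the gap is already nearly tenfold, and it widens with every new entrant, which is why interoperability efforts gravitate toward standards despite how slowly they develop.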

The problem with standards is they are slow to develop. But once you have some apps and some standards at the 1.0 level, the efforts towards interoperability can get serious traction, like they did with the user-centric Identity movement.

I’m hoping we can engender a similar development cycle with VRM. We need both working applications and formal standards and specifications, especially with regard to data formats and communications protocols.

I’ll diplomatically disagree and agree with Bart (read his comment on the original post) regarding leaving standards to others. On the one hand, we should leverage existing work as much as possible. For example, I see Higgins and XRI playing a major enabling role for us. On the other hand, while the Dataportability and Higgins guys are doing great work, they are not necessarily solving the problems VRM has set out to tackle, namely reinventing the marketplace on behalf of individuals while creating more value for vendors.

As an example, the Dataportability movement has framed the problem in terms of Data and Portability. This brings to mind exporting and importing “my” data from vendor to vendor. That’s a start toward liberating users from vendor silos. However, I think the real win is in user-centric services, where the location of the “data” is essentially irrelevant–even as it is hosted under the control of the user–and all user-authorized vendors can access the data through approved services.

That’s the idea behind the Personal Address Manager, which we’ll be discussing in Munich. Your actual postal address isn’t that much of a problem from a dataportability perspective. It’s just a few lines to enter and no real need to “export” it from some vendor’s silo. However, when you change your address, it would be nice for the new address to automatically propagate to those authorized to get it. Or, for more sophisticated vendors, to have the address provided on demand, so that they never send postal mail to the wrong address. Such a service would be automatically discoverable by vendors using the Identity layer to authenticate and authorize exactly who gets it.
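
A rough sketch of that address-on-demand behavior, with entirely hypothetical names; a real service would sit behind the Identity layer for discovery, authentication, and authorization rather than a simple in-memory set:

```python
# Sketch of a Personal Address Manager: vendors hold no copy of the
# address. They ask at mailing time, and only if the user authorized them.

class PersonalAddressManager:
    def __init__(self, address):
        self._address = address
        self._authorized = set()

    def authorize(self, vendor):
        """The user grants a vendor on-demand access to the address."""
        self._authorized.add(vendor)

    def update_address(self, new_address):
        # One change here "propagates" automatically: the next on-demand
        # lookup by any authorized vendor sees the new address.
        self._address = new_address

    def address_for(self, vendor):
        """Serve the current address, but only to authorized vendors."""
        if vendor not in self._authorized:
            raise PermissionError(f"{vendor} is not authorized")
        return self._address

pam = PersonalAddressManager("12 Old Rd, Springfield")
pam.authorize("catalog.example.com")
pam.update_address("34 New Ave, Shelbyville")
```

Because the vendor queries at mailing time instead of caching the address, it can never send postal mail to a stale address, which is the whole point of the service over simple import/export portability.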

I see the job of VRM as working through these scenarios from the user’s perspective and ensuring the development of enough standards and technology for a complete implementation.

In any case, I’m looking forward to seeing Phil and Bart at VRM2008. There’s plenty of room to continue this conversation. Join us if you can; it should be fun. =)


Majority of Americans dislike unauthorized use of behavioral data

From Yahoo News:

Majority Uncomfortable with Websites Customizing Content Based on Visitors’ Personal Profiles

Level of Comfort Increases When Privacy Safeguards Introduced

ROCHESTER, N.Y.–(BUSINESS WIRE)–A majority of U.S. adults are skeptical about the practice of websites using information about a person’s online activity to customize website content. However, after being introduced to four potential recommendations for improving websites’ privacy and security policies, U.S. adults become somewhat more comfortable with websites’ use of personal information.

Good stuff, although one should read closely to understand exactly what users dislike. Customization isn’t the problem… it’s the unauthorized invasion of privacy. The questions asked by Harris were rather leading. It would be interesting to see what people say to “If asked, would you allow a search engine to provide enhanced results based on your behavior?” My understanding is that most people do opt in to the advanced features of Google Desktop, which asks essentially the same question at install time. People don’t like surreptitious activities, but if you ask up front, it’s much easier for folks to say yes.


Dataportability podcast interview

Here’s yours truly with Trent Adams and Steve Greenberg of Dataportability, talking about VRM. Also in the podcast: dataportability news and Kaliya Hamlin on the Data Sharing Summit.


BT busted for unauthorized tracking of user activity

The title says it all, as reported by the Guardian:

BT admits tracking 18,000 users with Phorm systems in 2006

Bummer. I kinda like BT.


Law enforcement v Minimal disclosure

The Washington Post today exposed considerable excesses by “fusion” centers organized post-9/11.

Intelligence centers run by states across the country have access to personal information about millions of Americans, including unlisted cellphone numbers, insurance claims, driver’s license photographs and credit reports, according to a document obtained by The Washington Post.

Dozens of the organizations known as fusion centers were created after the Sept. 11, 2001, terrorist attacks to identify potential threats and improve the way information is shared. The centers use law enforcement analysts and sophisticated computer systems to compile, or fuse, disparate tips and clues and pass along the refined information to other agencies. They are expected to play important roles in national information-sharing networks that link local, state and federal authorities and enable them to automatically sift their storehouses of records for patterns and clues.

The list of information resources was part of a survey conducted last year, officials familiar with the effort said. It shows that, like most police agencies, the fusion centers have subscriptions to private information-broker services that keep records about Americans’ locations, financial holdings, associates, relatives, firearms licenses and the like.

Centers serving New York and other states also tap into a Federal Trade Commission database with information about hundreds of thousands of identity-theft reports, the document and police interviews show.

Pennsylvania buys credit reports and uses face-recognition software to examine driver’s license photos, while analysts in Rhode Island have access to car-rental databases. In Maryland, authorities rely on a little-known data broker called Entersect, which claims it maintains 12 billion records about 98 percent of Americans.

In its online promotional material, Entersect calls itself “the silent partner to municipal, county, state, and federal justice agencies who access our databases every day to locate subjects, develop background information, secure information from a cellular or unlisted number, and much more.”

“There is never ever enough information when it comes to terrorism,” said Maj. Steven G. O’Donnell, deputy superintendent of the Rhode Island State Police. “That’s what post-9/11 is about.”

The last statement pretty much sums up current institutional thinking on individual liberty and national security: in the fight against terrorism, we have a moral obligation to do everything we can. Everything.

It’s scary how much that position echoes that of fascism. As promoted by Mussolini, fascism builds a moral framework based on the primacy of the state. Fascio, the root of the term, means a bundle of sticks, symbolizing that the group is stronger than any individual. Fascism extends that thinking, declaring that each individual’s rights exist only insofar as they support the state. Or to restate: in the defense of the state, there are no individual rights.

Which, if you think about it, is exactly what anti-terrorist programs assert when claiming that terrorism trumps the rights and privileges of the suspect or accused. Due process, protection from unreasonable searches, freedom of speech. All of these rights have been trampled on in the name of the War on Terror. The fusion centers are just one more institution created by the mindset that brought us illegal wiretaps, extraordinary rendition, secret prison camps, extra-territorial detention, and torture.

I understand law enforcement’s position. It is easier to enforce laws when you know everything about everyone, just like in a police state (see The Lives of Others for an Academy Award-winning story of pre-information age East Germany’s police state). But it is impossible for a police state to generate the economic and social well-being that emerges in a free society… and it is that well-being which, ultimately, is the core of U.S. global power. Simply put, undermining freedom undermines U.S. security.

In contrast, consider the subtle brilliance of Kim Cameron’s Laws of Identity, in particular, law 2:

2. Minimal Disclosure for a Constrained Use

The solution that discloses the least amount of identifying information and best limits its use is the most stable long-term solution.

We should build systems that employ identifying information on the basis that a breach is always possible. Such a breach represents a risk. To mitigate risk, it is best to acquire information only on a “need to know” basis, and to retain it only on a “need to retain” basis. By following these practices, we can ensure the least possible damage in the event of a breach.

At the same time, the value of identifying information decreases as the amount decreases. A system built with the principles of information minimalism is therefore a less attractive target for identity theft, reducing risk even further.

By limiting use to an explicit scenario (in conjunction with the use policy described in the Law of Control), the effectiveness of the “need to know” principle in reducing risk is further magnified. There is no longer the possibility of collecting and keeping information “just in case” it might one day be required.

The concept of “least identifying information” should be taken as meaning not only the fewest number of claims, but the information least likely to identify a given individual across multiple contexts. For example, if a scenario requires proof of being a certain age, then it is better to acquire and store the age category rather than the birth date. Date of birth is more likely, in association with other claims, to uniquely identify a subject, and so represents “more identifying information” which should be avoided if it is not needed.

In the same way, unique identifiers that can be reused in other contexts (for example, drivers’ license numbers, Social Security Numbers, and the like) represent “more identifying information” than unique special-purpose identifiers that do not cross context. In this sense, acquiring and storing a Social Security Number represents a much greater risk than assigning a randomly generated student or employee number.

Numerous identity catastrophes have occurred where this law has been broken.

We can also express the Law of Minimal Disclosure this way: aggregation of identifying information also aggregates risk. To minimize risk, minimize aggregation.
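The minimal-disclosure principle is straightforward to apply in practice. Here is a hypothetical sketch in Python (the function names, category thresholds, and record fields are invented for illustration, not drawn from the Laws of Identity): a system that needs an age check retains only a coarse age category rather than the birth date, and assigns a random special-purpose identifier instead of reusing an SSN.

```python
import secrets
from datetime import date

def age_category(birth_date: date, today: date) -> str:
    """Derive a coarse claim instead of retaining the birth date,
    which is far more identifying in combination with other claims."""
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

def new_context_id() -> str:
    """A random special-purpose identifier: meaningful only within
    this system, so unlike an SSN it cannot link records across
    contexts if breached."""
    return secrets.token_hex(16)

# What the system retains: the minimal claims, not the raw inputs.
record = {
    "member_id": new_context_id(),
    "age_category": age_category(date(1980, 6, 15), date(2008, 3, 1)),
}
print(record["age_category"])  # → adult
```

If this record leaks, an attacker learns only that some internally-numbered member is an adult: aggregation of identifying information has been minimized, so the aggregated risk is minimized with it.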

Whether or not you think the War on Terror is being handled well, it is a demonstrable fact that human systems fail. People make mistakes. And that means we can guarantee that institutions–even when acting in our own best interest–will make mistakes, like the admitted errors of the FBI, as reported by the NYT:

F.B.I. Made ‘Blanket’ Demands for Phone Records

WASHINGTON — Senior officials of the Federal Bureau of Investigation repeatedly approved the use of “blanket” records demands to justify the improper collection of thousands of phone records, according to officials briefed on the practice.

Under the USA Patriot Act, the F.B.I. received broadened authority to issue the national security letters on its own authority — without the approval of a judge — to gather records like phone bills or e-mail transactions that might be considered relevant to a particular terrorism investigation. The Justice Department inspector general found in March 2007 that the F.B.I. had routinely violated the standards for using the letters and that officials often cited “exigent” or emergency situations that did not really exist in issuing them to phone providers and other private companies.

F.B.I. Says Records Demands Are Curbed

WASHINGTON — The Federal Bureau of Investigation improperly obtained personal information on Americans in numerous terrorism investigations in 2006, but internal practices put in place since then appear to have helped curtail the problems, Bush administration officials said Wednesday.

The Justice Department’s inspector general is expected to issue a report in coming weeks that updates the findings of a major investigation last year into the F.B.I.’s use of so-called national security letters, which allow investigators to obtain telephone, e-mail and financial information on people involved in investigations without a court warrant.

Last year’s report caused an uproar in Congress when it was disclosed that the F.B.I., under powers granted by the USA Patriot Act, had misused its authority to gather records in thousands of instances from 2003 to 2005. The new report from the inspector general will examine the bureau’s use of the records demands in 2006.

At the end of the day, this isn’t about any particular individual, nor even any particular violation of our constitutional rights.

It’s about addressing the systemic problems of the information age. There will always be threats to national security. There will always be the drive to get as much data as possible into the hands of a few, elite law enforcement agencies, capable of acting in the “public good”. And there will always be those individuals who break the rules, whether with good intent or malicious design. We don’t need conspiracy theories to point out the dangers of centralizing all the information about everybody.

What we need is an open-eyed approach to building information systems on user-centric principles, such as Cameron’s seven Laws of Identity. Do that and a vast number of systemic risks of the information age go away.

Posted in Identity | Comments Off on Law enforcement v Minimal disclosure