On information leakage, Bruce Schneier comments:
It's easy to say "we haven't seen any cases of fraud using our information," because there's rarely a way to tell where information comes from. ... Everyone thinks their data practices are good because there have never been any documented abuses stemming from leaks of their data and everyone is fooling themselves.
Many years ago, when I worked on some information systems for direct mail marketing, it was standard practice to include fictional entries in a mailing list, which allowed for the rapid detection of abuse. In this context, abuse generally means using the mailing list for a purpose not authorized by the mailing list owner/administrator, and/or without proper payment. The data owner has an incentive to control abuse, because abuse degrades the value of the data to the owner. The relationship between the data owner and the data user is one of provisional trust, with retrospective sanctions whenever abuses of trust come to light. This relationship works because of the detection mechanism. And the mechanism works because the data user cannot discriminate between the fictional entries and the real ones.
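To make the mechanism concrete, here is a minimal sketch of seeding a licensee's copy of a mailing list with fictitious entries and tracing abuse back to that licensee. All names, the example.org domain, and the helper functions are illustrative assumptions, not anything from the original systems described above.

```python
import secrets
from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    name: str
    address: str

# Plausible-looking name parts so decoys blend in with real subscribers.
FIRST = ["Pat", "Chris", "Sam", "Lee", "Morgan"]
LAST = ["Harker", "Quill", "Vance", "Renshaw", "Dolan"]

def make_canary(domain="example.org"):
    """Build one fictitious entry that looks like an ordinary subscriber."""
    name = f"{secrets.choice(FIRST)} {secrets.choice(LAST)}"
    local = name.lower().replace(" ", ".") + "." + secrets.token_hex(2)
    return Entry(name, f"{local}@{domain}")

def seed_copy(real_entries, licensee, n_canaries, registry):
    """Return the list handed to a licensee, salted with decoy entries.

    The owner records which decoy belongs to which licensee in a private
    registry; the licensee sees only an ordinary-looking list and cannot
    tell the fictional entries from the real ones.
    """
    canaries = [make_canary() for _ in range(n_canaries)]
    for c in canaries:
        registry[c.address] = licensee
    return sorted(real_entries + canaries, key=lambda e: e.name)

def trace_abuse(recipient_address, registry):
    """If unauthorized mail reaches a decoy address, identify the licensee
    whose copy of the list was misused; returns None for a real address."""
    return registry.get(recipient_address)

# Usage: salt a copy for a hypothetical licensee, then trace a hit later.
registry = {}
real = [Entry("Alice Jones", "alice@example.com")]
licensee_copy = seed_copy(real, "acme-mailings", n_canaries=3, registry=registry)

decoy_address = next(iter(registry))
print(trace_abuse(decoy_address, registry))        # -> "acme-mailings"
print(trace_abuse("alice@example.com", registry))  # -> None
```

The key design point, under these assumptions, is that the mapping from decoy to licensee lives only in the owner's private registry, so the data user has no way to filter the decoys out before misusing the list.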
So why doesn't this work for the current spate of privacy violations and identity theft vulnerabilities, assuming the fictional entries are properly constructed? There are some technical considerations and some social considerations (including regulation), but the value of such a mechanism should be obvious.
Technorati Tags: identity theft, privacy, security, trust