What does the anonymization of data—the masking of private information by using a single, unchanging identifier to hide connections between data and data subject (also known as “static anonymity”)—have in common with the tiresome kabuki theater that the U.S. Transportation Security Administration (TSA) requires us to go through at airport checkpoints? Not much, it would appear. But on closer examination, both encourage complacency by fostering not only a false sense of security but a false sense of utility as well.
We’re not discounting the value of anonymization; it powered the growth of the Internet. But today, technology, markets, applications and threats have evolved while the protocols to keep personally identifiable data anonymous have not. If we are to mine the vast potential of data analytics to create high-value products and services that improve and even save lives while meeting the privacy expectations of the public and regulators, we need new tools and thinking.
Stop Patting Down Grandma
To understand the problem, consider the great failure of TSA airport screening. By treating every passenger as an equal threat, the agency collects more data than it can possibly analyze for patterns of terrorist activity, violating privacy for no real gain in security or utility. That’s why aviation security experts are calling for the TSA to vary screening intensity based on passenger profiles, acknowledging that 88-year-old grandmothers are unlikely to be carrying explosives.
Big data merely compounds the TSA’s error. In data analytics, the phrase “You can have privacy or value, but you cannot have both” is accepted as an axiom, but it’s actually a dangerous fallacy. Zero privacy reduces the value of data because it does not filter out anyone or anything, leaving too many choices and an excess of noise. Zero privacy can also subject an identifiable data subject to potential discrimination and harm while exposing data processors to potential liability.
On the other hand, complete data anonymity restricts the relevant data that could be used to protect individual health and safety while fueling useful, valuable products and services. The irony, of course, is that static de-identification schemes, which can make data all but useless to authorized users, can be easily broken by unauthorized ones. The result? A false sense of security—in which privacy protection satisfies neither consumers nor regulators—and utility, in which our all-or-nothing privacy approach either buries organizations in irrelevant data or denies it to them altogether.
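The linkage weakness described above can be made concrete with a minimal Python sketch. The datasets, field names, and the static pseudonym "user_7f3a" are all hypothetical; the point is only that an unchanging identifier lets separately released datasets be joined back into an identifying profile.

```python
# Hypothetical releases that each look "anonymized" on their own:
# the direct identifiers are replaced by one static pseudonym.
health = {"user_7f3a": {"diagnosis": "asthma"}}
purchases = {"user_7f3a": {"zip": "80302", "birth_year": 1951}}

# Because the pseudonym never changes, anyone holding both releases
# can join them on the shared key and re-assemble a rich profile.
profile = {}
for dataset in (health, purchases):
    profile.update(dataset["user_7f3a"])

# 'profile' now combines sensitive fields that were published apart,
# and quasi-identifiers like zip + birth year can re-identify the subject.
print(profile)
```

This is the classic re-identification-by-linkage pattern; no cryptography needs to be broken, only a join performed.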
Dynamic Data Obscurity: Bridging the Privacy-Value Gap
“When one believes in accountability-based information policy management, one is always looking for controls that are effective and will be trusted by enforcement agencies. Controls are what make it possible for an organization to make promises and be able to demonstrate their integrity. Controls are a combination of policies with penalties and the technology tools to make those policies work. … We believe the solutions are part of a field we have begun to call ‘Dynamic Data Obscurity.’ Dynamic data obscurity involves obscuring data down to the element level when that level of security is necessary, and making sure that rules which control when elements can be seen are real and enforced. Dynamic data obscurity is also about making the technology controls harder to break but still allowing for appropriate uses. It requires new technologies combined with effective internal monitoring and enforcement.”
Dynamic data obscurity improves upon static anonymity by moving beyond protection at the data-record level to enable protection at the data-element level. It empowers privacy officers to improve the “optics” of data protection for data subjects, regulators and the news media while deploying next-generation technology solutions that deliver more effective data privacy controls and maximize data value.
An Ethical Approach
We've therefore developed an approach to dynamic data obscurity—we call it Dynamic Anonymity—that dynamically segments and applies re-assignable dynamic de-identifiers (DDIDs) to data stream elements at various stages. This significantly reduces the risk of personally identifying information being unintentionally shared in transit, in use or at rest. Meanwhile, trusted parties, in accordance with permissions established by or on behalf of data subjects, maintain the ability to "re-stitch" the data stream elements together. In addition to protecting anonymity, Anonos Dynamic Anonymity also allows data end-users to selectively filter only those elements that they find useful, thereby reducing noise and increasing the utility of the data stream.
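The mechanism described above—re-assignable DDIDs applied per element, with re-stitching reserved to a trusted party—can be sketched roughly as follows. This is an illustrative toy, not Anonos's actual implementation; the class name, the DDID format, and the in-memory mapping are all assumptions made for the example.

```python
import secrets

class DynamicAnonymizer:
    """Toy sketch of Dynamic Anonymity: each data element occurrence
    receives a fresh, re-assignable dynamic de-identifier (DDID).
    Only the trusted party holds the mapping needed to re-stitch."""

    def __init__(self):
        # Mapping held solely by the trusted party, never shared
        # with downstream data users.
        self._ddid_to_value = {}

    def obscure(self, value):
        """Issue a fresh DDID for this element occurrence. Because a
        new DDID is assigned each time, identical values in the stream
        cannot be linked by an outside observer."""
        ddid = secrets.token_hex(8)
        self._ddid_to_value[ddid] = value
        return ddid

    def restitch(self, ddids):
        """Trusted-party operation: recover the original elements for
        the given DDIDs, subject to data-subject permissions."""
        return [self._ddid_to_value[d] for d in ddids]


# Usage: the same underlying value yields different DDIDs each time,
# so obscured records cannot be correlated without the trusted party.
anonymizer = DynamicAnonymizer()
record = ["jane@example.com", "jane@example.com", "555-0100"]
obscured = [anonymizer.obscure(v) for v in record]
assert obscured[0] != obscured[1]               # no static linkability
assert anonymizer.restitch(obscured) == record  # authorized re-stitching
```

Contrast this with a static scheme, where both occurrences of the same value would map to one unchanging token and thus remain linkable across releases.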
Vibrant and growing areas of economic activity—the “trust economy,” life sciences research, personalized medicine/education, the Internet of Things, personalization of goods and services—are based on individuals trusting that their data is private, protected and used only for authorized purposes that bring them maximum value. This trust cannot be maintained using static anonymity. We must embrace new approaches like dynamic data obscurity to both maintain and earn trust and more effectively serve businesses, researchers, healthcare providers and anyone who relies on the integrity of data.
This article originally appeared in IAPP. All trademarks are the property of, and all rights are reserved by, their respective owners.
How GDPR-compliant pseudonymization requirements have evolved from prior standards: