Taking Data Security from Analogue to Digital - EU regulators now encourage companies to adopt a new “digital” alternative – namely “Pseudonymisation”
Despite seemingly endless headlines about data leaks, financial hacks, and nine-figure fines levied against major companies, countless industry professionals continue to approach the capture, storage, and processing of consumer information the same way they have for decades. All the while, national insurance numbers, banking records, and password and login credentials – all core components of our online identities – remain vulnerable across corporate networks, easily snatched by well-equipped cyber thieves, leaked through negligence, and resold openly on the digital black market.
The General Data Protection Regulation (GDPR), a landmark privacy law that went into effect in 2018 across the European Union, establishes standards for “data protection by design and default” and other state-of-the-art approaches that more effectively balance data use and protection. As these concepts enter the mainstream, they force large-scale data-holders to reckon with their antiquated, “analogue” contract- and policy-based approaches to protecting data.
While once satisfied with easily decryptable software and hard-to-enforce contracts as safeguards for personal data across data streams, EU regulators now encourage companies to adopt a new “digital” alternative – namely “Pseudonymisation” as now defined under the GDPR.
Outlined explicitly within the GDPR (Article 4(5)), Pseudonymisation requires that companies operating within the European Union ensure that personal data cannot be attributed to a specific consumer without the use of separately kept additional information (to prevent unauthorized re-identification). Individual datasets that include consumer contact details or demographic information, for example, must be kept distinct from one another (i.e. dynamic functional separation), and can only be relinked by authorized personnel for explicitly demarcated purposes.
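The principle above – working data carries no direct identifiers, and the information needed to re-link it is held separately under access control – can be illustrated with a minimal sketch. The key handling, function names, and record layout here are illustrative assumptions, not a prescribed GDPR mechanism:

```python
import hashlib
import hmac
import secrets

# The re-identification key is the "separately kept additional information":
# it must live in a distinct, access-controlled store, away from the data.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymise(identifier: str, key: bytes) -> str:
    """Derive a keyed pseudonym; without the separately held key,
    the pseudonym cannot be attributed back to the individual."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()

# The working dataset holds only pseudonyms and attributes...
record = {
    "user": pseudonymise("alice@example.com", PSEUDONYM_KEY),
    "purchases": 3,
}

# ...while the mapping needed for authorised re-linking is stored
# elsewhere, available only to demarcated personnel and purposes.
relink_table = {record["user"]: "alice@example.com"}
```

Functional separation comes from where the key and the re-link table are stored, not from the hashing itself: an attacker who obtains only the working dataset has nothing to attribute to a person.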
Traditionally, companies have been fairly effective at keeping personal data protected by encrypting it while it is in storage, and some have integrated software that protects information while it is being transmitted from one location to another (i.e. end-to-end encryption). However, personal data rarely receives this same level of protection while it is actually in use by company algorithms, cloud storage applications, or third-party vendors, leaving identifiable user information dangerously susceptible to interception, misuse and abuse.
Anonymization, though largely undefined, has historically been the principal approach to protecting sensitive, restricted personal data when in use. However, the effectiveness of anonymization techniques has been severely compromised by the sheer volume and variety of data now available to combine and compare against datasets that are allegedly protected using traditional anonymization technology.
Recent well-publicized criticism by data privacy experts highlights that anonymization is no longer effective at preventing unauthorized re-identification of individuals in today’s “Big Data” world where frequent data sharing and combination is commonplace.
Simply put, encrypting and anonymizing personal data is no longer enough under GDPR.
Leveraging newly defined GDPR-compliant Pseudonymisation serves as the only foolproof measure that organizations reliant on data processing can take to secure sensitive consumer information while it is in use.
The technology also dynamically alters encryption keys (i.e. dynamic tokenization, in stark contrast to pre-GDPR static tokenization/Pseudonymisation), circumventing deciphering strategies frequently deployed by adept hackers. As such, cybersecurity protocols that integrate GDPR-compliant Pseudonymisation keep data safe while in use while simultaneously combatting the “Mosaic Effect” – the process by which potential intruders can re-identify consumers’ information by matching exposed data against other sources.
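The contrast between static and dynamic tokenization can be sketched as follows. This is a simplified illustration under assumed names and storage – real token vaults add key management, auditing, and access control – but it shows why dynamic tokens resist the “Mosaic Effect”:

```python
import secrets

class DynamicTokenizer:
    """Illustrative sketch: issue a fresh random token each time a value
    is used, so tokens exposed in different datasets cannot be correlated
    with one another (unlike a static scheme, where one value always maps
    to the same token). The vault is assumed to be stored separately."""

    def __init__(self):
        # token -> original value; access-controlled, kept apart from data
        self._vault = {}

    def tokenize(self, value: str) -> str:
        token = secrets.token_hex(16)  # new token per use, never reused
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Authorised re-linking only, for demarcated purposes
        return self._vault[token]

tok = DynamicTokenizer()
t1 = tok.tokenize("alice@example.com")
t2 = tok.tokenize("alice@example.com")
# t1 != t2: the same person appears under unlinkable tokens in each
# dataset, so mix-and-match attacks across exposed datasets fail.
```

With static tokenization, an attacker who learns one value-to-token mapping can find that person in every dataset; here each occurrence is independent, and re-identification requires access to the separately held vault.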
Unfortunately, striking the right balance between maximizing the value of data and respecting basic consumer liberties has proven challenging for many companies. Earlier this year, the UK Information Commissioner’s Office (ICO) penalized two global Fortune 500 companies for failing to undertake “appropriate steps to protect fundamental privacy rights,” imposing fines that, when combined, totaled nearly £300 million.
These enforcement actions, coupled with additional legal and regulatory decisions throughout Europe and North America, have highlighted the immediate need for companies to upgrade their networks, rendering entire industries at risk of noncompliance.
At the same time, calls for more ethical, ‘fair trade’ principles in data processing are growing, with a drumbeat of activists and scholars steadily demanding stronger corporate and intergovernmental oversight of consumer information.
Many have expressed deep dissatisfaction with how social media and biometric/facial-recognition data has been manipulated and lackadaisically distributed to untrustworthy API developers and data supply chain partners, and are challenging businesses to reduce the amount of identifiable records they collect and store to an absolute minimum.
Needless to say, the Facebook-Cambridge Analytica scandal would not have been so devastating had the online conglomerate taken adequate steps to properly Pseudonymise user data with dynamic cryptographic technology.
As the most comprehensive global privacy law to date, the GDPR has broken new ground for companies handling sensitive information to reexamine their old-fashioned “analogue” data protection measures, paving the way towards a new era of “digital” data privacy protections like Pseudonymisation. Other countries around the world are also starting to take notice. In Israel, for instance, Judge Michal Agmon-Gonen of the Tel Aviv District Court recently ruled that the era of “Big Data” processing has rendered older anonymization practices obsolete, concluding that poorly secured systems too easily allow “information such as location [to] be cross-referenced with other data” – thereby revealing “many details about a person” that “infringe upon his privacy.”
Despite trials and tribulations in heightening corporate privacy practices, data holders operating across borders can take solace in knowing that the GDPR harmonizes Europe’s patchwork of intersecting information security regulations towards one common standard – “data protection by design and default.”
Sensitive personal records, when held to a higher standard of care, let consumers rest easy knowing their data, if stolen, will not be traceable or exploitable. Pseudonymisation, dynamic functional separation, and fair-trade data concepts serve as the next-generation paradigm for information collection and processing – enabling a “digital” future that can overcome the shortcomings of our “analogue” past.
This article originally appeared in Info Security.