2022: THE YEAR OF GDPR PSEUDONYMISATION
Recent EDPS Webinar and the EU Commission Equivalency Decision for South Korea Highlight that the Days of ‘Unprotected Processing by Default’ are Numbered in 2022
I. EDPS Webinar on Pseudonymisation
A. Key Quotes
As with most regulatory bodies, speeches by the leaders of these organisations often signal areas of focus and emphasis.
The following quotes from the “Pseudonymous Data: Processing Personal Data While Mitigating Risks” webinar hosted by the European Data Protection Supervisor (EDPS) on 9 December 2021 highlight why pseudonymisation is the most misunderstood and underutilised means to achieve simultaneous data enablement and protection under Schrems II and the GDPR generally.
- “Our legal data protection rules in the European Union and particularly GDPR itself considered pseudonymisation as a sort of model of all risk mitigating measures. This comes only after the first of all obligations; if you do not need the personal data, do not process them. But if you need the personal data, then GDPR refers to pseudonymisation when it takes exemplifying the appropriate safeguards in many circumstances.” 
Wojciech Wiewiórowski, European Data Protection Supervisor
- “The first rule in data protection is: if you do not need personal data, do not collect personal data. The second rule in data protection is: if you really need personal data, then start by pseudonymising this personal data.” 
Thomas Zerdick, Head of Technology and Privacy – EDPS
- “After the Schrems II ruling, the debate on pseudonymisation has gained momentum as many consider it as the most viable “supplementary measure” to transfer personal data to third countries not offering an equivalent level of protection.” 
Thomas Zerdick, Head of Technology and Privacy - EDPS
- “The idea is that as soon as you have personal data, you should pseudonymise them if it's possible. This is the spirit of recital 29 of the GDPR, which considers it a basic measure, even within the same organisation when utility is maintained. Not sure this is done on a regular basis. Recital 78 adds that it should be done as soon as possible.” 
Wojciech Wiewiórowski, European Data Protection Supervisor
- “Data Controllers may not be aware that the GDPR encourages the use of pseudonymisation. There are more than 16 occurrences of pseudonymisation in the GDPR. And they are not aware that pseudonymisation helps to reduce the risks associated with processing data and can help relax some GDPR obligations.” 
Monir Azraoui, Technology Expert (CNIL)
B. About the EDPS
The European Data Protection Supervisor (EDPS) is the EU’s independent Data Protection Authority responsible for monitoring and ensuring that European institutions and bodies respect the right to privacy and data protection when they process personal data and develop new policies. The EDPS, an increasingly influential authority, is headed by a Supervisor and supported by experienced lawyers, IT specialists and administrators.
In addition, the EDPS is a full member and serves as the Secretariat of the European Data Protection Board (EDPB). The EDPB comprises Data Protection Authority representatives for each EU member state. As the EDPB Secretariat, the EDPS provides administrative, technical, and other support working under the Chair of the EDPB. As such, the two organisations coordinate closely.
As noted above, speeches by leaders of regulatory bodies are often used to signal areas of focus and emphasis. With that in mind, note that the EDPB is actively preparing (which means EDPS staff are working with EU Data Protection Authority representatives to prepare) updated guidance on the use of Legitimate Interests as a lawful basis for processing personal data (under which pseudonymisation plays a key role) and on Anonymisation and Pseudonymisation, as defined in the GDPR, as data protection by design and by default techniques.
II. The EU Commission Equivalency Decision for South Korea
A. Key Pseudonymisation Highlights
It is interesting to note that in the context of the 17 December 2021 decision by the European Commission (“Commission”) that South Korea’s Personal Information Protection Act (as updated, “PIPA”) ensures an adequate level of protection for EU personal data processed by controllers and processors, the Commission highlighted that:
- Instead of identifying pseudonymisation as a possible safeguard, PIPA imposes it as a non-elective precondition for certain processing activities pertaining to statistics, scientific research and archiving in the public interest (such as to be able to process the data without consent, perform further processing or to combine different datasets).
- PIPA prohibits the processing of pseudonymised information with the purpose of identifying a certain individual. In fact, if information that could identify an individual would be generated while processing pseudonymised information, the controller must immediately suspend the processing and destroy such information. “Failure to comply with these provisions is subject to administrative fines and constitutes a criminal offence. This means that, even in those situations where it would be practically possible to re-identify the individual, such re-identification is legally prohibited.”
- Under PIPA, parties have an affirmative obligation to “endeavour to process personal data in anonymity or in pseudonymised form, if possible.”
- Parties are not required to notify individuals under PIPA when a data breach involves pseudonymised information processed for the purposes of statistics, scientific research or archiving in the public interest.
- The obligation to destroy personal data upon achieving the purpose of processing or upon expiry of the retention period (whichever is earlier), does not arise under PIPA when pseudonymised data is being processed for statistical purposes, scientific research or archiving in the public interest.
B. About the EU Commission
The European Commission (Commission) is the executive branch of the European Union, responsible for proposing legislation, enforcing EU laws, and directing the union's administrative operations.
The Commission is responsible for planning, preparing and proposing new European laws. It has the right to do this on its own initiative. The laws it proposes must defend the interests of the Union and its citizens. The Commission submits a legislative proposal to the European Parliament and the Council of the European Union, who must agree on the text for it to become EU law.
The Commission is responsible for monitoring whether EU laws are applied correctly and on time. In this role, the Commission is referred to as the "guardian of the treaties". 
III. DON’T MISS WEBINAR - 2022: THE YEAR OF GDPR PSEUDONYMISATION & SCHREMS II COMPLIANCE
27 JANUARY at 4PM CET / 10AM EST
Register at www.pseudonymisation.com/webinar
Gary LaFever (Anonos) will discuss the requirements and benefits of GDPR Pseudonymisation as recently highlighted by the EDPS as the most promising Schrems II-compliant Supplementary Measure.
Prof. Dr. Michael Schmidl (Baker McKenzie) will cover requirements for Schrems II-compliant data transfer impact assessments, new model clauses and will provide an overview of EC Decisions 2021/914 and 2021/915 and EDPB Recommendation 01/2020 and Guidelines 05/2021, with a special focus on German requirements.
Submit questions in advance to Learn@Pseudonymisation.com
IV. Anonos Data Embassy GDPR Pseudonymisation Software
Privacy and trust in data are now critical differentiators between technology vendors, particularly among cloud and Software-as-a-Service (SaaS) vendors. The lack of technologically enforced auditable data protection controls creates strong distrust among countries, courts, regulators, companies, and consumers. This distrust resulted in near-billion-euro/dollar fines and multi-billion-euro/dollar class-action lawsuits in 2021. All global data supply chain stakeholders suffer when data is not protected sufficiently to ensure lawful processing and trust - this cannot be accomplished by encryption alone.
No political solution will solve this trust deficit - the answer is technologically enforcing trust controls that protect data during computation for analytics, AI and Machine Learning (ML).
Anonos Data Embassy® software embeds GDPR pseudonymisation-enabled trust controls that travel with the data to Maximize Its Value Lawfully.
Anonos Data Embassy software produces state-of-the-art, GDPR-compliant, pseudonymisation-enabled Variant Twins® (i.e., use case-specific, privacy-respectful versions of “digital twin” data) that protect both direct identifiers (e.g., passport numbers, credit card numbers) and indirect identifiers (e.g., date of birth, zip code, gender) to enable lawful complex processing of EU personal data. In addition to the benefits described above, Variant Twins deliver 100% accuracy and fidelity compared to the results of processing data in the clear, and up to 16X faster speed to insight by reducing time for privacy reviews by 25% and increasing the number of projects approved by 4X.
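To illustrate the distinction between protecting direct and indirect identifiers, here is a minimal, hypothetical sketch: direct identifiers are replaced with keyed pseudonyms, while indirect identifiers are generalised. The function names and transformations below are illustrative assumptions for this article only, not Anonos Data Embassy's actual implementation.

```python
import hashlib
import secrets

# Illustrative sketch only - NOT the Anonos Data Embassy implementation.
SECRET_KEY = secrets.token_bytes(32)  # held separately from the output data

def pseudonymise(value: str, key: bytes = SECRET_KEY) -> str:
    """Replace a direct identifier (e.g., a passport number) with a
    keyed pseudonym; without the key, the original cannot be derived."""
    return hashlib.blake2b(value.encode(), key=key, digest_size=8).hexdigest()

def generalise_dob(dob: str) -> str:
    """Generalise an indirect identifier: coarsen YYYY-MM-DD to the year."""
    return dob[:4]

record = {"passport": "X1234567", "dob": "1985-07-21", "zip": "75001"}
variant = {
    "passport": pseudonymise(record["passport"]),  # direct identifier replaced
    "dob": generalise_dob(record["dob"]),          # indirect identifier coarsened
    "zip": record["zip"][:2] + "xxx",              # postcode generalised
}
```

The point of the sketch is that record-level structure survives (each row still has a stable, purpose-specific token), so complex processing remains possible while the identifiers themselves are protected.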
With Anonos Variant Twins, companies can address privacy challenges while expanding lawful data use, sharing, and combining for all stakeholders in global data supply chains. Variant Twins create unmatched competitive benefits, preserving data processing and data accuracy and exponentially increasing Analytics, AI & ML opportunities. Anonos has developed a unique, patented solution that restores trust to global data ecosystems by embedding intrinsic, centralised data privacy/protection controls in the data, no matter where it travels, while preserving 100% data accuracy and utility – including during decentralised use.
Comparison of GDPR Pseudonymisation to Other Privacy Enhancing Technologies
The chart above compares a wide range of privacy enhancing technologies (PETs). After listing them, it classifies each as using either cryptographic or non-cryptographic techniques (or, in the case of pseudonymisation, both).
The balance of the chart to the right is a “knock-out” analysis of the technologies using a series of evaluation and elimination criteria. In this analysis, once a PET has been eliminated, it is no longer evaluated against subsequent criteria.
The first criterion, in column (1), is protection of data during computation for uses such as analytics, AI and machine learning. This is sometimes called protection in use, in contrast to protection of data at rest and in transit. Encryption is the de facto standard for protecting data at rest and in transit, but it succeeds precisely because it renders data unusable for computation, and therefore it does not protect data in use.
The second criterion, in column (2), considers the ability of the PET to deliver detailed record-level results. Differential Privacy and cohorts/clusters by definition produce aggregate rather than record-level results and thus fall out at this point.
The third criterion, in column (3), looks at how well a PET delivers effective protection while preserving utility comparable to processing cleartext. Each of the PETs receiving a “No” evaluation suffers from an inability to resolve the fundamental trade-off between protection and utility: greater protection invariably results in a loss of utility, and preserving utility results in weaker protection, regardless of whether the approach adds noise, masks or generalises values, or synthesises artificial data.
The fourth criterion, in column (4), involves the ability of a PET to efficiently and effectively support AI and machine learning. Multi-party computation fails in this regard as a result of the massive bandwidth required to coordinate calculations between participating nodes. Similarly, homomorphic encryption is not computationally feasible at the time scales and data volumes these analytical techniques require. Future advances in computing power will not close the gap: these approaches will remain orders of magnitude slower than other PETs, which benefit equally from the same gains in computational power and speed.
The fifth criterion, in column (5), looks at the ability of the remaining options to enable data-sharing and multi-cloud use cases. Confidential Computing via a Trusted Execution Environment, which has fared well up to this point, now also drops out: by design, the trusted execution environments used to achieve confidential computing are impenetrable silos, antithetical to data sharing.
At this point, the remaining PETs are GDPR Pseudonymisation and Anonos Data Embassy Variant Twins software, which not coincidentally leverages GDPR Pseudonymisation. Among PETs, only GDPR Pseudonymisation simultaneously:
● Protects data during computation for analytics
● Provides accurate (vs cleartext) record-level results
● Reconciles the trade-off between protection and utility
● Supports AI and machine learning
● Supports data sharing and multi-cloud use cases
The sixth and final criterion, in column (6), involves the ability to deliver scalable digital enforcement of enterprise-level data protection policies and controlled relinking, which knocks out GDPR Pseudonymisation as a PET standing on its own. Anonos Data Embassy Variant Twins succeed as the remaining PET by combining GDPR Pseudonymisation with other PETs that do not distort data or add noise (e.g., masking, generalisation, tokenisation, and k-anonymity) and by leveraging patented technology to enable:
- The use of different pseudonyms at different times for different purposes (i.e., dynamism);
- Controlled-Relinkability that allows relinking from protected subsets of data to the entire original source data sets under controlled conditions; and
- Digital enforcement of Privacy Policies.
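The three capabilities above can be sketched in a toy form. The class below is a hypothetical illustration (not Anonos's patented technology): it issues a fresh pseudonym per identifier and purpose (dynamism), keeps the mapping in a protected lookup table, and only resolves a pseudonym back to its source for an authorised party (controlled relinkability under a simple policy check).

```python
import secrets

class DynamicPseudonymiser:
    """Toy sketch of dynamic pseudonymisation - illustrative only."""

    def __init__(self):
        # token -> (original identifier, purpose); kept under the
        # controller's exclusive control, separate from shared data.
        self._lookup = {}

    def pseudonym(self, identifier: str, purpose: str) -> str:
        """Issue a fresh random pseudonym per use (dynamism): the same
        identifier gets different tokens for different purposes/times."""
        token = secrets.token_hex(8)
        self._lookup[token] = (identifier, purpose)
        return token

    def relink(self, token: str, authorised: bool) -> str:
        """Controlled relinkability: only an authorised party may resolve
        a token back to the original identifier (a stand-in for real
        policy enforcement)."""
        if not authorised:
            raise PermissionError("relinking not permitted for this party")
        return self._lookup[token][0]

p = DynamicPseudonymiser()
t1 = p.pseudonym("alice@example.com", purpose="analytics")
t2 = p.pseudonym("alice@example.com", purpose="research")
assert t1 != t2  # different pseudonyms for different purposes
assert p.relink(t1, authorised=True) == "alice@example.com"
```

Because each purpose sees a different token, datasets released for different projects cannot be trivially joined against each other, yet the key holder can still relink any of them to the source data under controlled conditions.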
If you have questions, email us at Learn@Pseudonymisation.com
This article originally appeared in LinkedIn. All trademarks are the property of their respective owners. All rights reserved by the respective owners.