Prior to the GDPR, relying on anonymization was generally considered necessary to enable many forms of data innovation, including the lawful sharing and multi-party analysis of data for commercial and societal benefit. However, the likelihood that data processed for AI will satisfy the requirements for anonymization is remote due to today’s realities of:
- The ease with which data used for AI processing can be combined with readily available data, resulting in the reidentification of individuals, as in the Scania case;3
- The inability of organizational or contractual measures by themselves to prevent a priori the misuse of the data, and the critical importance of complementary technical safeguards;
- The increasing popularity of data processing activities involving innumerable parties;4
- The increasing prevalence of data breaches and cybercrime exposing data to unintended recipients.
However, this does not mean that the data is unavailable for innovation, including lawful sharing and multi-party data analysis for commercial and societal benefit. To enable this innovation, GDPR Article 25 requires data controllers to leverage the state of the art by complying with new “data protection by design and by default” obligations. Specifically, controllers must “implement appropriate technical and organizational measures, such as pseudonymization, which are designed to implement data-protection principles, such as data minimization, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.”
The inclusion - for the first time in EU law - of a statutory definition for “pseudonymization” in the GDPR shows that parties can leverage state-of-the-art technical capabilities to enable data-driven innovation that balances fundamental rights - while staying within (versus outside) the scope of the protection of personal data as defined by the GDPR. Prior to the statutory redefinition, the term “pseudonymization” was often used to describe the result of the failed anonymization of personal data. In contrast, to be entitled to the specific statutory benefits attributable to “pseudonymization” under the GDPR,5 parties must now show that (a) “the personal data can no longer be attributed to a specific data subject without the use of additional information,” and (b) “such additional information is kept separately and is subject to technical and organizational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.”6
Anonos recently filed an Intervention with the CJEU in support of the EDPS' appeal of the judgment of the General Court in Case T-557/20, SRB v EDPS (Case C-413/23 P), in the context of the requirements for and benefits of statutorily compliant anonymization and pseudonymization. Anonos believes this filing was necessary because the growing popularity of AI dramatically increases the risk of irreparable harm from processing unsecured personal data. AI involves distributed, multi-party processing of massive amounts of data containing sensitive, personal, and proprietary information on a global scale. The associated risks cannot be mitigated using traditional privacy and security techniques, which are effective only within constrained perimeters, a limitation inconsistent with AI’s distributed architecture. Instead, mitigation requires technologically enforced controls that travel with the data to prevent misuse before it occurs.
The Anonos Intervention highlights that GDPR-compliant, technologically enforced controls can prevent data misuse a priori in AI and other processing.
See the Appendix to the Anonos Intervention for details on the requirements for and benefits of statutory pseudonymization under the GDPR, which enables ample opportunities for lawful data innovation and processing within the scope of the statute.
For more information, please email us at LearnMore@anonos.com.