European Parliament Highlights the Need for More Effective Data Protection to Comply with GDPR and Schrems II Requirements
The EU Parliament highlights that organisations can no longer rely on simple "data housekeeping" practices alone; instead, they must shift towards advanced data protection approaches — such as GDPR Pseudonymisation — to control Big Data use.
The methods and purposes of data processing have changed dramatically over the past several decades: ongoing technology developments make it easier to switch seamlessly from primary purpose data collection and processing to advanced secondary analytics, complex artificial intelligence (AI) and machine learning (ML). As Big Data continues to advance using these technologies, data protection techniques must also evolve to keep pace. The General Data Protection Regulation (GDPR) was a major step towards ensuring that organisations comply with required data protection obligations to respect data subjects' fundamental rights. However, widespread non-compliance remains an issue. The European Parliament's Committee on Civil Liberties, Justice and Home Affairs (LIBE) recently released a Motion for a Resolution that considers the current privacy law landscape now that the GDPR has been in force for over two years. A review of the Motion for a Resolution highlights that the most popular Privacy Enhancing Technologies (PETs) do not provide adequate protection for popular Big Data use cases.
Beyond Simple Data Protection
Many companies still focus on simple data protection for primary data collection and processing, and nothing more. Organisations are aware of what data they are collecting, where they are storing it, and what direct identifiers are in their data. They are also conducting data inventory and data flow assessments, all of which are good "data housekeeping" techniques.
However, none of these approaches addresses the real issue: "secondary processing" via analytics, AI, and ML is becoming more pervasive each day. While contracts and consent may cover most primary data collection and processing practices, secondary processing often falls outside the reach of these legal mechanisms. In most cases, and certainly when it comes to AI and ML, a different lawful basis must be satisfied for lawful secondary processing, known as legitimate interests processing. These secondary processes and associated global data transfers were the focus of the highly impactful Court of Justice of the European Union (CJEU) ruling known as Schrems II. This case puts additional pressure on organisations to consider what data they are processing and what technical and organisational measures they have put in place to support lawful secondary processing and data transfers.
Data Protection by Design and by Default
Critically, LIBE noted the importance of data protection by design and by default for all processing, which cannot be satisfied by "data housekeeping" techniques alone. Data protection by design and by default requires technical and organisational measures to protect data both in the EU (as required by the GDPR) and outside of the EU (as confirmed by Schrems II).
LIBE highlights that legitimate interests processing is crucial for further Big Data development. The secondary processing that many organisations are now undertaking is widespread: analytics, AI, and ML fuel sharing and combining data with multiple other parties. To do this lawfully, legitimate interests processing requires a "balancing of interests" test to be satisfied, ensuring that the processing is done in a proportionate manner and protects the rights of data subjects. Implementing appropriate and robust data protection methods can help tip the balance of this test in favour of the data controller. Still, without technical and organisational measures, the lawful basis for this processing is unlikely to be satisfied.
Is Anonymisation Possible?
One aspect of the LIBE's Motion for a Resolution that may be somewhat out of step with the realities of modern Big Data processing is its mention of anonymisation. In today's Big Data world, anonymisation is in many respects impossible to achieve. Datasets can be combined from numerous sources, rendering attempts at anonymisation ineffective. The more datasets that exist, the higher the likelihood they can be combined to de-anonymise any given person. While LIBE recommends that the European Data Protection Board (EDPB) create guidelines and "a list of unambiguous criteria to achieve anonymisation", many do not believe anonymisation is even possible in today's Big Data world.
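The re-identification risk described above can be illustrated with a minimal, hypothetical sketch: a dataset with direct identifiers removed but quasi-identifiers retained (postcode, birth date, sex) is joined against a separate public dataset containing names and the same quasi-identifiers. All names, values, and datasets below are invented for illustration; real linkage attacks work the same way at much larger scale.

```python
# Hypothetical linkage (re-identification) attack: an "anonymised" dataset
# retains quasi-identifiers that also appear in a public dataset, so the
# two can be joined to recover identities. All data below is invented.

anonymised_health = [  # direct identifiers removed, quasi-identifiers kept
    {"zip": "02138", "birth": "1961-07-31", "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth": "1975-03-02", "sex": "M", "diagnosis": "flu"},
]

public_register = [  # e.g. a voter roll that includes names
    {"name": "Alice Example", "zip": "02138", "birth": "1961-07-31", "sex": "F"},
    {"name": "Bob Example", "zip": "02139", "birth": "1975-03-02", "sex": "M"},
]

def reidentify(anon, public):
    """Join the two datasets on their shared quasi-identifiers."""
    index = {(p["zip"], p["birth"], p["sex"]): p["name"] for p in public}
    return [
        {"name": index[(a["zip"], a["birth"], a["sex"])], **a}
        for a in anon
        if (a["zip"], a["birth"], a["sex"]) in index
    ]

matches = reidentify(anonymised_health, public_register)
# Every "anonymised" record is now linked back to a named individual.
```

The point of the sketch is that removing names alone does not anonymise: as long as quasi-identifiers survive and other datasets share them, the join succeeds.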
These steps reflect the evolution from prior data processing practices involving single or small, controllable groups of entities, to the current widespread "Big Data" world involving decentralised processing by numerous parties, disparate datasets, users, use cases, and data transfers. Big Data secondary processing, sharing, and combining actually invalidates the architectural prerequisites relied upon by traditional PETs. These traditional PETs, developed before modern Big Data practices became commonplace, rely on controlling the data, users, use cases, and transfers to limit the likelihood of unauthorised re-identification occurring — goals that are not realistically achievable in today's Big Data world.
New technical measures are necessary, and the clue comes from the GDPR itself: Pseudonymisation, subject to the GDPR's heightened requirements, is explicitly mentioned in the regulation as a means to implement data protection by design and by default. GDPR Pseudonymisation is also recommended by the EDPB as a means to transfer data in compliance with Schrems II. Schrems II raised the issue of lower protection standards in third countries such as the US, which led to the invalidation of the Privacy Shield treaty (following in the footsteps of the Safe Harbour treaty's invalidation in the predecessor Schrems I ruling by the CJEU). Given Schrems II's requirements for global data transfers, techniques such as GDPR-compliant Pseudonymisation can help to protect data from surveillance in other countries; political resolutions alone will not resolve the underlying conflicts.
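One common pseudonymisation technique (a minimal sketch, not the specific method endorsed by the Parliament or the EDPB) replaces direct identifiers with keyed tokens, where the secret key is the "additional information" that GDPR Article 4(5) requires to be kept separately and subject to technical and organisational measures. The key name and record fields below are hypothetical.

```python
import hashlib
import hmac

# Minimal sketch of keyed pseudonymisation: direct identifiers are
# replaced with deterministic HMAC tokens. The secret key is the
# "additional information" (GDPR Art. 4(5)) that must be held
# separately under technical and organisational controls, so only
# authorised parties holding the key can relink tokens to identities.

SECRET_KEY = b"held-separately-by-the-controller"  # hypothetical key material

def pseudonymise(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Replace a direct identifier with a deterministic keyed token."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "data.subject@example.com", "purchase": "book"}
protected = {
    "email_token": pseudonymise(record["email"]),  # token, not the identifier
    "purchase": record["purchase"],                # analytic value preserved
}
```

Because the same identifier always maps to the same token under a given key, analytics and ML can still join and aggregate records, while re-identification requires access to the separately held key; an unkeyed hash would not suffice, since anyone could recompute it from a candidate identifier.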
This increasing shift towards Big Data processing, and the accompanying need for data protection to evolve, highlights the importance of technical and organisational controls such as GDPR Pseudonymisation (referred to as "Supplementary Measures" in Schrems II) that ensure data is processed without revealing identity except under authorised conditions. LIBE highlights that organisations can no longer rely on "data housekeeping" practices alone. Instead, they must shift towards more effective data protection techniques to control Big Data.
See the final version of 2020/2717(RSP) Resolution on the Commission Evaluation Report on the Implementation of the General Data Protection Regulation Two Years After its Application at https://www.europarl.europa.eu/doceo/document/B-9-2021-0211_EN.html
See Judgment of the Court of Justice of 16 July 2020, Data Protection Commissioner v Facebook Ireland Limited and Maximillian Schrems, C-311/18 at https://curia.europa.eu/juris/document/document.jsf?text=&docid=228677&pageIndex=0&doclang=en
See Supra, Note 1, at paragraph 25, “the supervisory authorities to evaluate the implementation of Article 25 on data protection by design and by default, in particular with a view to ensuring the technical and operational measures needed to implement the principles of data minimisation and purpose limitation, and to determine the effect this provision has had on manufacturers of processing technologies...” This speaks to the importance of effective technical and organisational measures complying with Article 25 data protection by design and by default requirements (which specifically and uniquely highlight Pseudonymisation) to enforce the purpose limitation and data minimisation principles. Doing so enables compliance with the requirements for Article 6(1)(f) legitimate interests processing (including the balancing of interests test, to show that risks to data subjects have been sufficiently mitigated for the interests of the data controller to prevail), which in turn enables further processing under Article 6(4) (which also specifically includes Pseudonymisation in 6(4)(e)) to go beyond the “bounds” of processing authorised within the purposefully narrow (and protective) constraints of Article 6(1)(a) Consent and 6(1)(b) Contract.
See Supra, Note 1, at paragraph 26, “the EDPB to issue guidelines that classify different legitimate use cases of profiling according to their risks for the rights and freedoms of data subjects, along with recommendations for appropriate technical and organisational measures, and with a clear delineation of illegal-use cases…”
See Supra, Note 1, at paragraph 26, “the EDPB to review WP29 05/2014 of 10 April 2014 on Anonymisation Techniques and to establish a list of unambiguous criteria to achieve anonymisation…” However, the European Data Protection Supervisor (EDPS) and the Spanish Agencia Española de Protección de Datos (AEPD) jointly held that “anonymisation procedures must ensure that not even the data controller is capable of re-identifying the data holders in an anonymised file.” See https://edps.europa.eu/sites/edp/files/publication/19-10-30_aepd-edps_paper_hash_final_en.pdf. For these reasons, I believe the approach of Data Protection by Design and by Default leveraging Pseudonymisation is required to be effective in protecting data subjects’ fundamental rights while enabling ongoing innovative uses of data.
See Anonymising Personal Data ‘Not Enough to Protect Privacy’, Shows New Study at https://www.imperial.ac.uk/news/192112/anonymising-personal-data-enough-protect-privacy/
See GDPR Article 25(1), “the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation…” (emphasis added).
See paragraph 80 of EDPB Recommendations 01/2020 on Measures that Supplement Transfer Tools to Ensure Compliance with the EU Level of Protection of Personal Data at https://edpb.europa.eu/sites/edpb/files/consultation/edpb_recommendations_202001_supplementarymeasurestransferstools_en.pdf
See Supra, Note 1, at paragraph 29, “to date, no single mechanism guaranteeing the legal transfer of commercial personal data between the EU and the US has stood a legal challenge at the Court of Justice of the European Union (CJEU),” and paragraph 33, “mass surveillance programmes encompassing bulk data collection prevent adequacy findings; urges the Commission to apply the conclusions of the CJEU in the cases Schrems I [invalidation of the Safe Harbour, the predecessor to the Privacy Shield treaty], II [which invalidated the Privacy Shield] and Privacy International (2020)[which held in joined decisions that French, UK and Belgian collection and retention of telecommunications data for future surveillance purposes was unlawful – thereby extending restrictions on surveillance to EU member states] to all reviews of adequacy decisions as well as ongoing and future negotiations…”
This article originally appeared on LinkedIn. All trademarks are the property of their respective owners. All rights reserved by the respective owners.