There are two general European approaches to “anonymisation” for removing data from the scope of applicable regulation. The first focuses on the risk of re-identification primarily in the hands of the intended recipient(s) - a “localised” approach. The second looks beyond the risk of re-identification by the intended recipient(s) to include other third parties - a more “global” approach.
The localised approach to “anonymisation” is at odds with the global approach taken by EU member states, which includes the risk of re-identification by third parties who, although unintended recipients, can reasonably be anticipated. It is important to note that the difference is not whether the data can be used, but whether the data is available for use without requiring the benefit of protective provisions - which would be the case if it were “anonymous” - or available for use only if the protection requirements of the GDPR are upheld - which would be the case if it were “pseudonymous”.
The language of Recital 26 of the UK and the EU GDPR is identical. Recital 26 states that in determining identifiability...
“…account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly”. (emphasis added)1
The statutory wording indicates that it is insufficient to evaluate identifiability from the controller’s perspective alone; the assessment must also cover other parties “reasonably likely” to have access to the data and the means of re-identification. The two approaches thus come down to differences in interpreting the “reasonably likely” risk of re-identification.
In its recent call for views, Anonymisation, pseudonymisation and privacy enhancing technologies, the ICO proposes the “localised” approach, as indicated in the following statement:
"In the ICO’s view, the same information can be personal data to one organisation, but anonymous information in the hands of another organisation. Its status depends greatly on its circumstances, both from your perspective and in the context of its disclosure."
The localised approach to “anonymisation” is consistent with the ICO’s earlier position in its Code of Practice on anonymisation under the prior Data Protection Directive. Under that Code of Practice, the ICO took the position that pseudonymous data should be considered anonymised when used by a researcher without access to the key needed for re-identification.2 The UK Health Research Authority similarly opined that pseudonymised data should not be considered personal data in the possession of someone who does not hold the re-identification key if “there is no other means to identify the individuals either by the combination of the data collected or by combining the data with other information held by, or accessible to, the staff undertaking the analysis.”3
However, ongoing advances in data analysis techniques and hardware, together with the increasing availability of auxiliary data sources, make it ever more straightforward to re-link data to data subjects.4 Research repeatedly confirms that supposedly anonymous data sets can reveal the identity of individuals when the data contains dates of birth, gender and postal codes. Some scholars argue that “technology is rapidly moving towards perfect identifiability of information; datafication and advances in data analytics make everything (contain) information, and in increasingly ‘smart’ environments any information is likely to relate to a person in purpose or effect”.5
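To make this concrete, the following minimal Python sketch (using invented records and column names) shows how a data set stripped of direct identifiers can be re-identified by joining it against a public register on exactly those three quasi-identifiers:

```python
import pandas as pd

# Hypothetical "anonymised" research extract: direct identifiers removed,
# but quasi-identifiers (date of birth, gender, postcode) retained.
research = pd.DataFrame({
    "dob":       ["1965-02-14", "1990-07-01", "1982-11-30"],
    "gender":    ["F", "M", "F"],
    "postcode":  ["SW1A 1AA", "M1 2AB", "EH1 3CD"],
    "diagnosis": ["asthma", "diabetes", "hypertension"],
})

# Hypothetical public register (e.g. an electoral roll) carrying names
# alongside the same quasi-identifiers.
register = pd.DataFrame({
    "name":     ["A. Smith", "B. Jones", "C. Brown"],
    "dob":      ["1965-02-14", "1990-07-01", "1982-11-30"],
    "gender":   ["F", "M", "F"],
    "postcode": ["SW1A 1AA", "M1 2AB", "EH1 3CD"],
})

# A simple join on the quasi-identifiers re-attaches names to diagnoses,
# showing that the extract was only ever pseudonymous in practice.
reidentified = research.merge(register, on=["dob", "gender", "postcode"])
print(reidentified[["name", "diagnosis"]])
```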
There are likely to be situations where an organisation believes it has adequately protected data using the localised approach to anonymisation advocated by the ICO, so that the data falls outside the scope of the UK GDPR. However, as highlighted below, the broader global approach adopted by EU supervisory authorities would lead to a different result with respect to EU personal data under the EU GDPR. An organisation that processes EU personal data using the localised approach to anonymisation advocated by the ICO may therefore unintentionally strip EU personal data of the GDPR protection required by EU member states.
Spain (and the European Data Protection Supervisor)
The Spanish Agencia Española de Protección de Datos (AEPD) and the European Data Protection Supervisor (EDPS) have issued joint guidance on the requirements for anonymity and exemption from GDPR requirements. According to the EDPS and AEPD, “anonymisation procedures must ensure that not even the data controller is capable of re-identifying the data holders in an anonymised file.”6
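The joint AEPD/EDPS paper cited above addresses hash functions as a pseudonymisation technique. As a hypothetical illustration of why an unkeyed hash cannot meet this standard for a low-entropy identifier such as a phone number, consider the following Python sketch, in which the full input space is simply enumerated:

```python
import hashlib

# Hypothetical sketch: an unkeyed hash of a low-entropy identifier (an
# invented phone number) is not anonymisation, because the whole input
# space can be enumerated. Hashing ~10^9 candidate numbers is feasible
# on commodity hardware; only a narrow range is searched here to keep
# the demo fast.
published = hashlib.sha256(b"+34600123456").hexdigest()  # the "anonymised" value

for n in range(600123000, 600124000):
    candidate = f"+34{n}".encode()
    if hashlib.sha256(candidate).hexdigest() == published:
        print("re-identified:", candidate.decode())
        break
```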
Italy

The Italian Data Protection Authority (Garante) ruled against the Rousseau Association (as a data processor), finding that merely removing a telephone number is inadequate protection of personal data when other persistent unique identifiers remain that enable indirect linking to data subjects’ identities.7

Ireland
The Data Protection Commission (DPC) states in its Guidance on Anonymisation and Pseudonymisation: “As set out above, data can be considered ‘anonymised’ from a data protection perspective when data subjects are no longer identifiable, having regard to any methods reasonably likely to be used by the data controller - or any other person - to identify the data subject. Data controllers need to take full account of the latter condition when assessing the effectiveness of their anonymisation technique...If the data controller retains the raw data, or any key or other information which can be used to reverse the ‘anonymisation’ process and to identify a data subject, identification by the data controller must still be considered possible in most cases. Therefore, the data may not be considered ‘anonymised’, but merely ‘pseudonymised’ and thus remains personal data, and should only be processed in accordance with Data Protection law.”8
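A minimal Python sketch of the DPC’s point, assuming a keyed-hash (HMAC) pseudonymisation scheme and invented identifiers: anyone who retains the secret key can regenerate a pseudonym and re-link it to an identity, so in the controller’s hands the data remains pseudonymous, not anonymous.

```python
import hmac
import hashlib

# Hypothetical pseudonymisation step: a direct identifier is replaced by
# a keyed hash (HMAC-SHA256). SECRET_KEY plays the role of the
# re-identification "key" the DPC refers to.
SECRET_KEY = b"controller-held-secret"  # retained by the data controller

def pseudonymise(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"national_id": "AB123456C", "diagnosis": "asthma"}
shared = {"pid": pseudonymise(record["national_id"]),
          "diagnosis": record["diagnosis"]}

# A recipient without SECRET_KEY cannot feasibly reverse `pid`. But the
# controller, who retains the key, can re-link any candidate identity,
# so the shared data is merely pseudonymised in the controller's hands.
assert shared["pid"] == pseudonymise("AB123456C")
```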
France
In its recommendations on the implementation of anonymisation and pseudonymisation, the Commission Nationale de l’Informatique et des Libertés (CNIL) underlines that anonymisation must be irreversible: “Anonymisation is a processing operation that consists of using a set of techniques in such a way as to make it impossible, in practice, to identify the person by any means whatsoever and in an irreversible manner...Since the anonymisation process aims to eliminate any possibility of re-identification, the future exploitation of the data is thus limited to certain types of use.”10
Research by data scientists11 at Imperial College London and the Université Catholique de Louvain in Belgium, as well as a ruling by Judge Michal Agmon-Gonen of the Tel Aviv District Court,12 highlights the shortcomings of “anonymisation” in today’s Big Data world. Many believe that anonymisation reflects an outdated approach to data protection13 developed when data processing was limited to isolated (siloed) applications, before the rise of Big Data processing involving the widespread sharing and combining of data. This is why the Israeli judge in the above-cited case highlighted the relevance of the state-of-the-art data protection principles embodied in the GDPR, ruling that:
The increasing technological capabilities that enable the storage of large amounts of data, known as “big data”, and the trading of this information, enable the cross-referencing of information from different databases; thus even trivial information, such as location, may be cross-referenced with other data and reveal many details about a person, infringing upon his privacy.

Given the scope of data collection and use of information, the matters of anonymisation and re-identification have recently become important and relevant to almost every entity in Israel - both private and public - that holds a substantial amount of information.

Information technologies bring new challenges and ongoing privacy vulnerabilities. One of the solutions discussed in recent years is privacy engineering (privacy by design), i.e., designing technological systems in advance to include protection of privacy.

A binding rule regarding privacy engineering was established in the European Union: Article 25 of the General Data Protection Regulation (GDPR), which came into effect in 2018, imposes a duty on the data controller to implement appropriate and effective technological and organisational measures both at the stage of system planning and at the stage of information processing - in other words, it requires a process of privacy engineering.
For data to be “anonymous” on a global basis, it must not be capable of being cross-referenced with other data to reveal identity. This very high standard is required because data that satisfies it is treated as falling outside the scope of the legal protection provided under the GDPR - a treatment warranted only by the genuinely “safe”, protected nature of data that is neither cross-referenceable nor re-identifiable. The Israeli court in Disabled Veterans Association v. Ministry of Defense14 highlighted that this is generally not the case in today’s world of Big Data processing.
In today’s world of Big Data processing, data held by a data controller may be readily linkable with data beyond the controller’s control, thereby facilitating unauthorised re-identification.
Relying on the localised approach to anonymisation exposes organisations to the unintended risk of cross-border data transfer and processing violations. Therefore, as a precaution, parties should adopt the “global” approach to anonymisation and leverage GDPR-compliant pseudonymisation when engaging in cross-border data transfer and processing, particularly in light of the Schrems II ruling by the EU Court of Justice.
1 EU and UK GDPR Recital 26
2 Information Commissioner’s Office, Anonymisation: Managing Data Protection Risk Code of Practice, Annex 1, noting that pseudonymised information would not be personal data in the hands of a researcher who lacks access to the key. See https://ico.org.uk/media/1061/anonymisation-code.pdf
3 UK National Health Service Health Research Authority, Controllers and personal data in health and care research. See https://www.hra.nhs.uk/planning-and-improving-research/policies-standards-legislation/data-protection-and-information-governance/gdpr-guidance/what-law-says/data-controllers-and-personal-data-health-and-care-research-context/
4 See “They who must not be identified - distinguishing personal from non-personal data under the GDPR” (2020) International Data Privacy Law, 2020, Vol. 10, No. 1, at page 20 at https://academic.oup.com/idpl/article/10/1/11/5802594
5See “The Law of Everything. Broad Concept of Personal Data and Future of EU Data Protection Law” (2018) 10 Law, Innovation and Technology at page 40 at https://www.tandfonline.com/doi/full/10.1080/17579961.2018.1452176
6 See https://edps.europa.eu/sites/edp/files/publication/19-10-30_aepd-edps_paper_hash_final_en.pdf
7 See https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9101974
8 See https://www.dataprotection.ie/sites/default/files/uploads/2019-06/190614%20Anonymisation%20and%20Pseudonymisation.pdf
9 See https://gdpr.eu/data-anonymization-taxa-4x35/
10 See https://www.cnil.fr/fr/lanonymisation-de-donnees-personnelles
11 See https://www.nytimes.com/2019/07/23/health/data-privacy-protection.html?smid=nytcore-ios-share
12 See https://www.nevo.co.il/psika_html/minhali/MM-17-06-28857-22.htm
13 See https://www.timesofisrael.com/data-is-up-for-grabs-under-outdated-israeli-privacy-law-think-tank-says/
14 Supra, note 12