Achieving AI’s Full Potential With “Fair Trade Data”
Regulating the future of data and AI means making rules for a world that we do not yet fully understand. While AI is still in its infancy, the time is now to put safeguards in place to ensure that individuals are protected while still fostering an environment that encourages innovation. By establishing “Fair Trade Data” standards (as described below), we can drastically reduce the risk that algorithms cause unintentional harm to any group or individual.
Data protection authorities now require greater transparency to ensure that secondary processing of data is “trustworthy,” “ethical,” and “non-discriminatory.” The technical safeguards embodied in Fair Trade Data are designed to preserve data fidelity and reduce the possibility of re-identification, bias and discrimination, while maintaining accuracy and the highest levels of trust in the observations and decisions that result from the data’s use. The availability of Fair Trade Data is paramount to creating much-needed transparency around the provenance of the input datasets used to train AI applications. Data controllers and processors should be required to ensure that the steps they take in connection with AI processing limit the risk that individual data subjects’ personal data will be misused against them.

If these protections are not put into place in the near future, individual data subjects may suffer as countries and commercial enterprises invest billions in AI, analytics and machine learning. The rights of data subjects may otherwise be sacrificed, with no avenue for recovery, if companies (i) elect to fight in court rather than change the processes in which they have invested, and (ii) decide that the most cost-effective course of action is “regulatory arbitrage” against a perceived low risk and cost of enforcement action.
Realizing the full potential of AI is threatened by increasingly voiced concerns over potential bias, discrimination and violations of data subjects’ privacy. To date, the opportunity presented by increasingly sophisticated data science technologies has been undermined by privacy enhancement techniques that purport to protect the identity of the data subjects in any one dataset but fall far short of that goal in a big data world. These techniques, including anonymization in combination with generalization and de-identification, introduce significant distortion into the data, so conclusions drawn from a protected dataset may diverge erroneously from those supported by the original, unprotected data.
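To make the distortion concrete, here is a minimal, purely illustrative sketch (not any vendor’s method): generalization, a common static anonymization step, replaces exact values with broad bands, and analysis run on the banded data can drift from the true result. The ages, band width, and function names are invented for the example.

```python
def band_midpoint(age, width=20):
    """Generalize an exact age to the midpoint of its width-year band,
    e.g. with width=20, age 23 falls in [20, 40) and becomes 30.0."""
    low = (age // width) * width
    return low + width / 2

ages = [21, 22, 23, 58, 61]                      # original (identifying) values
true_mean = sum(ages) / len(ages)                # 37.0

generalized = [band_midpoint(a) for a in ages]   # [30.0, 30.0, 30.0, 50.0, 70.0]
gen_mean = sum(generalized) / len(generalized)   # 42.0

# The protected dataset overstates the mean age by 5 years -- the kind
# of distortion that can skew conclusions drawn from anonymized data.
print(true_mean, gen_mean)
```

The coarser the bands, the stronger the privacy protection but the larger the analytical error, which is the trade-off the paragraph above describes.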
For these purposes, “Fair Trade Data” refers to data with embedded, technically enforced, granular privacy controls that eliminate the risk of “Conflict Data” (as defined below) and protect against bias, discrimination, and violation of data subjects’ privacy. In contrast, “Conflict Data” describes personal information that is at risk of being used to the disadvantage of the individual it concerns, just as “conflict diamonds” are used to the disadvantage of the country in which they are illegally mined.
Current industry practices have outpaced the ability of policies alone or outdated technical approaches to adequately protect against bias, discrimination and violation of fundamental rights of privacy. Current data processing capabilities and practices require new Fair Trade Data controls that enforce:
- TECHNICAL AND ORGANIZATIONAL SAFEGUARDS REQUIRED FOR PRE-GDPR DATA AND ADVANCED PROCESSING TO BE LEGAL: The GDPR requires technical and organizational safeguards that: (a) transform pre-GDPR data so that it becomes legal to possess and process; and (b) support a non-consent, non-contract legal basis (Legitimate Interest processing) for advanced analytics, AI, marketing, and other iterative processing applications to be lawful under the GDPR.
- DATA USE MINIMIZATION (VS COLLECTION OR RETENTION MINIMIZATION) BY DYNAMICALLY CONTROLLING RE-IDENTIFICATION: Maximize authorized and minimize unauthorized uses of data by dynamically reducing re-identification risks.
- TRANSPARENCY AND AUDIT CONTROLS: Enable the availability of statistical properties of data sets to aid in interpreting decisions made using the data and to ensure auditable compliance with data privacy and use policies.
- CROSS-SECTIONAL POLICY ENFORCEMENT: Enable common data store(s) to programmatically support the data protection and privacy rights management policies applicable to different entities and locations (e.g., companies, industries, states, countries, regions) – and to do so simultaneously.
- REAL-TIME POLICY ADJUSTMENT: Adapt in real time to changing policy requirements by dynamically modifying the intelligible form into which obscured data are transformed.
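The re-identification and policy controls above can be sketched in miniature. The following is a hypothetical illustration, not Anonos’s actual technology: the same identity receives different, non-linkable tokens for different purposes (limiting unauthorized correlation), and re-identification is gated behind an explicit, adjustable policy. All class and method names are invented for the example.

```python
import hashlib
import hmac


class DynamicPseudonymizer:
    """Illustrative sketch of dynamically controlled re-identification:
    per-purpose tokens plus a policy gate that can change at any time."""

    def __init__(self, secret: bytes):
        self._secret = secret
        self._vault = {}    # token -> original identifier (held by the controller)
        self._policy = set()  # purposes currently authorized to re-identify

    def allow_reidentification(self, purpose: str) -> None:
        self._policy.add(purpose)

    def revoke_reidentification(self, purpose: str) -> None:
        self._policy.discard(purpose)  # real-time policy adjustment

    def tokenize(self, identifier: str, purpose: str) -> str:
        # Keyed hash over (purpose, identifier): the same person gets
        # unrelated tokens under different purposes, so datasets released
        # for different uses cannot be trivially linked.
        msg = f"{purpose}:{identifier}".encode()
        token = hmac.new(self._secret, msg, hashlib.sha256).hexdigest()[:16]
        self._vault[token] = identifier
        return token

    def reidentify(self, token: str, purpose: str) -> str:
        if purpose not in self._policy:
            raise PermissionError(f"purpose '{purpose}' may not re-identify")
        return self._vault[token]
```

In use, `tokenize("alice@example.com", "analytics")` and `tokenize("alice@example.com", "marketing")` yield different tokens, and `reidentify` succeeds only for purposes the controller has currently authorized; revoking a purpose immediately closes that path without reissuing the data.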
These Fair Trade Data principles are consistent with the intentional omission of any “grandfather” provision under GDPR Recital 171, as well as with the principles of lawfulness, purpose limitation, data minimization, and data protection by design and by default under GDPR Articles 5(1)(a), (b) and (c) and Article 25.
Gary LaFever is CEO and General Counsel of Anonos, a technology firm specializing in maximizing data innovation and value while complying with evolving data protection laws and technology-based regulation. Gary has been consulted by leading global corporations, international regulatory bodies and the United States Congress for his expertise on data privacy. Gary was formerly a partner at the top-rated international law firm of Hogan Lovells.
This article originally appeared in Lexology. All trademarks are the property of their respective owners. All rights reserved by the respective owners.