April 24, 2019 | Gary LaFever

Achieving AI’s Full Potential With “Fair Trade Data”

Regulating the future of data and AI means making rules for a world that we do not yet fully understand. While AI is still in its infancy, the time is now to put safeguards in place to ensure that individuals are protected while still fostering an environment that encourages innovation. By establishing “Fair Trade Data” standards (as described below), we can drastically reduce the risk that algorithms cause unintentional harm to any group or individual.

Data protection authorities now require greater transparency to ensure that secondary processing of data is “trustworthy,” “ethical,” and “non-discriminatory.” The technical safeguards embodied in Fair Trade Data are designed to preserve data fidelity and reduce the possibility of re-identification, bias and discrimination, while maintaining accuracy and the highest levels of trust in the observations and decisions that result from the data’s use. The availability of Fair Trade Data is paramount to creating much-needed transparency around the provenance of the input datasets used to train AI applications.

Data controllers and processors should be required to ensure that the steps they take in connection with AI processing limit the risk that data subjects’ personal data will be misused against them. If these protections are not put in place in the near future, individual data subjects may suffer as countries and commercial enterprises alike invest billions in AI, analytics and machine learning. The rights of data subjects may otherwise be sacrificed, with no avenue for recovery, if companies (i) elect to fight in court rather than change the processes in which they have invested, and (ii) decide that the most cost-effective course of action is “regulatory arbitrage” against a perceived low risk and cost of enforcement action.

Realizing the full potential of AI is at risk as concerns are increasingly raised over potential bias, discrimination and violations of data subjects’ privacy. To date, the opportunity presented by increasingly sophisticated data science technologies has been undermined by privacy enhancement techniques that purport to protect the identity of the data subjects in any one dataset but fall far short of this goal in a big data world. The application of these techniques, including anonymization in combination with generalization and de-identification, introduces significant distortion into data, which can lead to erroneous conclusions being drawn from a protected dataset as compared with the original data in its non-protected form.
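
To make this distortion concrete, the sketch below shows how generalizing a quasi-identifier such as age into coarse bins protects against singling out individuals but shifts statistics computed from the protected data away from those of the original. The numbers and bin width are made-up illustrations, not drawn from any real dataset:

```python
# Minimal illustration of generalization distorting analysis.
# The data below are hypothetical; only the mechanism matters.

ages = [23, 27, 31, 34, 38, 41, 45, 52, 58, 63]
incomes = [31, 35, 42, 48, 55, 61, 66, 72, 75, 80]  # in $1,000s

def mean(xs):
    return sum(xs) / len(xs)

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def generalize(age, bin_width=20):
    """Replace an exact age with the midpoint of its bin_width-year bin."""
    return (age // bin_width) * bin_width + bin_width / 2

generalized = [generalize(a) for a in ages]

print(f"original age/income correlation:    {correlation(ages, incomes):+.3f}")
print(f"generalized age/income correlation: {correlation(generalized, incomes):+.3f}")
```

The wider the bins, the further the protected dataset’s statistics drift from the original’s, so a model trained on the generalized data learns subtly different relationships than exist in the source data.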

For these purposes, “Fair Trade Data” refers to data that has embedded, technically enforced, granular privacy controls to eliminate the risk of “Conflict Data” (as defined below) and protect against bias, discrimination, and violation of data subjects’ privacy. In contrast, “Conflict Data” describes personal information that is at risk of being used to the disadvantage of the individual it concerns. The term is analogous to “conflict diamonds,” which are illegally mined and then used to the disadvantage of the very country from which they are taken.

Industry practices have outpaced the ability of policies alone, or of outdated technical approaches, to protect adequately against bias, discrimination and violations of the fundamental right to privacy. Today’s data processing capabilities and practices require new Fair Trade Data controls that enforce:

  • TECHNICAL AND ORGANIZATIONAL SAFEGUARDS REQUIRED FOR PRE-GDPR DATA AND ADVANCED PROCESSING TO BE LEGAL: The GDPR requires technical and organizational safeguards that: (a) transform pre-GDPR data so that it remains legal to possess and process; and (b) support a non-consent, non-contract legal basis (Legitimate Interest processing) for advanced analytics, AI, marketing and other iterative processing applications to be lawful under the GDPR.
  • DATA USE MINIMIZATION (VS. COLLECTION OR RETENTION MINIMIZATION) BY DYNAMICALLY CONTROLLING RE-IDENTIFICATION: Maximize authorized, and minimize unauthorized, uses of data by dynamically reducing re-identification risks (see the sketch following this list).
  • TRANSPARENCY AND AUDIT CONTROLS: Enable the availability of statistical properties of data sets to aid in interpreting decisions made using the data and to ensure auditable compliance with data privacy and use policies.
  • CROSS-SECTIONAL POLICY ENFORCEMENT: Enable common data store(s) to programmatically support the data protection and privacy rights management policies applicable to different entities and locations (e.g., companies, industries, states, countries and regions) – and to do so simultaneously.
  • REAL-TIME POLICY ADJUSTMENT: Adjust in real time to changing policy requirements by dynamically modifying the intelligible form into which obscured data are transformed.
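
As a concrete, if simplified, illustration of the dynamic re-identification and real-time policy controls above, the sketch below shows one way they could fit together. The class, method names and design are hypothetical assumptions for illustration, not Anonos’s implementation: the same identifier yields a different, unlinkable token for each purpose and time period, and a revocable policy gate determines which purposes may re-identify.

```python
# Hypothetical sketch of dynamically controlled re-identification.
# Not Anonos's implementation; names and design are illustrative only.

import hmac
import hashlib
import secrets

class DynamicPseudonymizer:
    def __init__(self):
        self._keys = {}       # one secret key per (purpose, epoch)
        self._policy = set()  # purposes currently authorized to re-identify

    def authorize(self, purpose):
        self._policy.add(purpose)

    def revoke(self, purpose):
        # Real-time policy adjustment: revoking a purpose immediately
        # withdraws re-identification authority for it.
        self._policy.discard(purpose)

    def _key(self, purpose, epoch):
        if (purpose, epoch) not in self._keys:
            self._keys[(purpose, epoch)] = secrets.token_bytes(32)
        return self._keys[(purpose, epoch)]

    def pseudonymize(self, identifier, purpose, epoch):
        # A keyed hash gives each (purpose, epoch) its own token space,
        # so tokens cannot be linked across datasets or uses.
        key = self._key(purpose, epoch)
        return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()[:16]

    def can_reidentify(self, purpose):
        return purpose in self._policy

p = DynamicPseudonymizer()
p.authorize("fraud-review")

# The same person receives unlinkable tokens under different purposes.
t1 = p.pseudonymize("alice@example.com", "fraud-review", epoch=1)
t2 = p.pseudonymize("alice@example.com", "marketing", epoch=1)
print(t1 != t2)                           # True: tokens do not link across uses
print(p.can_reidentify("fraud-review"))   # True: policy permits this purpose
p.revoke("fraud-review")
print(p.can_reidentify("fraud-review"))   # False: adjusted in real time
```

In a scheme along these lines, re-identification requires both the token and a key held under policy control, so adjusting the authorized-purpose set changes, in real time, which intelligible form (if any) a given data user can recover.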

These Fair Trade Data principles are consistent with the intentional omission of any “grandfather” provision under GDPR Recital 171, as well as with the principles of lawfulness, purpose limitation, data minimization, and data protection by design and by default under GDPR Article 5(1)(a), (b) and (c) and Article 25.




Gary LaFever is CEO and General Counsel of Anonos, a technology firm specializing in maximizing data innovation and value while complying with evolving data protection laws and technology-based regulation. Gary has been consulted by leading global corporations, international regulatory bodies and the United States Congress for his expertise on data privacy. Gary was formerly a partner at the top-rated international law firm of Hogan Lovells.

This article originally appeared in Lexology. All trademarks are the property of their respective owners. All rights reserved by the respective owners.

 