GARY LAFEVER | 05/01/19

A 'fair-trade data' standard for AI

This article originally appeared in The Hill.  

In recent months, policymakers have begun turning their attention to the hazards that may come as we turn over an increasing amount of decision-making responsibility to artificial intelligence, or “AI.” These are not the Hollywood-driven fears of AI-gone-mad that found their way into our cultural subconscious through films and TV; rather, the risk posed by AI as it exists today is far subtler and arguably more insidious. The threat is not a hostile takeover by a malevolent computer, but instead the “baking in” of human prejudices, biases and injustices into seemingly dispassionate computer code.

AI and machine learning systems only work if they’re given a stream of data — data collected from countless individuals that is fed into a series of algorithms and used to help streamline complex decisions. But these decisions have real, tangible consequences: Does a mortgage get approved? Will insurance cover a medical procedure? Which resumes are surfaced to hiring managers?

It’s easy to see how the wrong data or algorithm could inadvertently replicate conscious or unconscious human biases and bake them into systems too complex and deeply embedded for anyone to reasonably monitor. And so, as members of Congress and other policymakers look to build a framework for the ethical development and deployment of artificial intelligence, the key things they must consider are where data is coming from, how it’s being processed, and whether it respects the privacy and anonymity of the individuals who supply it.

In the 1990s, the term “conflict resource” entered the public consciousness — precious, valuable materials being mined and sold for the purpose of funding violence and exploitation. As part of a global outcry against these materials, an infrastructure was created to certify “conflict-free” or “fair trade” supply chains, allowing for more ethical use and consumption of necessary materials. As data becomes the currency of an AI-driven economy, we must similarly build an infrastructure that limits and sanctions “conflict data” and promotes an ethical “fair trade data” standard.

So, what would “fair trade data” look like?

First, it must be subject to technically enforced data use minimization controls — that is to say, data should carry embedded controls so it can be used only for authorized, intended purposes. Demographic data collected to identify potential medical conditions, for instance, should not find itself used to better target advertisements.
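The idea of an embedded, technically enforced purpose limitation can be sketched in a few lines of code. The class and field names below are purely illustrative, not an actual product API: the point is that the control travels with the data, so any access outside the authorized purpose fails by construction.

```python
from dataclasses import dataclass, field


@dataclass
class PurposeBoundRecord:
    """Hypothetical record whose fields can only be read for approved purposes."""
    data: dict
    allowed_purposes: set = field(default_factory=set)

    def get(self, field_name: str, purpose: str):
        # Embedded control: refuse any access outside the authorized purposes.
        if purpose not in self.allowed_purposes:
            raise PermissionError(f"purpose '{purpose}' is not authorized")
        return self.data[field_name]


record = PurposeBoundRecord(
    data={"age": 54, "blood_pressure": "140/90"},
    allowed_purposes={"medical-screening"},
)
record.get("age", "medical-screening")   # permitted: the collected-for purpose
try:
    record.get("age", "ad-targeting")    # blocked: never an authorized purpose
except PermissionError as err:
    print("denied:", err)
```

In a real system the enforcement point would sit in infrastructure the data consumer cannot bypass, but the principle is the same: the purpose check is part of the data object, not a policy document sitting beside it.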

Second, it must be designed to decrease bias and discrimination as much as possible. This is made possible by dynamic functional separation of data sets: individual data elements — names, birthdates, addresses, gender, etc. — are kept separate in all instances, except when they need to be relinked for a specific purpose, based on authorized use and established regulations. If an attribute like race or gender is not needed for an AI task, it should not be considered, even if that data is available.
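A minimal sketch of that functional separation, with entirely illustrative names (this is not an actual Anonos API): direct identifiers and protected attributes are split out of the working data set into separate vaults, re-linkable only through a pseudonymous key held under authorized use.

```python
import secrets


def separate(records, identifying, protected):
    """Split each record into a working set plus identifier and protected vaults."""
    working, id_vault, prot_vault = [], {}, {}
    for rec in records:
        key = secrets.token_hex(8)  # pseudonymous link key for authorized relinking
        id_vault[key] = {f: rec[f] for f in identifying}
        prot_vault[key] = {f: rec[f] for f in protected}
        # The working record carries neither identifiers nor protected classes.
        rest = {f: v for f, v in rec.items()
                if f not in identifying and f not in protected}
        rest["key"] = key
        working.append(rest)
    return working, id_vault, prot_vault


records = [{"name": "Ada", "gender": "F", "income": 72000}]
working, ids, prot = separate(records, identifying=["name"], protected=["gender"])
# An AI task sees only `working`; gender stays in its vault unless a
# specific authorized purpose requires relinking via the key.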

Third, it must only be stored and shared with built-in safeguards, such as dynamic, use-case-specific de-identification designed to preserve individuals’ privacy rights, ensuring that it stays “ethical” even if the data set is leaked or as regulations evolve.
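Use-case-specific de-identification can be sketched as a policy function that transforms the same record differently for each authorized use, so that a leaked copy reveals only what that use case needed. The use cases and field names here are hypothetical examples, not a prescribed scheme.

```python
import hashlib


def deidentify(record, use_case):
    """Return a copy of the record de-identified for one authorized use case."""
    if use_case == "population-research":
        # Generalize quasi-identifiers: exact age becomes a band,
        # a full ZIP code is truncated to its 3-digit prefix.
        return {"age_band": f"{record['age'] // 10 * 10}s",
                "zip3": record["zip"][:3]}
    if use_case == "billing":
        # Replace the name with a one-way pseudonym (a production system
        # would use a keyed hash or token vault, not a bare digest).
        pseudo = hashlib.sha256(record["name"].encode()).hexdigest()[:12]
        return {"patient_pseudonym": pseudo,
                "amount_due": record["amount_due"]}
    raise ValueError(f"no de-identification policy for use case '{use_case}'")


patient = {"name": "Ada Lovelace", "age": 36, "zip": "30301", "amount_due": 120}
print(deidentify(patient, "population-research"))
# {'age_band': '30s', 'zip3': '303'} — no name, no billing data
```

Because the transformation is selected dynamically per use case, the same underlying data can serve research and billing without either recipient ever holding more than its purpose requires.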

With these principles at its core, “fair trade data” is designed to both maintain fidelity of the information and reduce the possibility of re-identification, bias and discrimination. At the same time, it still allows for researchers, companies, government agencies and more to use data, ethically, to solve big problems, streamline complex processes, and make difficult decisions. If these kinds of protections are not put into place, individuals will suffer as their data is processed in unforeseen ways, shared with unethical actors or used to reinforce structural and systemic biases.

This is the greatest challenge for policymakers — regulating the future of data and AI means making rules for a world we do not yet fully understand. AI is in its infancy, but that means the time is now to put safeguards in place to ensure that individuals are protected while still fostering an environment that encourages innovation. By setting standards for data usage, storage and sharing, we drastically reduce the chances that an algorithm is able to cause unintentional harm to any group or individual.

Gary LaFever is CEO and General Counsel of Anonos, a technology firm specializing in data risk management, security and privacy. LaFever has been consulted by leading global corporations, international regulatory bodies and the United States Congress for his expertise on data privacy. He was formerly a partner at the top-rated international law firm of Hogan Lovells. Follow him on Twitter @GaryLaFever and on LinkedIn.



Are you facing any of these four problems with data?

You need a solution that removes the impediments to achieving speed to insight, lawfully and ethically:

1. Are you unable to get desired business outcomes from your data within critical time frames? 53% of CDOs cannot achieve their desired uses of data. Are you one of them?

2. Do you have trouble getting access to the third-party data that you need to maximise the value of your data assets? Are the third parties and partners you work with worried about liability, or disruption of their operations?

3. Are you unable to process data due to limitations imposed by internal or external parties? Do they have concerns about your ability to control data use, sharing or combining?

4. Are you unable to defend the lawfulness of your current data processing activities, or of data processing you have done in the past?
Traditional privacy technologies focus on protecting data by putting it in “cages” or “containers,” or by limiting use to centralised processing only. These controls are applied without considering the context of the desired data use, including decentralised data sharing and combining. Such approaches rest on decades-old, limited-use perspectives on data protection that severely restrict the kinds of data uses that remain available after controls have been applied. On the other hand, many newer data-use technologies focus on delivering desired business outcomes without considering the roadblocks that may exist, such as the four problems above.
Anonos technology allows data to be accessed and processed in line with desired business outcomes (including sharing and combining data) with full awareness of, and the ability to remove, potential roadblocks.