The Anonos Advantage

Gartner award-winning Variant Twins enable you to legally process Analytics, AI & ML under the GDPR.


Anonos helps businesses become data-driven without compromising GDPR compliance obligations.


Anonos’ patented BigPrivacy technology enables the creation of non-identifying Variant Twins®: versions of personal data that enable compliant analytics, AI and data sharing.


Anonos is more than GDPR compliance technology. It engineers privacy into solutions to enable analytics.


“90% of global data is illegal under the GDPR.”
CEO, Top Ten IT Company


Fair and Lawful Processing of Analytics, AI & ML

Problem Statement: After Subject Access Requests (SARs), Unfair Processing is the most common complaint by data subjects under the GDPR, driven by failure to satisfy the following requirements:


Solution: Anonos’ revolutionary BigPrivacy technology uniquely enables Fair Processing of Analytics, AI and ML, balancing the interests of data subjects and data controllers through the following capabilities:


  • Requirement for data controllers to take action to legally possess personal data
  • New safeguards now required for Analytics, AI and ML to be lawful
  • Obligation to prevent unlawful processing and discriminatory practices and prove it
  • Transform data into Variant Twins® so it is lawful for centralized and distributed use
  • Dynamically control re-identification of data subjects
  • Dynamically control processing to be lawful and non-discriminatory
Legally Possess Personal Data


Before organisations engage in data inventories and privacy/data impact assessments, they must take action to continue to legally possess personal data collected using noncompliant “broad-based” consent. The Article 29 Working Party made clear, in the last paragraph on page 31 of its April 2018 Guidance on Consent, that such data is no longer legal to possess or process under the GDPR without further action. The GDPR contains no grandfather or “savings” clause that makes data collected in noncompliance with GDPR requirements legal to possess or process.

New Safeguards Now Required


When Analytics, AI and ML cannot be described at data collection with requisite specificity, unambiguity and voluntariness, consent is not available as a legal basis for processing. To satisfy the alternative legal basis of Legitimate Interest processing, data controllers must mitigate data subject risk by enforcing new technical and organisational safeguards to control what data is provided for each specific use and control the ability to re-link non-PII indirect identifiers to satisfy:

  • Legitimate Interest Processing - Articles 5(1)(a) & 6(1)(f);
  • Compatible Further Processing - Articles 5(1)(b) and 6(4); and
  • Data Minimization/Data Protection by Design & by Default - Articles 5(1)(c) and 25.
Obligation To Prevent Unlawful Processing


“Security-only” solutions (like encryption and static tokenization) control only who can access data and do not support a Legitimate Interest legal basis for processing personal data. Written contracts and policies by themselves do not mitigate risk because they do not prevent improper use; they only describe the boundaries of desired proper use. The GDPR requires technical and organisational safeguards over what data is provided for each specific use, and control over the ability to relink non-PII indirect identifiers, so that Analytics, AI & ML satisfy GDPR requirements for:

  • Legitimate Interest Processing - Articles 5(1)(a) & 6(1)(f);
  • Compatible Further Processing - Articles 5(1)(b) and 6(4); and
  • Data Minimization/Data Protection by Design & by Default - Articles 5(1)(c) and 25.
Transform Data Into Variant Twins

Gartner identifies “Digital Twins” - digital representations of real-world entities or systems - as a top-ten strategic trend for 2019. However, Gartner also highlights the “privacy paradox” between customers’ concerns over protecting the privacy of their “Digital Twin” and their desire for more personalized products, services and experiences.

A graphic from the Georgetown Law Technology Review article Re-Identification of “Anonymized Data” shows that data exists along a spectrum of identifiability, with privacy and utility at opposite ends: maximum utility (and identifiability) at one end and maximum privacy at the other. The article highlights that traditional privacy technologies fail to reconcile the conflict between data utility and privacy in two respects: they fail to effectively protect privacy while also failing to maximize the utility of data.

Anonos BigPrivacy dynamically transforms identifying Digital Twin personal data into non-identifying Variant Twin® versions of data to reconcile the “privacy paradox” and ease the conflict between data privacy and utility. By dynamically managing the risk of re-identification of both direct and indirect identifiers, BigPrivacy enforces privacy-respectful data use in a way that is auditable and demonstrable. BigPrivacy supports GDPR-compliant Legitimate Interest processing by dynamically enforcing technical and organisational safeguards (including GDPR-certified Pseudonymisation) to mitigate the risk to data subjects and enable privacy-respectful personalized products, services and experiences.

The difference between Anonos’ BigPrivacy-enabled Variant Twins and other privacy technologies (including differential privacy, anonymisation and static tokenisation) is that Variant Twins support the multiple-use, dataset combination and decentralized processing that are necessary for successful Analytics, AI and ML.

Dynamically Control Re-Identification

Traditional privacy technologies seek to balance the utility of data output against the risk of unauthorised re-identification of individuals within the scope of the output generated for a specific use. These technologies introduce uncertainty or “entropy” via static tokens, but only as measured within the scope or boundary of a particular dataset. Re-identification risk is assessed for the dataset as a stand-alone object, as if other datasets that might share common static tokens or indirect identifiers did not exist. In the not-so-distant past, when data was less accessible, harder to move, narrower in scope, and collected in much smaller quantities, re-identification was difficult and costly in time, effort, and resources, and traditional privacy technologies could reasonably mitigate re-identification risk while preserving data utility. In today’s world, where the volume, velocity, and variety of data are exploding at exponential rates, “easy re-identification” is the new reality. As a result, traditional privacy technologies can no longer adequately mitigate the risk of unauthorised re-identification from “linkage attacks” exploiting the “Mosaic Effect” when data is so easily combined with other data or put to a use beyond the original purpose - core objectives of successful Analytics, AI & ML initiatives.
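The linkage risk described above can be illustrated with a minimal sketch. All data, names and field names below are hypothetical: two independently "de-identified" datasets that derive the same static token from the same identity hand an attacker a ready-made join key.

```python
# Sketch of a "linkage attack" enabled by static tokenization.
# Both datasets replace the name with the same static token, so the
# token itself becomes a join key across datasets (the "Mosaic Effect").
# All data and field names here are hypothetical.

import hashlib

def static_token(name: str) -> str:
    # Static tokenization: the same input always yields the same token.
    return hashlib.sha256(name.encode()).hexdigest()[:8]

# Dataset A: "de-identified" medical records
medical = {static_token("Alice Smith"): {"diagnosis": "diabetes"}}

# Dataset B: "de-identified" loyalty-card records that leak a ZIP code
loyalty = {static_token("Alice Smith"): {"zip": "10001"}}

# The shared static token links the two records back to one person:
for token, record in medical.items():
    if token in loyalty:
        merged = {**record, **loyalty[token]}
        print(token, merged)  # same individual linked across datasets
```

Because the token is deterministic, re-identification risk cannot be judged for either dataset in isolation; it depends on every other dataset that uses the same tokenization.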

In stark contrast, Anonos’ award-winning BigPrivacy data protection and enablement technology (named a Gartner Cool Vendor for Privacy Management and highlighted in a special IDC Research Report) introduces uncertainty or “entropy” that persists at the data element level regardless of scope. It does so by leveraging patented, GDPR-certified dynamic pseudonymisation technology to protect data privacy across multiple uses and multiple data sources - that is, without limiting protection against unauthorised re-identification to data output within a single dataset or for a specific use - which is essential for Analytics, AI & ML initiatives to be legal under the GDPR and evolving data protection laws.
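A minimal sketch can show the general idea of dynamic pseudonymisation; it is an illustration of the technique, not Anonos’ implementation, and the key, identities and release names are hypothetical. Each data release derives a different keyed token for the same identity, so releases share no join key, while the key holder can still re-link a token when authorised.

```python
# Sketch of dynamic pseudonymisation (illustrative only, not Anonos'
# implementation). Each release salts the keyed token with a release
# identifier, so the same person gets a different token in every
# release and tokens cannot be joined across releases.

import hashlib
import hmac

SECRET_KEY = b"held-only-by-the-data-controller"  # hypothetical key

def dynamic_token(identity: str, release_id: str) -> str:
    # The release_id varies the token per use; only the key holder
    # can recompute it to re-link a token to an identity.
    msg = f"{release_id}:{identity}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:12]

t1 = dynamic_token("Alice Smith", "analytics-2024-q1")
t2 = dynamic_token("Alice Smith", "ml-training-set-7")

print(t1 != t2)  # True: no shared join key across releases

# Authorised re-identification: the key holder recomputes the token
# for a known release and compares in constant time.
assert hmac.compare_digest(t1, dynamic_token("Alice Smith", "analytics-2024-q1"))
```

The design choice is that “entropy” is attached to each use of the data rather than to the dataset as a whole, which is why protection persists when outputs are combined or repurposed.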

Dynamically Control Processing

The Anonos BigPrivacy Privacy Pipeline® has patented, automated risk-assessment scoring capabilities that leverage state-of-the-art, dynamically applied Privacy Enhancing Techniques (PETs) to measure the risk of re-identification. It additionally enforces risk assessments in real time through enterprise-class, scalable, real-time streaming technologies. These risk assessment and scoring capabilities provide a transparent Data Protection by Design and by Default and Privacy Impact Assessment (PIA) audit trail.

In addition to enforcing enterprise-grade risk assessment and scoring capabilities, the Privacy Pipeline also supports transparent fair processing and ethical Analytics, AI and ML by enabling:

  • Algorithmic fairness via bias-free processing; and
  • Support for classification-boundary learning: when a user unsubscribes, personal data is converted into non-identifying, information-rich data, preserving the negative examples that machine learning requires to learn the boundary separating positive and negative decisions.

"Are you looking for the right solution to meet your enterprise needs?"




Future of Privacy Forum Chief Executive Officer

“Anonos shows there are smart technical and policy solutions that can ensure we gain the benefits of new data uses while avoiding the risks.”


Information Accountability Foundation Executive Director & Chief Strategist

“Anonos makes effective controls possible that break the stalemate between responsible use and data obscurity.”


Sage Bionetworks Chief Commons Officer

“The potential to bring technical and organization approaches into data privacy debates that desperately need new concepts.”

Want to learn more or request a demo?
Get in touch using the form below.

Anonos does not sell or share your information.