GDPR Recitals 78 and 83 and Articles 25 and 32 require deployment to the fullest extent possible of the state of the art in data protection processing controls and security technologies.
 Controlled Linkable Data was presented at an International Association of Privacy Professionals (IAPP) program entitled General Data Protection Regulation (GDPR) Big Data Analytics, featuring Gwendal Le Grand, Director of Technology and Innovation at the French Data Protection Authority (the CNIL); Mike Hintze, Partner at Hintze Law and former Chief Privacy Counsel and Assistant General Counsel at Microsoft; and Gary LaFever, CEO at Anonos and former Partner at Hogan Lovells (see https://anonos.com/GDPR_Industry_FAQ.pdf ). It is explained in a White Paper co-authored by Messrs. Hintze and LaFever entitled Meeting Upcoming GDPR Requirements While Maximizing the Full Value of Data Analytics (see https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2927540).
 Article 52 of the EU Charter of Fundamental Rights and GDPR Recitals 4, 156 and 170 and Articles 6, 24 and 35 reference EU proportionality principles.
 New pseudonymisation requirements are set forth in GDPR Recitals 26, 28, 29, 75, 78, 85, 156 and Articles 4, 25, 32 and 89. If a vendor claims to “pseudonymise” data to comply with the GDPR, it is important to verify whether the vendor uses static pseudonymous tokens or dynamically changing pseudonymous tokens. GDPR Article 4(5) defines GDPR-compliant pseudonymisation as requiring separation of the information value of personal data from the means of attributing the data back to individual data subjects. Traditional approaches to pseudonymisation replace each data element with a persistent, or static, pseudonymous token. Using a simplistic example, the zip code value 20500 in a database would be replaced with a static pseudonym (or token value) of 6%3a8, and this same pseudonym would be used to replace every occurrence of zip code 20500. Due to advances in technology and threat-actor sophistication, persistent (static) pseudonyms can be readily linked back to individuals via the “Mosaic Effect,” without any need for the keys that reveal their underlying values, in violation of the restrictions stated in Article 4(5). Persistent (static) pseudonyms therefore fail to comply with the new GDPR requirement to separate data from the means of attributing information back to individuals. In contrast, dynamically changing pseudonymous tokens do separate the information value of personal data from the means of attribution, and only they satisfy the state of the art requirements of the GDPR.
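The distinction can be illustrated with a minimal Python sketch. This is a hypothetical toy, not any vendor's actual implementation; the token format, table names and example zip code are invented for illustration only.

```python
import secrets

# Static pseudonymisation: one persistent token per value.
# Every occurrence of "20500" maps to the same token, so the token
# itself becomes a linkable quasi-identifier across data sets.
static_map = {}

def static_token(value):
    if value not in static_map:
        static_map[value] = secrets.token_hex(4)
    return static_map[value]

# Dynamic pseudonymisation: a fresh token for each occurrence.
# The information value (the record) is separated from the means of
# re-attribution (the lookup table), which would be held apart under
# controlled access, reflecting the Article 4(5) separation idea.
lookup_table = {}  # token -> original value; stored separately

def dynamic_token(value):
    token = secrets.token_hex(4)
    lookup_table[token] = value
    return token

records = ["20500", "20500", "20500"]
print({static_token(z) for z in records})   # one repeated token
print({dynamic_token(z) for z in records})  # three distinct tokens
```

With static tokens, an adversary who observes the same token in two data sets can link the records without ever learning the key; with dynamic tokens, linkage requires access to the separately held lookup table.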
An example of the “Mosaic Effect” is available at http://dataprivacylab.org/projects/identifiability/paper1.pdf where it is explained that when seemingly “anonymous” data sets of US citizens using persistent (static) pseudonyms are combined on zip code, date of birth and gender, up to 87% of the U.S. population can be identified by name.
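A minimal Python sketch of such a linkage attack follows. All records, names and values here are invented for illustration; the point is only that matching on shared quasi-identifiers re-identifies a record that contains no direct identifier.

```python
# "Anonymised" data set: direct identifiers removed, but the
# quasi-identifiers (zip, date of birth, gender) are retained.
medical = [
    {"zip": "20500", "dob": "1961-08-04", "sex": "M", "diagnosis": "flu"},
]

# Public data set (e.g., a voter roll): the same quasi-identifiers,
# but this time alongside names.
voters = [
    {"zip": "20500", "dob": "1961-08-04", "sex": "M", "name": "J. Smith"},
]

def reidentify(anon_records, public_records):
    """Join two data sets on their shared quasi-identifiers."""
    key = lambda r: (r["zip"], r["dob"], r["sex"])
    roll = {key(v): v["name"] for v in public_records}
    return [(roll[key(m)], m["diagnosis"])
            for m in anon_records if key(m) in roll]

print(reidentify(medical, voters))  # [('J. Smith', 'flu')]
```

Because a persistent (static) pseudonym plays exactly the same role as a retained quasi-identifier, the same join works against statically pseudonymised data.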
 Data Protection by Default is required under GDPR Recitals 78 and 108 and Articles 25 and 47. Data Protection by Default requires real-time, use case specific, fine-grained control over use of personal data. Be wary of vendors who highlight adherence to “Privacy by Design” principles but do not similarly state that they comply with “Data Protection by Default” requirements. They are not one and the same – the GDPR mandates the strictest implementation of Privacy by Design, which is Data Protection by Default.
 See footnote 2, supra.
 While “consent” under GDPR Article 6(1)(a) remains a lawful basis for processing personal data, the definition of consent has been significantly restricted. GDPR Recital 32 and Article 4(11) mandate that consent must be “freely given, specific, informed and an unambiguous indication of the data subject’s agreement to the processing of personal data relating to him or her.” These heightened requirements for consent under the GDPR shift the risk from individual data subjects to data controllers and processors. Prior to the GDPR, risks associated with not fully comprehending broad grants of consent were borne by individual data subjects. Under the GDPR, broad consent no longer provides sufficient legal basis for processing personal data.
 While “necessary for the performance of contract” is an available legal basis for processing personal data under GDPR Article 6(1)(b), Opinion 06/2014 of the Article 29 Working Party (WP29 Legal Bases Opinion) clarifies that availability of performance of contract as a legal basis must be “interpreted strictly and does not cover situations where the processing is not genuinely necessary for the performance of a contract.” Scenarios in the WP29 Legal Bases Opinion concerning limitations on permissible data processing clarify the limited availability of legal bases for data uses that are not genuinely necessary for a transaction (see http://www.dataprotection.ro/servlet/ViewDocument?id=1086 ).
 The requirements for GDPR Article 6(1)(c), “compliance with a legal obligation of a controller,” and Article 6(1)(d), “vital interest of a data subject,” to serve as valid legal bases for processing personal data eliminate them as viable legal bases for many secondary data uses. For GDPR Article 6(1)(e), “performance of a task in the public interest,” to serve as a valid legal basis, processing of personal data must be subject to GDPR safeguards ensuring that technical and organisational measures are in place, including GDPR-compliant pseudonymisation, that comply with requirements for proportionality and necessity under GDPR Recitals 4, 156, 170 and Articles 6(4), 24 and 35. “Legitimate interest” under GDPR Article 6(1)(f) may be a valid legal basis for secondary data uses if GDPR proportionality, necessity, and state of the art obligations are satisfied by complying with the new dynamic pseudonymisation requirements under Article 4(5) and the data protection by default requirements under Article 25.
 GDPR Article 6(1)(f).
 GDPR Article 6(4).
 See footnote 4, supra.
 See footnote 5, supra.
 GDPR Article 6(1)(a).
 GDPR Article 6(1)(b).
 See definition of “Linked Data” on page 9 of White Paper entitled Meeting Upcoming GDPR Requirements While Maximizing the Full Value of Data Analytics cited in footnote 3, supra.
 See definition of “Readily Linkable Data” on page 10 of White Paper entitled Meeting Upcoming GDPR Requirements While Maximizing the Full Value of Data Analytics cited in footnote 3, supra.
 Consent has been significantly restricted under the GDPR to require that it must be “freely given, specific, informed and an unambiguous indication of the data subject’s agreement to the processing of personal data relating to him or her.” These heightened requirements for consent under the GDPR shift the risk from individual data subjects to data controllers and processors. Prior to the GDPR, risks associated with not fully comprehending broad grants of consent were borne by individual data subjects. Under the GDPR, broad consent no longer provides sufficient legal basis for processing personal data. The WP29 Legal Bases Opinion clarifies that availability of performance of contract as a legal basis must be “interpreted strictly and does not cover situations where the processing is not genuinely necessary for the performance of a contract.”
 Article 15 - Right of Access; Article 16 - Right to Rectification; Article 17 - Right to Erasure/Right to be Forgotten; Article 18 - Right to Restrict Processing; Article 19 - Notification to Data Recipients of any Rectification, Erasure, or Restriction of Processing; Article 20 - Data Portability; Article 21 - Right to Object; and Article 22 - Exclusion from Automated Decision-Making/Profiling.
 See footnote 10, supra.
 GDPR Article 6(1)(e).
 GDPR Recital 26 stipulates that “The principles of data protection should therefore not apply to anonymous information, namely information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable. This Regulation does not therefore concern the processing of such anonymous information.” The WP29 published an opinion on anonymization techniques (available at http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp216_en.pdf ) (WP29 Anonymization Opinion) which includes, inter alia, three criteria for assessing the efficacy of anonymization techniques – i.e., the inability to use an “anonymized” data set to (1) single out, (2) link to, or (3) infer, the identity of a data subject. If these three criteria are met, a data controller is on the “safe side.” If they are not met, anonymization may still be possible, but the data controller must conduct a risk analysis to verify that the risk of re-identification is sufficiently low; additional safeguards and techniques may be required. Controlled Linkable Data uniquely enables “Privacy Rights Management for Individuals” (PRMI), which allows “anonymous” data to be re-linked under tightly controlled conditions – see discussion on pages 10-11 and pages 14-24 of the White Paper entitled Meeting Upcoming GDPR Requirements While Maximizing the Full Value of Data Analytics cited in footnote 2, supra.
 See footnote 1, supra.
 GDPR Articles 11(2) and 12(2).
 Prior to the development of Controlled Linkable Data, the state of the art in privacy technology consisted of tools to support generalized statistical analysis. Traditional technologies leveraging “Privacy Enhancing Techniques,” or “PETs” (e.g., k-anonymity, l-diversity, t-closeness and differential privacy), enable data controllers/processors to use isolated, protected data sets as compliant stand-alone data resources. By supporting generalized statistics, these technologies help provide insights into high-level trends, demographics, etc. These protected data sets are considered “safe” because they are purportedly unlinkable – not capable of being linked back to original data sources or to data subject identities. The term “anonymous” is sometimes used in connection with these data resources. Data controllers/processors should be wary of combining these isolated, protected data sets with other sources of data for secondary purposes, or of re-linking data to original data sources or to data subject identities, since such uses of personal data are not lawful under the GDPR and are a principal type of processing the GDPR seeks to regulate. Privacy solutions premised on traditional PETs that enable use of “anonymized” data protected against re-identification may comply with the GDPR and may even fall outside its jurisdiction. However, once a data controller/processor attempts to link results from generalized statistical analyses back to original data sources or to data subject identities, new GDPR requirements must be satisfied or the data use is unlawful. This requires dynamic pseudonymisation and fine-grained control over data on a per-use basis, which generalized statistical technologies do not support. The principal reason for this shortcoming is that PETs were designed to protect the privacy of data within a data set, but not between and among data sets outside of the “controlled environment” in which they work. They do not comply with new GDPR standards for pseudonymisation and data protection by default.
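To make the within-data-set scope of such PETs concrete, here is a toy Python sketch of k-anonymity, one of the techniques named above. The generalisation rules (truncating zip codes, banding ages) and the sample records are hypothetical, and real implementations are far more sophisticated.

```python
from collections import Counter

def generalise(record):
    # Coarsen the quasi-identifiers: truncate the zip code to its
    # 3-digit prefix and replace exact age with a ten-year band.
    decade = record["age"] // 10 * 10
    return {"zip": record["zip"][:3] + "**",
            "age": f"{decade}-{decade + 9}"}

def is_k_anonymous(records, k):
    # k-anonymity holds when every combination of quasi-identifier
    # values appears in at least k records of this one data set.
    counts = Counter((r["zip"], r["age"]) for r in records)
    return all(c >= k for c in counts.values())

raw = [{"zip": "20500", "age": 34}, {"zip": "20502", "age": 37},
       {"zip": "20510", "age": 31}, {"zip": "20599", "age": 38}]
coarse = [generalise(r) for r in raw]
print(is_k_anonymous(coarse, k=2))  # True
```

Note that the guarantee is a property of this one data set in isolation: nothing in the check above constrains what happens when the records are combined with an outside data source, which is exactly the between-data-set gap described in this footnote.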
 The Anonos BigPrivacy dynamic de-identification systems, methods and devices that support GDPR-compliant dynamic pseudonymisation and data protection by default requirements are covered by foundational granted patents (including, but not limited to, U.S. Patent Nos. 9,631,481; 9,129,133; 9,087,216; 9,087,215; and 9,619,669) and a portfolio of over 50 pending U.S. and international patent applications.