Sylwester Frazzoni (VIXIO) At this point, let's turn to Gary. And Gary, please share with us your experience of helping companies comply with GDPR.
Gary LaFever (Anonos) Thank you, Sylwester.
![Words Alone vs New Technology Controls]()
So, the question is often raised: To what extent is my investment in GDPR compliance going to be something that helps me with non-EU data protection laws? And this comes up in a number of contexts. Certainly, US data privacy laws, but data privacy and data protection laws are growing in popularity and are coming into force around the globe. But I'd actually like to focus on something that Sylwester said earlier, which is:
“How do we avoid the unpleasant surprises as we move towards the application of these rules and principles beyond the GDPR?” And for that, I actually want to talk about the GDPR for a moment. To do that, I want to focus on the difference between what I would call table stakes - the baseline data you need for your business - and the data you use to make your company more successful. The table stakes are really represented by your transactional data. So, it's the debits, the credits, the payments, and the gaming if you're in the gaming industry. But if you think about it, if the only data that you have to process comes from the transactions you have already concluded, that's a flatline revenue model. Where people really use data to improve their operations, to distinguish themselves from competitors, to break into new communities and new markets, to bring new offerings, and even to develop next best actions for their existing clientele is through repurposing data: secondary processing, analytics, AI, and machine learning. And people are just now starting to realise that even under the GDPR this requires special treatment.
And so, this slide talks about which team you are on. And in this instance, red and blue are not meant to refer to conservatism or liberalism. This is actually a reference to Rock 'Em Sock 'Em Robots, a game that children played and that was very popular back in the 60s. But the reality is that, as with many games, there are winners and losers, and the losers here are the people who tried to repurpose data, who tried to do analytics, AI, and machine learning based on words alone. By words alone, I mean terms of service, contracts, or treaties, because new technology controls are now required. And this is important not just under the GDPR but also under these new laws in the US. Both the California and Virginia laws speak to de-identified data, and sometimes they even use the European term Pseudonymisation. Those types of data must be technologically transformed; that transformation cannot be accomplished using words alone. Next slide, please.
![Maturity of At-Use Technical Controls Determines Schrems II Compliance Strategy]()
So, what we encounter with our clients - regardless of whether they are US companies, EU companies, Asian companies, or companies anywhere around the globe - is that if they're processing EU data, much of their activity and attention was focused on what I would call primary processing. They did not move to the next level of having the technical controls in place that are necessary for lawful secondary processing, and they're now hitting a wall because of that, which I will quickly talk about. This is the kind of wall that can be avoided both in the EU under the GDPR, around the globe, and specifically with regard to US laws. So, let's talk about the GDPR very quickly. There is an obligation for all processing - both primary and secondary, both internal and external to the EU - to practise Data Protection by Design and by Default, as well as Purpose Limitation and Data Minimisation. Those all require technical controls that enforce the words. Now, most of the rest of this webinar is going to be about words, and those words are powerful - I'm not demeaning their relevance or how important they are. But specifically when it comes to secondary processing of data - the data that's going to make you more successful, give you an edge over a competitor, and enable you to break into a new market - you must have these technical enforcements of those words. That's what we call Step 1. Once you have Step 1 taken care of, you can then move to Step 2. Under the GDPR this is called Legitimate Interest Processing. And under the GDPR, without technical controls, you literally cannot conduct analytics, AI, and machine learning on your customer data. You can technically do so, but it will be unlawful. Why? Because consent and contract are so narrowly constrained and interpreted under the GDPR that you don't have the right to do so. But if you have these technical controls, you can use another legal basis called Legitimate Interest Processing. And I want to emphasise that legitimate interest does not mean you have a legitimate interest in the outcome of the processing. Rather, it requires you to have the technical controls in place as part of the process, to show that you've sufficiently reduced the risk to the data subjects. Then, in the balancing of interests test, your interest prevails.
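To make the idea of "technical controls that enforce the words" more concrete, here is a minimal illustrative sketch in Python of a purpose-limitation and data-minimisation gate. The purpose names, field names, and the gate itself are assumptions made for illustration; this is not a description of Anonos's product or of any design mandated by the GDPR.

```python
# A toy purpose-limitation / data-minimisation gate. This is an assumption
# about how "technical controls that enforce the words" might look, not a
# description of any particular product or a GDPR-mandated design.
ALLOWED_FIELDS_BY_PURPOSE = {
    "billing": {"customer_id", "invoice_total", "payment_method"},
    "analytics": {"age_band", "region", "monthly_spend"},  # no direct identifiers
}

def release(record: dict, purpose: str) -> dict:
    """Return only the fields the declared purpose permits; refuse unknown purposes."""
    if purpose not in ALLOWED_FIELDS_BY_PURPOSE:
        raise PermissionError(f"No processing purpose configured for: {purpose!r}")
    allowed = ALLOWED_FIELDS_BY_PURPOSE[purpose]
    return {field: value for field, value in record.items() if field in allowed}

customer = {
    "customer_id": "C-1001", "name": "Ada Example", "age_band": "35-44",
    "region": "EU-West", "monthly_spend": 120.50,
    "invoice_total": 240.00, "payment_method": "card",
}

print(release(customer, "analytics"))  # identifiers never reach the analytics job
```

The point is simply that the restriction is enforced in code at the moment of use, rather than existing only as a sentence in a contract or privacy notice.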
Which then takes us to Step 3 - Schrems II and international data transfer. If you haven't heard of Schrems II, I suggest you Google it now. And in fact, if you're interested right now, as you're sitting in front of your computer, you can go into LinkedIn and search for Schrems. Within the last 3 months, almost 5,000 of your colleagues - very senior General Counsels, Chief Privacy Officers, Chief Compliance Officers - have joined a Schrems II LinkedIn Group to stay abreast of this specific issue, which is: what technical controls do you need in order to process data lawfully? So, how does this tie into the US? Well, the US has more liberal interpretations of the lawful bases for processing. However, when customers and clients exercise their right not to have their data sold, or ask you to delete their data, that is incredibly disruptive to the data sets you want to perform analytics, AI, and machine learning on. And these technical controls can actually give you the right to continue processing a transformed version of that data - what under the GDPR we call pseudonymised data.
![98%: GDPR Pseudonymisation Enables Schrems II Compliance for Vast Majority of Use Cases]()
The importance and relevance of this last slide is that the vast majority of the processing that you want to do to gain an advantage over your competitors - secondary processing, repurposing of data, analytics, AI, and machine learning - can in fact be done using this technologically transformed data. Again, under the GDPR, that's called Pseudonymisation. It's called other things under US laws - typically, de-identification. And in other global laws, it's called yet other things, but it's always the same test: do you have technology in place that enforces the policies? And I can tell you right now, this is not the technology you already have. This is not encryption. Encryption does not protect data when in use. It protects data on the way to being used - in transit. It protects data when it's waiting to be used or after it has been used - at rest - but not while in use. So, these new technical controls that enforce the words, which you'll hear more about through the balance of this webinar, are absolutely critical. And then, on the periphery, when you're done processing the protected versions of data, you need to reach out to the actual customers or clients. You see some examples here, but other examples would be when you have to conclude the payment with the customer or when you reach out with the next best action or an offering to a new customer, and there are typically ways you can do that if, and only if, you've enforced the necessary technical measures.
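As a rough illustration of how pseudonymisation differs from encryption at rest or in transit, here is a minimal sketch in Python. The field names and the keyed-hash approach are assumptions for illustration; GDPR Article 4(5) and the ENISA guidance set the actual requirements, and this sketch does not claim to satisfy them.

```python
# Illustrative sketch only: replace direct identifiers with a keyed token so
# the working data set stays usable for analytics without revealing identity.
# Field names and the HMAC approach are assumptions for illustration.
import hashlib
import hmac
import secrets

# In GDPR terms, this secret is the "additional information" that must be
# kept separately and protected by its own technical and organisational measures.
PSEUDONYMISATION_KEY = secrets.token_bytes(32)

def pseudonymise(record: dict) -> dict:
    """Swap direct identifiers for a keyed token; keep the analytic attributes."""
    token = hmac.new(
        PSEUDONYMISATION_KEY,
        record["customer_id"].encode(),
        hashlib.sha256,
    ).hexdigest()
    return {
        "token": token,                 # stable reference, no identity revealed
        "age_band": record["age_band"],
        "region": record["region"],
        "monthly_spend": record["monthly_spend"],
        # name, email and customer_id are deliberately left out of the working set
    }

raw = {
    "customer_id": "C-1001", "name": "Ada Example", "email": "ada@example.com",
    "age_band": "35-44", "region": "EU-West", "monthly_spend": 120.50,
}

print(pseudonymise(raw))
```

Unlike an encrypted record, this transformed version can be analysed while protected; re-linking to the individual for the "periphery" steps Gary mentions (concluding a payment, sending a next-best-action offer) is only possible for whoever controls the separately held key.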
So, I'll leave you with one thing. If you haven't heard of Schrems II, I strongly suggest you check it out because a lot of companies are hitting a wall right now where they have to do processing internal to the EU only. They cannot use US operated clouds regardless of where they're located. And they cannot provide remote access to their data from anywhere outside of the EU without these technical controls. And the last thing I'll say is these technical controls tend to be an investment in new income streams. They're not just compliance based because they enable you to have greater sharing, combining, and international data transfers that you might otherwise not engage in. So, I will just rest by saying technical controls that enforce the words you're going to hear so much about on this webinar are actually an asset to your organisation and a requirement for the lawful use of your data to set yourself apart from your competitors and enter new markets. Thank you, Sylwester.
Sylwester Frazzoni (VIXIO) Thanks very much, Gary. Thanks very much for this presentation. From your experience of working with companies, can you tell us how long it takes for a company to become compliant? It's a question that probably has 1,000 different answers, but if you were to give us a ballpark. And the same thing for costs. You know, in my introduction, I mentioned the figure of $9 billion - such an abstract figure. If you're Head of Compliance somewhere in the firm and you need to go to your CEO saying:
“Okay, there are all these new laws coming, we need to comply with them, and I need a budget,” you're not going to ask for $9 billion, right? So, can you maybe try to answer those two questions for us, please?
Gary LaFever (Anonos) Certainly. I want to go back to the Schrems II ruling. By the way, that was a ruling by, in essence, the Supreme Court of the EU - the Court of Justice of the European Union. It was issued in July of last year. It's a final, non-appealable ruling that holds you cannot process EU personal data in US-operated clouds, regardless of where the servers are, or provide remote access to that same information from outside the EU, without these new technical controls. And so, the issue with Schrems II is that Boards of Directors and C-Suites can be held personally and criminally liable for not having a compliant solution in place. And this is particularly difficult in the UK, because a lot of people don't realise this: the UK version of the GDPR actually has provisions that the EU version doesn't, which make the re-identification of de-identified data a criminal offence and can hold Executives and Board Members criminally liable for negligence if they don't act on compliance.
And so, when you go to the Board or the C-Suite to ask for that budget, what you want to do is not just tell them they may have an issue - which, by the way, they've had for 8 months - but that there is a solution, and these technologies can actually be put in place in as little as 24 hours. Let me explain what I mean by that. The supervisory authorities are very sensitive to the fact that most companies did not put what we call Steps 1 and 2 in place. They were focused on other things - inventory assessments and data protection impact assessments - not on how you control data when in use. And so, if you can show that you've started the process, they almost across the board will say:
“Great. Keep it up. I'm going to go to your neighbour, knock on their door, and see what they're doing.” And so, you actually can have versions of this technology available in private clouds, running on synthetic data, almost instantaneously. Now, that's the beginning of your process. And really, the question of how much it will cost comes down to this: Are you going to approach it from a compliance-only perspective? Or are you going to allow your data users - your actual Data Protection Officers - to work closely with your data innovators? Because what happens is they quickly determine that the same technical controls can enhance business opportunity. So, if an organisation were to use it only for compliance purposes, that could be 100,000 or maybe 200,000 a year. But what we find is that for the companies who then extend the opportunity to their business innovation teams, it may go to a million or two. But here's the secret: that million or two is more than financed by new business opportunities and revenue. So, again, you can have compliance almost instantaneously, and the amount of the cost will depend upon whether it's viewed just as a compliance tool or whether it's also extended to the data innovation teams so they can profit from its use.
Sylwester Frazzoni (VIXIO) The first question is: On the effects of encryption, is encrypted personal data considered to be pseudonymised personal data - not anonymised - under the GDPR where the processor holds the encryption key?
Gary LaFever (Anonos) I'll take a first shot at that. The answer is an unequivocal no. If you take a look at the European Data Protection Board, they came out with initial recommendations in November of 2020 in response to the Schrems II ruling of July 2020. And they actually set out seven use cases - the first five of which are lawful and the last two of which are unlawful. The first use case is encryption of data solely for the purpose of retention - storage. The third use case that's lawful is encryption of data solely for the purpose of transmission - not use. The second, the one that is sandwiched between them, is the lawful processing of pseudonymised data. And I hate to be wonky about it, but if you look at paragraph 80 and footnote 69, it is not encryption. And so, there's a lot of confusion. Article 4(5) of the GDPR redefined Pseudonymisation. It is not what you think it is. It is not what it was before the GDPR. It is a heightened requirement. The EU's cybersecurity agency, known as ENISA, has actually come out with a number of reports detailing what is required for something to be pseudonymised. You can actually see those at
www.ENISAGuidelines.com. So, unequivocally no. Encryption does not equal Pseudonymisation.
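To make the contrast concrete, here is a minimal sketch, again purely illustrative, of what encryption does and does not give you. It uses the third-party `cryptography` package and toy field names; it is not a legal test under Article 4(5).

```python
# Illustrative sketch only (requires the third-party `cryptography` package:
# pip install cryptography). Encryption makes data unusable until decrypted;
# whoever holds the key is one step away from fully identifying personal data.
import json
from cryptography.fernet import Fernet

record = {"customer_id": "C-1001", "name": "Ada Example", "monthly_spend": 120.50}

key = Fernet.generate_key()   # the processor holding this key holds the identities
fernet = Fernet(key)

ciphertext = fernet.encrypt(json.dumps(record).encode())

# While encrypted, the record is protected in storage or in transit, but it is
# useless for analytics - you cannot compute on ciphertext with this scheme.
# To use the data, the processor decrypts it back into identifying form:
restored = json.loads(fernet.decrypt(ciphertext))
assert restored == record
```

That reversibility at the point of use is why, in the EDPB use cases Gary describes, encryption covers storage and transmission, while the pseudonymisation use case requires a different, heightened transformation.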
Jill, I really liked your comparison to the environment. I totally agree: it is no more possible to restrict the negative effects of pollution to a given geography than it is to restrict the negative impacts of improper data use. And I agree that you're going to have a proliferation of state laws. One of the challenges with a federal law is national surveillance. And that's actually what underlies the Schrems II case in Europe, where the Court of Justice said it does not like the fact that EU citizens are subject to surveillance by the US Government. But I want to point out something that's not discussed very often. Schrems II was decided in July. In October, the same Court, the Court of Justice of the European Union, actually ruled against Belgium, France, and the UK - which at the time was part of the EU - saying they don't have the right to maintain unlimited logs through their telecommunication systems for surveillance.
And so, I really do think there's a tension between data protection, privacy, and national surveillance, and until that is resolved, I don't know that a US federal law, even if one passes, would solve the issue. At least in the EU - and again, in the same report I talked about, the EDPB report that came out in November of last year - they do recommend Pseudonymisation as a means of processing that is not subject to surveillance. So, yes, it remains personal data. Yes, you can continue to get 100% utility and fidelity in everything you need, but it does not reveal identity. So, I happen to think that technical controls enforcing all these words we're talking about - treaties, laws, terms of use - as they continue to advance, will enable us to balance the interests in data against the benefits to the population. I mean, who could ask for a better example than the pandemic and the need for cross-border data transfers and processing? But I think we're going to have to have technical controls that make a resolution of these conflicting objectives possible.