Technical & Organizational Controls for Lawful AI & Secondary Processing When Consent is Not Enough
CPDP Brussels

Presentation Transcript
Magali Feys
AContrario.Law
Gary LaFever
Anonos
Steffen Weiss
German Association for Data Protection and Data Security
Giuseppe D'Acquisto
Italian Data Protection Authority
Ailidh Callander
Privacy International
Panel:
The unpredictable and sometimes unimaginable use of data in AI and other secondary (further) processing is a feature, not a bug. For these data uses to achieve their full potential, safeguards must ensure that fundamental rights are protected while still fostering an environment that encourages innovation.

This panel will cover how proper implementation of technical and organizational controls can help to support alternative approaches to lawful data use [e.g., GDPR Articles 6.1(f), 6.4, 9.2(j) & 89(1)] when consent is not available – e.g., when processing cannot be described in advance with required specificity.

This panel will touch upon the benefits of Pseudonymisation and Data Protection by Design and by Default - as newly defined under the GDPR - to reconcile conflicts between protecting fundamental rights and achieving societal objectives of using, sharing, combining and controlled relinking of personal data for authorized AI and secondary processing.

  • AI and secondary (further) data processing have the potential to advance societal goals.
  • Consent is often not available as a legal basis for AI and secondary (further) processing.
  • Technical and organizational controls can help to support alternative approaches to lawful data use [e.g., GDPR Articles 6.1(f), 6.4, 9.2(j) & 89(1)] when consent is not available.
  • Pseudonymisation and Data Protection by Design and by Default - as newly defined under the GDPR - provide examples of effective safeguards and controls.
Gary LaFever (Anonos):
I want to start off by letting you know, we had dinner last night, the panelists, and first asked ourselves what we wanted today to be and also, what we did not want this panel to be. And so, first off, what is this panel intended to be?

It's meant as truly an instructional dialogue on how you can use data for even greater innovation in compliance with the law using technical safeguards.

What it's not intended to be? This is not intended to be a discussion on whether consent is or is not the appropriate legal basis. We all unanimously believe that when consent can be supported and appropriately implemented, that it is the absolute best legal basis.

This panel deals with those situations where consent just does not work and we're going to go into reasons why that could be the case. The other thing this panel is not is it's not about anonymization and the shortcomings of attempts to do so in the past.

The Netflix example, the AOL example, even more recent examples, actually deal with older technologies. This is about newer technologies that are specifically called for, recognized and even rewarded within the GDPR.

Primarily, pseudonymisation — and pseudonymisation is not pseudo-anonymization — which is newly defined in the GDPR. Pseudonymisation is a new means of actually protecting data in a way that can maximize its utility.

And the last thing this panel is not is it's not a discussion of technology for technology's sake. There is no silver bullet. There's no golden shield. There's no magic wand that you wave and data use becomes lawful. But rather, using the right safeguards in the proper way, data controllers can extend and expand the right to process data. So, that's what we'll be talking about today.

My name is Gary LaFever, and I'm the CEO and General Counsel of Anonos. We have a booth next door; you're welcome to come by. I'm proudly wearing a shirt that says, “Can you rely on consent to reuse data?” and it has a die on it. But innovation is too important to have a roll of the dice determine whether you can use data or not. So, that's really what we're talking about here.

And again, this is about data use, leveraging safeguards to maximize both the ability to share, combine, innovate with the data in a way that is lawful. And so, hopefully, you come away with some new information.

Again, this is new approaches that are actually specifically enumerated, recognized, and rewarded within the GDPR. So I think you'll enjoy the discussion today.

I want to start first with introductions. Each of the panelists is going to introduce themselves and give you some background and some preliminary perspectives. Then we're going to go through eight slides to give some further background. After that I have some questions for the panelists, and the last questions will come from the audience.

So with that, Giuseppe, please.
Giuseppe D'Acquisto (Italian Data Protection Authority):
Thank you very much. Good morning, everyone. My name is Giuseppe D'Acquisto. I work for the Italian Data Protection Authority, the Garante. I'm a technologist, and I think the presence of a technologist on this panel is justified by the fact that the additional controls mentioned in the title of the panel mostly have to do with technological safeguards. We have legal safeguards, and on top of those we can put technological safeguards, but with the caveat that technology is not a substitute for legal safeguards. It cannot do miracles.

You have to understand the limitations of technology in order to benefit from the many possibilities that technology adoption enables. True, it can do a lot of things, but it cannot, for instance, be an easy way to solve legal issues.

So, let me elaborate on a couple of points, because I envisage two modes in which technology can have a role in enforcing these safeguards. One is the role of technology in the primary purpose, and the other is its role, and its limitations, in secondary purposes.

Let's take pseudonymisation as a paradigm for technology, but we can mention other technologies that more or less work the same because the goal of those technologies is to downsize the risk. I can mention secure multiparty computation. I can mention differential privacy. They are all valid tools to reduce the risks.

Now, first of all, technology is not a way to circumvent consent. So, you should not interpret technology this way. I put pseudonymisation, so this is a way to circumvent consent. No. This is not the role of technology. Technology is not a legal basis per se. Technology does not transform a fake purpose into a specific purpose. You need to have a specific purpose in order to process data and then on top of that, you can add new effective safeguards according to what Article 25 says.

Technology is not the purpose. I cannot say, I put in place this processing operation because I want to pseudonymise data or I want to anonymise data.

In the first instance, this is not possible. This is not the reason why you process data. Hence, it is not a legal basis. You cannot use data on the basis of the fact that you have put in place these technology-based safeguards and measures.

So, the room for technology is not in this area. There is nevertheless wide room for the adoption of technology. For the primary purpose — and here I am under a disclaimer: what I am saying does not necessarily reflect the view of the entire Data Protection Authority on this issue, I speak in my personal capacity — I think technology can play a very important role in the case of legitimate interest. There, I think technology is a real talking point.

You might have an interest — of course, an interest that is for profit, for economic reasons, perhaps not interpreted exactly the way the European Court of Justice has interpreted it in its rulings. So, there is a private interest of the controller, but there is also a general interest coming from the processing of the data. Nevertheless, in this processing of the data, the interest of the controller must still be weighed against the rights and freedoms of individuals.

There, technology can play a role in mitigating the risks for individuals, to the extent that the balance tips more in favor of the individuals, enabling the processing. In the other case I mentioned at the beginning, there was no room for technology: you can have all the technology in the world in your hands, but it does not enable the processing.

In legitimate interest, I think there is room for introducing new effective safeguards that can enable the processing, to the extent that the risks are minimized and the position of the individual in the processing is correctly taken into account.

I think that we have to reflect and explore this possibility for reasons that I will touch briefly in a minute because if we don't do that, probably the context of where the processing operation takes place might undermine the validity and the effectiveness of other legal safeguards.

But before going into these aspects, let me elaborate on the role that technology can have in secondary purposes. I think the context there is more flexible. In secondary purposes, we have controllers that have already fulfilled their obligations. They are compliant, they have an appropriate legal basis for the processing, and they are reflecting on the possibility of reusing the data.

The legal framework, so far, gives controllers many alternatives, so let's look at the role of technology in those alternatives. We have consent as a way to enable further processing, and there is a role for technology here. We have compatibility or incompatibility of purposes. And there is a third option, which is anonymization of the data.

Now, what’s the role of technology here? If we look at anonymization, the role of technology is to, let's say, downsize the risk to the extent that the utility of the data is wasted in a sense. So, we might wonder if we could not give more chance to technology rather than giving technology the opportunity just to push the operation far from the orbits of the GDPR. It's not an inclusive way to implement technology. It’s an exclusive way. You use technology to minimize the risks, but also to minimize the utility of data.

In compatibility, there is a role for technology which mirrors pretty much the role technology has in legitimate interest. You might have a purpose which is not blatantly incompatible. There is room for introducing more safeguards, and you can use all of those techniques — pseudonymisation among others, but also secure multiparty computation — in such a way as to make the processing less incompatible, to the extent that you can say, “This is something that is still in the orbit of the GDPR, that has effective safeguards for the individual, and in this way I enable the processing operation.”

This is very important to me, because if we don't do this, if we don't give these opportunities to technology, the asymmetry of information will prevail. Meaning that the individual is not always fully empowered to understand the processing operation and take an informed decision. The majority of processing operations take place in a context of asymmetry of information.

If we look also at the enforcement powers of DPAs, information asymmetry has a role here. So, if we don't consider this very relevant aspect and we just rely on consent in contexts where consent might not really be very effective, we just, let's say, offer theoretically very strong safeguards that cannot be exercised by individuals or enforced by DPAs.

So, we should reflect and explore the possibility to give more chances because incentives, I think, is the key word here. And incentives can be built on top of what the law says.

If you look at Article 6(4), there is room, there is margin, for technology. In order to assess compatibility you can take into account, inter alia, safeguards such as — this is what the law says — pseudonymisation and encryption. But it is mentioned only inter alia, not as the most relevant aspect. Maybe we should emphasize it more, in order to attract these processing operations into the orbit of the GDPR and, let's say, shine a light on the asymmetry of information, which is the real enemy of the adoption of technology — especially these technologies that can play a very important role in bringing benefits to individuals. Just to frame the data [inaudible].

Thanks.
Gary LaFever (Anonos):
Ailidh, if you can please introduce yourself and give some preliminary comments. Thank you.
Ailidh Callander (Privacy International):
Thank you. My name is Ailidh Callander. I’m a lawyer at Privacy International. Privacy International is an international NGO based in London, but we work with our network and partners around the world to challenge corporate and state surveillance and data exploitation.

In the end, what we want is a world where technology empowers us and enables us and isn't used to exploit data for profit and power. When we look around the world today, this is unfortunately not what we see. Rather, quite the opposite.

Therefore, without action, we are not going to see this world. Data protection law is one of the tools that enables the protection of our rights, from the whole range of civil and political rights to social and economic rights. And this makes it more important than ever to consider: how do we make these rights, and the data protection law that protects them, a reality?

And one of our concerns is that, when we look at a number of business models that exist today, they challenge our fundamental rights. Yet these business models remain the same despite changes in the law, and act as if it's business as usual, rather than really stepping back and questioning the societal impacts of the way they use data and the impact of their processing.

Yet the law offers opportunities to change that. For example, data protection by design and by default, what does that even mean in practice? If that really was taken seriously, could it really change the way that business is done?

And just to give some examples from the work we've done over the last year and some of the business models that we're concerned about and then to just go and talk about why technological safeguards are important.

So, following on from the GDPR coming into effect, we were concerned with data brokers and the wider AdTech industry, and how these industries continue to exist and continue to act, as I said, as if it's business as usual in many ways — despite huge changes in the law, and despite it being very clear that you shouldn't be maximizing the amount of data you have for the maximum number of purposes. That's not the essence of the law, and in many circumstances they are the antithesis of what the law calls for.

And when we look at a number of the companies we ended up complaining about, we still see a fundamental misunderstanding of a number of the points that Giuseppe mentioned — for example, legitimate interest. Is it taken to mean the legitimate interest of those businesses, and that anything goes in terms of what that interest might be, without serious consideration of the impact on the rights of individuals, but also more widely on society? That's where other organizational tools come in, whether it's data protection impact assessments, legitimate interest assessments, all these kinds of things. But these were not demonstrated to us. How rights were taken into the balance is not adequately demonstrated today.

And this has continued in the work that we've done since. We've seen, for example, that the majority of apps on our phones share vast amounts of data. We looked particularly at menstruation apps, which hold some of the most sensitive data possible, and with many of these businesses there is automatic sharing of data, in some cases before you've even opened the app.

And then another example: we all use the web to search for information about our health — about, for example, mental health. And when we looked at mental health websites in the UK, France, and Germany, we found that 75% of these websites have trackers for marketing purposes.

So, this goes back again to the question of, what is the purpose of the processing that you're talking about? What is it that you're doing? What are the consequences for rights and what are the consequences for society? And what can be done about that?

I think, first, you need to take that step back and question: what is the processing we're talking about? What is the impact of that? Then, is there a possibility to mitigate the risk or not? And thirdly, there is the issue of the ongoing implementation and enforcement gap when it comes to these laws.

So, to move forward here, we need that stepping back, that consideration of rights at the very beginning, and investment in positive technological solutions — as opposed to the overwhelming use of technology to do quite the opposite: not to enable and protect us and allow us to use technology in a way that we can trust and that empowers us, but rather to exploit.

So, investing in different technological safeguards and questioning the societal impact of data processing are two of the key points that can really move things forward towards a more positive vision of the future, rather than some of the negative consequences that we currently see.
Gary LaFever (Anonos):
Thank you, Ailidh. Steffen, if you would please.
Steffen Weiss (German Association for Data Protection and Data Security):
Thank you, Gary. My name is Steffen Weiss. I'm with the GDD, which is a German data protection association. I met Gary in Berlin at an ENISA workshop on pseudonymisation, where I got the chance to present a piece of work I did for the GDD. The GDD has many members — they can be DPOs, they can be companies.

So, what we do, on the one hand, is advise with regard to the GDPR and the national derogations. It's pretty much a hands-on job of analyzing the law and interpreting the law.

Sometimes we are able to do a bit more, which suits the hands-on mentality I like. Germany has something called the Digital Summit. Our government organizes this summit once a year, and it is made up of so-called focus groups. The government asks those focus groups to contribute: please help us make this a good summit, put a good topic on the table.

They reached out to the data protection focus group, which I organize, and we thought about possible topics. Of course, we were listening to what our members were saying about where the problems are. And the main issue, of course, is what happens with a change of purpose? What are the consequences? Because there are consequences from the transparency perspective and from the legal basis perspective.

And so, at some point, we came up with the idea of doing something with regards to pseudonymisation, which happened in 2017 or quite some time ago.

So, why did we pick the topic of pseudonymisation? One reason was the change of purpose of processing. Another one was maybe because many, many people were asking us, what about anonymity of data? Anonymous data? Can you do something with this topic?

And somehow, we were tired of talking about anonymous data, because converting data to a state that can be called anonymous is a big adventure, and it's challenging for everyone.

Besides, our DPAs tend to interpret when data is anonymous in a restrictive way — which I can fully and totally understand. Another thing they ask for is a lawful basis if we want to make personal data anonymous.

So, we got a bit tired of the whole topic of anonymous data. And why did we want to talk about pseudonymous data? One of the reasons was that in our Federal Data Protection Act, which was in place before the GDPR, we had a definition of what pseudonymous data is.

So, the Germans always claim that they have a long history regarding the processing of pseudonymous data, and during the project of reforming data protection in Europe they also tried to push this topic in the negotiations taking place in Brussels.

So, we picked the topic for this reason. The other reason was that there was a lack of understanding of pseudonymisation, of pseudonymous data. On the one hand, the legal definition changed from our Federal Data Protection Act to the GDPR. On the other hand, the benefits of pseudonymisation came up short in the discussions.

We distinguished different functions — we can talk about that later when we go into pseudonymisation a bit more deeply — but many, many aspects of pseudonymisation came up short and were not mentioned in the discussions. That's why we picked this topic.

We made our first paper, which was a whitepaper, a descriptive paper. Our group, by the way, included academics, industry, the GDD, and DPAs.

So, we made this whitepaper, a descriptive, problem-raising paper. What is pseudonymous data? We mentioned anonymous data to try to give it its own definition, to make people aware of this type of data. Then, we tackled some technical challenges when it comes to pseudonymisation, as I have said.

And, as you can tell, a descriptive paper is much easier to do than a more standardizing paper, which is what we attempted in 2018. We gave guidance on pseudonymisation, trying to standardize the whole issue, because what was happening — at least in Germany, and I assume in Europe as well — was that many, many controllers and processors had their own approach to pseudonymisation, and we wanted some common goals, some standardization process, a document telling people, “Okay, those are the checkpoints you need to go through if you want to pseudonymise data.”

So, we had the 2018 guidelines, and then we wanted to go a step further towards standardization. So, last year we drafted a code of conduct for pseudonymisation.

As you all know, codes of conduct are regulated within the GDPR, as they were under the Directive before it. They are still not very popular in Germany, but they are a big, big opportunity to standardize, harmonize, and make data processing more transparent to others.

So, we tackled this challenge of drafting a pseudonymisation code of conduct, and let me give you some brief information about how this code of conduct works.

In my view, this draft code of conduct is an accountability measure. We have the principle of accountability in the GDPR, and the code makes the decision-making transparent as to which pseudonymisation technique you are using and want to implement.

So, it is a big feature for us, because what we also tried in earlier discussions was to say, “Okay, you have this type of data, this risk classification of data, so you have to implement this kind of pseudonymisation measure.” That turned out to be impossible to draft into the code, because each processing operation is really context-based, an individual thing. So, that did not work out.

Still, we ask each controller and every processor to make their decisions transparent and to go through checkpoints, and those checkpoints include, for instance, the risk class of the data and the purposes of the processing.

So, if a DPA wants to know more about a pseudonymisation process, it can see the steps, the assumptions, and the conclusions the controller has reached. So, it's a big feature for us.

And then part of this code of conduct is about technical aspects of pseudonymisation, but more from a best-practice perspective. So, we give advice on how you should pseudonymise data and what kinds of technical aspects you have to keep in mind — for instance, enough entropy in the pseudonyms so that an attacker cannot simply guess what the data was all about.
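To illustrate that best-practice point with a minimal, hypothetical sketch (this is an illustration, not an excerpt from the GDD draft code of conduct): deriving pseudonyms with a secret, high-entropy key, rather than hashing an identifier on its own, is one way to stop an attacker from simply guessing which identifier a pseudonym stands for.

```python
import hashlib
import hmac
import secrets

# Secret pseudonymisation key with high entropy, kept and managed
# separately from the pseudonymised dataset (e.g. in a key vault).
PSEUDO_KEY = secrets.token_bytes(32)

def keyed_pseudonym(identifier: str) -> str:
    """Deterministic for the same identifier and key, but unguessable
    for anyone who does not hold the key."""
    return hmac.new(PSEUDO_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def unkeyed_pseudonym(identifier: str) -> str:
    """Weaker alternative, shown for contrast: an unkeyed hash of a
    low-entropy identifier (a name, an email address) can be reversed
    simply by hashing candidate identifiers and comparing."""
    return hashlib.sha256(identifier.encode()).hexdigest()

print(keyed_pseudonym("jane.doe@example.com"))    # safe while the key stays secret
print(unkeyed_pseudonym("jane.doe@example.com"))  # an attacker can recompute this for guesses
```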

It’s still a draft code. Why is it still a draft code? Because as you all know, the GDPR asked for a monitoring body and at this moment, we are trying to assess the draft code. So, we are building good practices. We are asking controllers and processors to please provide us with your good practices when you're pseudonymising data.

We have a template. We provide controllers with this template, and they tell us how their pseudonymisation actually works. With this, we try to really assess this code of conduct and see whether it is working or not.

The code is also available in English if you want to get to know it better, and within this finalization process all feedback is very important to us. Once this code of conduct is, hopefully, finished, we will have some sort of standardization method, and controllers and processors can — they don't have to, but they can — adhere to this code of conduct to improve their decision-making process and to make people, especially data subjects, aware of which checkpoints they go through.

If we make all the right assumptions the code asks us to make, then in the end it is a sort of harmonization document and it makes the decisions more transparent. And I think we should all move in this direction to rationalize things.

I mean, I'm a lawyer as well. We do a lot of problem-raising, but at least, we want to do something to help everyone to work on this thing and have a common understanding and to have the same approach on this topic.

So, thank you.
Gary LaFever (Anonos):
Thank you, Steffen. Maggie, if you would please.
Magali Feys (AContrario Law):
Good morning, everybody. I’m Magali Feys, founder of AContrario, a law firm specialized in intellectual property, IT, and data protection. Well, I first started actually as a typical IP lawyer, but doing a lot of patent law always got my practice intertwined with technology because I think as an IP lawyer and an IT lawyer, you really have to also understand the products or the services your clients are delivering. Otherwise, it will be quite hard to draft up contracts.

So, from that, I always had an interest in technology being a big part of my law practice, and I was thrown into the deep pool of data protection and privacy when, at my previous law firm, we assisted the former Belgian Supervisory Authority in its legal battle against Facebook.

So, in that respect, I came into contact with data protection, and my practice changed further towards IT and data protection as I got more involved in that.

I also became part of the legal working party for the Belgian Minister of Public Health, where we focused on the entire e-health plan, with a definite focus on data protection in e-health and in mobile health applications, and also on how we could facilitate, within the framework, collaboration between health institutions and startups and scale-ups.

And of course, there you already hit a wall where you also need technology. In our practice we assist a lot of startups and scale-ups, and as a lawyer it was easy, when helping these companies comply with the GDPR, to say, “Yes, from a legal point of view, you have to do this and this and this.” But then you sit with a lot of the CDOs and they say, “Okay, that's really nice, that's the legal side. But what does it mean for our solution? What do we have to do?”

And then, of course, as a lawyer, you're just standing there. That's why, when I founded the law firm, I also got involved with data security experts and ethical hackers — in a past life — but really people who understand technology, software developers themselves, who could assist our clients with actually designing their solutions in a way that is data protection by design. Because the feeling I take away from a lot of startups is that they very much want to bet on technology as well.

First of all, because most of them are from a newer generation: they're all about security, but the level of privacy awareness among them is also very present, and they want to build something that doesn't go against the privacy rights of individuals. And so they are also thinking — of course, with an interest in profit — “Yes, if we have a mobile health application for a heart disease, that's our primary purpose, but we are collecting so much data from all our users that maybe that data will be very useful for secondary purposes, for other studies.”

So, maybe at a certain point, we will want to share that data. So, how can we already think today about building our solution in such a way that we can actually make it possible?

I think we should learn from the mistakes of the past; indeed, as Ailidh pointed out, a lot of businesses did it the wrong way. I always want to add some observations to that, because we also assist big companies, and I don't want to be naive and just say, “Yeah, they all just want to do good,” and that sort of thing.

I litigated against Facebook, so I do know the interests of some companies. But on the other hand, you have a lot of companies that carry this historical background: although we had privacy regulations and privacy rules, they were simply told, “Keep the data. Take as much data as you want, because you never know what you're going to need it for.”

Secondly, although there was a framework, there were no real penalties attached to it. So for a long, long time they were able to do whatever they did. And so it's very difficult when you come into a company: first you have to point out that they have to become GDPR-compliant, and then you have to say, “Well, the business you're doing, your model, doesn't work anymore, so maybe you want to rethink your model.”

Even in the beginning, in the early days of the GDPR when there were not a lot of guidelines, it was not that simple for a company to do so, and to convince management to do so. That's why I really thought that we also need guidelines, and we need solutions that make companies want to change their business models. And that's how, in my practice, I came into contact with Anonos, and they presented me their solution.

And I have to say, at first I was a little bit sceptical, as I am whenever somebody says, “I have a GDPR solution and it will solve your GDPR problems.”

Well, first of all, I have to say, Gary put me at ease in the beginning. He says, “It's not a silver bullet.” And I really loved that. You can apply the solution but you still have to comply with all the other aspects of GDPR. We only take a part of that. But it’s a pseudonymisation solution and when you really explained it to me, it really made sense.

And for me, it's really a solution that we can use in the industry, because a lot of players — health institutions, and the startups and scale-ups that want to work and collaborate with those health institutions — are waiting to hear: how can we do that in a way that doesn't infringe the law, and in a way that also fits within an ethical framework, which in the health industry is definitely a big concern?

And so, for me, pseudonymisation and the solution we have with Anonos is really — and I don't want to turn this into a sales talk — something people should take the effort to start thinking about. We need not only laws and policies; we also need technology, and the two go hand in hand. If we focus on GDPR compliance, both aspects are equally important, and one will not work without the other.
Gary LaFever (Anonos):
Thank you, Maggie. So, what's fascinating here is that if you look at the wide spectrum of panelists we have — from a data protection authority, to an international non-governmental organization (NGO), to a member state-focused association, to private practice — there are some common themes: the asymmetry of information; the fact that existing business models were sometimes predicated on practices that are no longer lawful; and the desire, in fact the need, for standards, codes of conduct, and things people can reference. In particular industries — healthcare especially — the ability to enable greater use of data could have amazing positive results.

So, we're going to go to eight slides right now real briefly, just walk through those, which will give some further context with this discussion. And then we have a couple of questions we want to ask our panelists. We're going to open it up to the audience.

So, there we go. This is my all-time favorite slide. I call it the goldfish slide. But it conveys something I think very effectively. The days on the far left-hand side, if they ever existed, are long gone.

You can't just process data because you possess it. And under the new laws, which are intended to balance protection and innovation, without new technical controls you're stuck in the middle column: those fish can't swim to one another, and there's limited air in each of the bags.

So, the way you maximize the utility of, and innovation with, data is actually the third column: implementing technical controls that actually enable greater use of data.

And so, there's the way to sell it to management, which is really what it comes down to, right? A lot of CEOs will ask, “Aren't I already done with the GDPR? Didn't that happen last year?” But the reality is that technical controls, pseudonymisation being one example, can actually extend and expand your ability to process data, expand your data-sharing partners, and enable new collaborative research capabilities that didn't exist before. So, that's the positive spin on this.

So, the next slide, please. I won't go into a lot of detail on each of these, but we like to give you detail in the deck for when you get it afterwards. Fundamentally, guidance and enforcement actions by data protection authorities have made it very clear that consent and contract will not support a lot of the desired repurposing, and that anonymisation is very narrowly defined.

The one in the middle often gets quite a strong response. A lot of people don't realize that in the UK, if you re-identify data that was previously de-identified, there is criminal liability — and it doesn't even require knowledge; recklessly processing it is enough. So, there are both positive reasons to do what we're talking about, which is to implement technical safeguards, and also loss and risk avoidance.

Next slide, please. So, this is a slide that helps some people — it depends on your perspective — understand the different types of processing, with a very simple example. Again, I'm not going to go into too much detail; you can refer to it when you get the slides afterwards. But it shows the difference between sharing of data when the data is an input to your organization (number one); primary processing, which is at the top (number two); secondary processing, at the bottom (number three); and then sharing of data when your data is an output to someone else.

Now, what's interesting is that these differences are not necessarily immediately apparent. Visa just paid several billion dollars for Plaid. They're going to ingest that data into their operations. That's number one — data sharing as an input to your organization. What was the legal basis on which that data was originally acquired? Is it compatible? Do you have to re-consent?

These are issues that people don't necessarily think through because the older perspective was, if I have the data as I possess it, I can process it.

Well, not necessarily. And going from number two, primary processing to number three, raises issues that people don't think about. I just want to touch upon one example on the slide. If I am a web-based video streaming service, I need to know whether you've paid your bills and how much you're paying and what level of my service you're entitled to and I need to deliver you the movies that you rented.

That's all primary processing. But the second I aggregate your data and compare it to other viewers’ data to make recommendations, I've now crossed to the secondary processing. What's my lawful basis?

Again, technology can help in this regard. And then the last one: data sharing externally is not just monetization of data, right? You can be sharing data with a party because they help you with analytics, AI, or ML, or as part of a consortium, or whatever it happens to be. It's at these boundaries, as you go from one of these to another, that technical capabilities and safeguards like pseudonymisation enable greater data use, innovation, and value.

Next slide, please. So, these are just some of the express statutory benefits in the GDPR for pseudonymisation. You hear a lot of people talk about privacy-enhancing techniques, and they're all very worthwhile and worth getting to know. But the term ‘privacy-enhancing techniques’ does not appear in the GDPR the way the term ‘pseudonymisation’ does.

If you satisfy the new definition of pseudonymisation — and we'll get to that in just a minute — these are some of the benefits you can avail yourself of. It helps you with all kinds of issues around further processing and reduced obligations, it can support scientific research, and, on the bottom right-hand side, the Article 29 Working Party specifically called out pseudonymisation as a means to help tip the balance in favor of the data controller when seeking to rely on legitimate interest processing.

So, again, these are not just abstract concepts. The GDPR specifically and explicitly recognizes pseudonymisation and even rewards proper pseudonymisation with expanded data use rights.

Next slide, please. So, here is how I like to describe pseudonymisation. If you can separate the information value of the data — here, ‘male’ and ‘middle-aged’ — from the identity — in this instance, a fictitious John Jeffries, who's 47 — and you can show that the only way to relink the two is by access to additional information that is kept separately (I refer to that as the courtyard), you actually get these expanded usage rights. But a lot of the traditional approaches to key coding, or what's sometimes called pseudonymisation or tokenization, do not satisfy this heightened requirement under the GDPR.
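As a rough, hypothetical sketch of that separation (purely illustrative; it is not the Anonos product, nor ENISA's recommended scheme): the information value travels under a random pseudonym, while the additional information needed to relink it to an identity sits in a lookup table kept separately, under separate controls.

```python
import secrets

# The "additional information" of GDPR Article 4(5): pseudonym -> identity.
# In practice this would live in a separate system under separate
# technical and organisational controls.
identity_vault = {}

def pseudonymise(record):
    """Split a record into information value and identity; only the
    separately kept vault can relink the two."""
    pseudonym = secrets.token_hex(16)  # random, unguessable token
    identity_vault[pseudonym] = {"name": record["name"], "age": record["age"]}
    info_value = {k: v for k, v in record.items() if k not in ("name", "age")}
    info_value["pseudonym"] = pseudonym
    return info_value  # usable for analytics without direct identifiers

def relink(pseudonymised):
    """Controlled re-identification: only works where the vault is accessible."""
    return identity_vault.get(pseudonymised["pseudonym"])

# The fictitious example from the slide.
record = {"name": "John Jeffries", "age": 47, "sex": "male", "age_band": "middle-aged"}
shared = pseudonymise(record)   # {'sex': 'male', 'age_band': 'middle-aged', 'pseudonym': ...}
original = relink(shared)       # {'name': 'John Jeffries', 'age': 47}
```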

So, you don't get the statutory benefits on the prior slide unless you satisfy the new statutory requirements and there's some fantastic guidance up there.

On this slide, I'm highlighting two ENISA reports. One from November, 2018 and one very recent in November, 2019 which has very specific recommendations to actually have compliant pseudonymisation.

So, again, statutorily recognized and rewarded with significant guidance that’s available from authoritative sources that enables you to have greater use and value from your data.

So, with that as background, I have a couple of questions I'd like to ask the panelists.

First off, why doesn't consent work? And the focus of this panel or this whole conference is on AI. Can you ever get effective lawful consent on AI when the specifics of the processing aren’t known in advance and perhaps can't even be described?

So, just very briefly: does anyone have comments on situations where consent — which is always the preferred method when it can be implemented correctly — simply won't work?

So, anyone have any particular thoughts on that? Please.
Giuseppe D'Acquisto (Italian Data Protection Authority):
I think there are two reasons why consent may not work. One is possibly a mismatch in the choice of legal basis: you use consent when you should have used a different legal basis, because there is an imbalance — for instance, in working environments, where consent is not entirely free because there is an imbalance of power between the controller and the individual. This is the classic case.

There are areas where consent is the appropriate legal basis, but nevertheless, it's not so effective. Why? Because of contextual constraints.

I mentioned this issue of asymmetry of information, which is important to me. Let's look in a bit more detail at what I mean. We live in an era of abundance of information: volumes of data double every two years, the cost of processing information halves every year and a half, and the cost of transmitting, storing, and processing information goes down and down.

And these things happen in context where you are not aware of the fact that someone can use your data and can collect your data. So there is asymmetry of information for the individuals. There's also an asymmetry of information that impacts the work of data protection authorities.

Numbers matter here. Let's look at the situation that we have in my country, Italy. We have four million businesses and public administrations. The majority of those businesses are controllers because they process personal data. Let's say, half of those are controllers, two million.

At the Italian DPA, we are 150 fonctionnaires. If you divide the minutes of work every year — not even the days, the minutes — by the number of controllers, each fonctionnaire could take care of each and every controller, each person and corporation, for three minutes a year. Three minutes, not more than that. Can you believe that this is a feasible way to enforce these rights?

So, consent is the best possible way to provide safeguards because everything is in the hands of the individuals, it’s full control, but there's no way to practically, effectively enforce those rights. That's why we need to give more chances to technology in order to rebalance the situation more in favor of individuals.

So, the legal basis is correct for enabling the further processing, and full control by individuals is the best way to provide safeguards. But what is defective in this framework is that individuals don't have a clue about the way controllers process their data, and the DPAs cannot always step in and, let's say, find remedies when things go wrong. Of course, we need to have a culture of risk, because if you give more space, more room, to technology, you have to admit that bad things can happen. Adverse events can happen.

But this is not a tragedy. The fact that adverse events may happen doesn't mean that they will happen. There are many things, not all of them technical, that can keep the distance between ‘may happen’ and ‘will happen’ as large as possible.

We have to create an environment where you have incentives to behave well — a framework of responsibility and liability, and maybe also compensation where applicable, for cases where adverse events do take place.

You can deal with those through remedies. You also need to look at the issues of adverse selection and moral hazard, because if by law, or by decision of a DPA, you say that technology enables the processing, and an adverse event can still take place, two things will immediately happen. First, when an adverse event happens, the controller will say, “It's not my fault. This happened because of a malfunction of the technology. It's not my fault.” And the other thing that will happen for sure is that they will push what they do a little bit further, a step more toward the edge: “I have the safeguards that technology gives in place; I will try to explore this.”

You have to manage all these things, which go beyond the strength of the storage mechanism or the access mechanism — they should work together. And I have to say that on the strictly technical side, technology works: state-of-the-art encryption provides a lot of safeguards in terms of the unintelligibility of data, protecting individuals from the possibility that others can break confidentiality.

There's a lot to explore on these environmental elements that make the processing of information, the technology more broadly adopted among companies and businesses.
Gary LaFever (Anonos):
That's a great segue to the next question, because of the scalability problem for data protection officers and authorities — and even internally, the ability to keep up with things: technology, when used to implement safeguards, is more readily auditable, which I think is part of what you're getting at. But Ailidh, I'd like to ask you about the importance of the right policies and procedures, and of transparency about what the technology implements. The term I'd like to use, which I've heard used quite a bit here, is ‘demonstrable accountability’.

So, the best technology in the world enforcing the wrong policies and procedures has no positive effect. Could you speak briefly on the need to have underlying assessments that are also very transparent, so that the technology can do a good job of enforcing them?
Ailidh Callander (Privacy International):
Thanks. I think that speaks a bit to what was covered earlier with the example of pseudonymisation and transparency over the techniques that can be used. Everything we've been talking about is a very broad topic — this question and the one before it — and particularly when we think of different AI systems and different AI applications, there is such a vast range of issues that will come up. There you have the issue of legal basis that was covered, and that's just one part of the GDPR. Then you have all the other principles, such as transparency and fairness.

So, how do we even begin to consider whether the requirements of the law are being met, both from a legal perspective and from a technological perspective, without that initial transparency? And that applies to all organizations, including the DPAs, companies, and society — a combination of lawyers, technologists, and others. It's really important to bring these together, because we need to be able to [inaudible] the claims of compliance.

And that's where [inaudible] and introduction of that requirement in GDPR raises the bar so much higher. The burden has changed. That shift of focus on, “Okay. It's not enough to say that you comply. You must do it.” And that's still lacking in many senses.

As a civil society organization, we try to interrogate the claims that companies and public authorities make. When you're looking at a public authority, you have access-to-information rights to try to interrogate: “Okay, how are you complying with the law? How are you taking into consideration individuals' rights and the societal consequences of what you're proposing?” — trying to get hold, for example, of data protection impact assessments, and requiring documentation that all the organizational and technical safeguards have been considered.

Those same access-to-information rights don't apply in the private sector. However, if a company is trying to lead by example, trying to use data processing for positive reasons, then opting for transparency is one step, and that could be something as basic as publishing impact assessments and publishing information about the technical safeguards you have used. And that's just a first step. But without it, it's really, really hard to know what's happening, whether it's lawful, and, as I said before, what the consequences are.
Gary LaFever (Anonos):
Thank you for that. Please, Steffen.
Steffen Weiss (German Association for Data Protection and Data Security):
The code of conduct in this context of pseudonymisation is another example of why we cannot talk about one solution that fits all — it's impossible. There are many, many pieces of the puzzle which you need to keep in mind and comply with.

In the code of conduct, we also rely on different pieces of the puzzle. We need a legal basis for the processing of personal data; a code of conduct can never make processing lawful on its own. It's just one piece, a method for keeping in mind everything that is important when we talk about technology, which is an important part of what we are dealing with, especially when it comes to AI.

Technology is just one piece of the puzzle, and within the code of conduct you cannot say which technology is best. You can just say, “Okay, here are some best practices and some guidance you need to keep in mind when using technology.” So we would never rely on one particular technology.

It acts as a mediator, something in between, combining many, many aspects toward the final goal of reaching compliance with the GDPR. And that is something a code of conduct cannot go beyond. We've been criticized for having the [inaudible] in this code of conduct working party. But I have to say, we're not making any processing by that industry or those controllers lawful. We just need industry within this working party because we need best-practice approaches. We needed to be made aware of how you actually deal with the topic of pseudonymisation, to learn from those approaches, and to match them against what we had in mind as important for a pseudonymisation process itself.
Gary LaFever (Anonos):
Thank you for that. So, we want to make sure we have the time for questions from the audience. So, I'm going to ask two more questions briefly. This is an interesting one which is, as we saw before, pseudonymisation is mentioned over 10 times specifically in the GDPR, but I want to highlight two.

It's highlighted within Article 25 as a means of effecting data protection by design and by default, and it's highlighted in Article 32 as a means of increasing and improving security. If there is a breach of pseudonymised data, it's possible that there isn't even a disclosure obligation, if in fact there is no likely damage to the data subjects.

So, again, one is an innovation-enabler, Article 25. The other is a protection mechanism, Article 32.

So just briefly, start with you, Giuseppe, if you want to comment on that. But I do want to keep it brief because I want to make sure we get some time for questions.
Giuseppe D'Acquisto (Italian Data Protection Authority):
This is a very relevant question, and it makes the work of policymakers really hard, because the fact that this notion is mentioned in two articles runs the risk of jeopardizing the beauty of pseudonymisation. When I say ‘we’ I also include policymakers and technologists within DPAs: we tend to interpret pseudonymisation mainly, or only, as a way to reinforce safeguards by making the data, I would say, more obscure — hiding the identity of individuals.

We have the same notion in Article 25 and the rationale of using pseudonymisation is to enforce, to make the implementation of principles more effective. We really need to explore more of this possibility.

From my perspective, pseudonymisation is also a way to generate identities. In Article 32 it is used to hide identities; in Article 25 it can be used to generate identities. Each of us may have plenty of identities: a master identity — my name and surname — and other pseudonymised identities that can be used either to make individuals more identifiable, if this provides more safeguards, or less identifiable, if that provides more safeguards and implements the data protection principles more effectively.

Through this lens, I think we should look at the principles in Article 5 of the GDPR and see whether we can make their implementation more effective — and there are many cases where we can be successful in this — meaning that we can give proof that we have not just invoked, or been inspired by, the principles, but have effectively implemented them in a way that can be demonstrated, in line with accountability.

In the talk I gave at the workshop where we discussed this for the first time, I tried to explain the cases where this can happen very effectively, and I also appreciate the work that ENISA is doing in making this concept available in simple terms to all controllers — not only as a security tool, but also as a privacy-enhancing tool.

So, stay tuned. I think that you will get more information on this aspect which is really relevant in the effective implementation of the GDPR in the context where we have this abundance of data, this easiness of reusing data.
Gary LaFever (Anonos):
Would anyone else like to comment on that? The difference between protection versus enablement?
Steffen Weiss (German Association for Data Protection and Data Security):
This came up short in the discussion when we talked about pseudonymisation, and the same in Germany. The discussion was always about the protective function, as we also called it in our papers. No one was thinking of the enabling function when it comes to further processing operations and changes of purpose.

I mean — stressing it again — pseudonymisation cannot answer the question of whether it is legitimate to change the purpose. But this enabling function is also possible, and very important.

I have one wish for the audience here. We're talking about the subject of pseudonymisation. I am asking each and every one of you to go and talk to your CDO and explain to them that pseudonymisation does not equal encryption. Part of a pseudonymisation process can be the fact that your data subjects can be re-identified.

Of course, this can be part of the pseudonymisation process. But you need guidance, and you need to make it transparent in your decisions in which cases there can be re-identification.

So, we should really try to make people more aware of the topic, especially enabling and protective function.
Gary LaFever (Anonos):
We've said it several times here, but ENISA has some fantastic materials. When you talk to your colleagues about pseudonymisation, I would suggest that you politely not assume that the person you're talking to is familiar with the new definition, because, again, much of what passed as pseudonymisation in the past is not pseudonymisation under the GDPR. And so, in order to get the statutory benefits, you have to satisfy the statutory requirements.

Maggie, a question for you, the overall focus of this conference is on AI. Do you think the GDPR puts the EU at a disadvantage for AI?
Magali Feys (AContrario Law):
That's the argument a lot of people want to make, and what we hear from industry, but I strongly believe that it doesn't. If I may, I want to take the Google Home example. You might know that Google Home also records entire conversations after you give your commands. And when two whistleblowers came out with this, a lot of people said, “There you go again. It's the GDPR again, and it's these rules that actually put a ban on innovation.”

Actually, I've talked to a lot of startups and scale-ups here in Belgium about what it takes to train the algorithm — because of course the industry needs to train its algorithms, certainly in natural language processing.

And they said to me — I'm simplifying a little technically; it's not completely the whole picture, but boiling it down — that in order to train a command algorithm, which is what Google Home and Siri and the like need, you don't need conversations, because those are for training a conversational algorithm.

And so, you don't need to record an entire conversation. So Google's explanation — “Yes, we need those recordings in order to train our algorithm for Google Home” — did not hold up.

And on the other hand, we saw that by using technology, our scale-ups and startups, in collaboration with universities, came up with a way of building into the hardware itself software that adheres to the principle of data minimisation: it sends to the servers and the transcribers only the part of the data that is really necessary — for example, for understanding dialects — and with that they were still able to train the algorithm in natural language processing.
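A toy sketch of that design idea, with the caveat that the function names and wake word below are hypothetical and the real systems described here are far more sophisticated: the filtering happens on the device, so only the minimal fragment needed for training ever reaches the server.

```python
from typing import Optional

WAKE_WORD = "ok device"  # hypothetical wake word

def extract_command(transcript: str) -> Optional[str]:
    """On-device step: keep only the spoken command and discard the
    surrounding conversation (data minimisation by design)."""
    for line in transcript.splitlines():
        if line.lower().startswith(WAKE_WORD):
            return line
    return None

def send_to_server(payload: str) -> None:
    # Placeholder for the upload; only minimal data ever reaches it.
    print("uploading:", payload)

transcript = "so how was your day\nok device play some music\nanyway, as I was saying..."
command = extract_command(transcript)
if command is not None:
    send_to_server(command)  # the conversation lines never leave the device
```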

So, my argument then was: you see, we actually are better, we are more innovative than Google. I think our role and our message, from Europe and maybe also from other countries and states, should be that innovation does not only come from the pursuit of profit and building solutions; innovation also needs to be built in from the start, and we also need to be innovative in order to protect human rights, privacy, and data subjects' rights.

And I think it takes more effort. Yes, it does. But on the other hand, I think we all want to live in a world where we still adhere to laws and ethical principles, rather than in some countries where you sit in meetings and they tell you, “Yeah, the camera in your hotel room, well, it records everything because we just can.” Honestly, I think we cannot set aside our ethical principles, even for business purposes.

So, for me, AI and innovation have to adhere to the GDPR, and if we do a good job of it, I think that will be innovative in itself and will not stand in the way of Europe being a global player.
Gary LaFeverGary LaFever (Anonos):
That actually makes me think of the third column in the goldfish slide, right? If you do things the right way, you can still accomplish your goal. So, before we move to questions from the audience, does anyone else want to speak to whether or not the EU faces a disadvantage for AI?
Ailidh CallanderAilidh Callander (Privacy International):
I would just add to that. It goes back to that question of what kind of society we want to live in. Having a data protection law, despite the implementation and enforcement gaps, and they are huge, is now more the norm than not. There are over 130 countries around the world with a data protection law, and that's because data sets the course for so many things that impact our lives. And I think what we can see is that if we don't take these issues seriously, it exacerbates inequality and damages democracy.

So, these are huge issues for society, and technology itself is not going away. So, what kind of society do we want to live in, and what's the positive vision? Having an active and conscious input into that is really, really important.
Gary LaFeverGary LaFever (Anonos):
So, I have a number of other questions for the panelists, but why don't we see if anybody in the audience has questions before we run out of time? And if not, I'm happy to carry on with more questions. Would anyone like to ask a question to any of the panelists?

Going once. Please step up to the mic.
Q&A: Marty Abrams (Information Accountability Foundation):
So, I'm Marty Abrams, and I found the panel to be very interesting. I'm not sure if my question is really a question; it might be more of a provocative statement. The fact is that the concept of defined, demonstrable policy and the technology that supports it is very much like saying, “When I'm moving fast, I need both belts and suspenders to make sure that things are done in an appropriate fashion.” That means you have to have a very well-articulated set of rules that relate to the innovative processing of data. Those rules need to be articulated clearly in terms of the impact on all of the stakeholders, which almost means integrating data protection impact assessments and balancing processes as one process.

The technology backbone then enforces those rules in a way that is also demonstrable. So, there needs to be an integrated process, and I was just wondering if the panel could talk about this sort of integrated process that takes the interests of the stakeholders into account.
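To make that "belts and suspenders" idea concrete, here is a minimal sketch of what a demonstrable, policy-enforcing technology backbone might look like, assuming a hypothetical rule set produced by an integrated DPIA and balancing process; the purpose name, field names and policy structure are invented for illustration.

```python
import json
import time

# Hypothetical rule set: in practice these rules would come out of an
# integrated DPIA / legitimate-interests balancing process.
POLICY = {
    "research-study-42": {
        "allowed_fields": {"age_band", "region", "diagnosis_code"},
        "pseudonymisation_required": True,
    },
}

AUDIT_LOG = []  # append-only record that makes each decision demonstrable

def enforce(purpose: str, fields: set, pseudonymised: bool) -> bool:
    """Check a proposed processing operation against the articulated rules
    and record the decision so it can be demonstrated afterwards."""
    rule = POLICY.get(purpose)
    permitted = (
        rule is not None
        and fields <= rule["allowed_fields"]
        and (pseudonymised or not rule["pseudonymisation_required"])
    )
    AUDIT_LOG.append(json.dumps({
        "timestamp": time.time(),
        "purpose": purpose,
        "fields": sorted(fields),
        "pseudonymised": pseudonymised,
        "decision": "permit" if permitted else "deny",
    }))
    return permitted

print(enforce("research-study-42", {"age_band", "region"}, pseudonymised=True))  # True
print(enforce("research-study-42", {"name", "region"}, pseudonymised=True))      # False
```

Both pieces of the question are present in the sketch: the articulated rules (the policy) and the technology backbone that enforces them and leaves a demonstrable trail (the audit log).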
Gary LaFeverGary LaFever (Anonos):
Would anyone like to take that?
Giuseppe D'AcquistoGiuseppe D’Acquisto (Italian Data Protection Authority):
Well, I think that rules and innovation are not irreconcilable. They can live together. We know lots of hyper-regulated markets. The data markets are hyper-regulated, but they are still growing at about 6% per year, if I'm not mistaken; that's the last figure in my mind.

Or banks. Let's set aside the turbulence of the last year, but it's a hyper-regulated sector, and it grows a lot.

The presence of rules is an element of trust. So if we find the right rules and the right ways to envisage good processes, this is not a way to limit innovation. It's a way to give innovation a forward-looking perspective.

This is the way I think we are reasoning about these aspects. It's not a conflict between rules and free innovation. It's a way to find the right path to innovation in the presence of big opportunities and big risks.

This is my view on this.
Steffen WeissSteffen Weiss (German Association for Data Protection and Data Security):
Part of what we all need to keep in mind is that humanity needs to stay in control of technology. You cannot just let technology run on its own. So, on the one side, AI needs to be regulated, which will be a big task if you want to apply the GDPR to AI, especially when you're talking about changing purposes. What is the purpose when we talk about AI? So, we need to stay in control.

So, we need the guidelines and procedures, and then use technology to make happen what you want to achieve. I mean, there's a purpose for every business. So, you need both sides of the table, and you need to really interact and [inaudible].

Going back to your question, Gary: is the GDPR steering us away from innovation?

Maybe not, because humanity needs to be trained just as an algorithm does. When we have regulation, [inaudible] things. We need to be innovative, as in the example Maggie was pointing out: to really try to make it happen even though there is regulation, and there is regulation for a good reason. There is data protection for a good reason.

So, how do we compare these two systems with the same principle? You need to learn. You need to use data. You need to be innovative.
Gary LaFeverGary LaFever (Anonos):
Okay. Well, we have two minutes left. So everybody gets 30 seconds for a closing statement. Giuseppe, please.
Giuseppe D'AcquistoGiuseppe D’Acquisto (Italian Data Protection Authority):
We have this very important notion of accountability. I think that we need to make an effort to expand this notion. We need to talk about the accountability of the controller. This is relatively easy because we can talk to controllers. We can make prescriptions and issue fines. It's easy.

The whole GDPR speaks to the controller. But we need another level of accountability: the accountability of policymakers. We need to give clear rules on the implementation of technology and clear ways to manage the risks.

And we also need a third level of accountability: the accountability of individuals. Individuals must be more demanding and more selective about the data that they disclose to controllers.
Gary LaFeverGary LaFever (Anonos):
Great point.
Giuseppe D'AcquistoGiuseppe D’Acquisto (Italian Data Protection Authority):
This needs to be in place in order to foster real accountability of the overall environment.
Gary LaFeverGary LaFever (Anonos):
Ailidh?
Ailidh CallanderAilidh Callander (Privacy International):
Yeah, I think accountability is a good point to end on. We've talked a lot about the GDPR in this panel and in many of the panels. Like other laws, it has a long way to go to become a reality, but I think it is, as I said before, one of the tools, and an essential one, alongside other data protection laws and complementary tools like ePrivacy, for actually making our rights a reality.

And this is something we all have to work harder at. It's a huge responsibility on regulators, but also on companies and public authorities, to do this, to do it well and to do it urgently, and we need that now to have a world where technology enables and empowers us.
Gary LaFeverGary LaFever (Anonos):
Thank you.

Steffen?
Steffen WeissSteffen Weiss (German Association for Data Protection and Data Security):
Just briefly: we really have to combine our forces and our efforts to provide more practical guidance for the data protection roles of controllers and processors. We're dealing with many, many isolated approaches: the German one, the UK one, yet another in each member state. We really need to combine our knowledge and our will to make the GDPR happen in practice. That's what we're really asking for.
Gary LaFeverGary LaFever (Anonos):
Maggie?
Magali FeysMagali Feys (AContrario Law):
So, yeah, I can only say that I agree with what has been said. And also, for the data protection authorities, and I know their R&D initiatives, business models do need to change within certain companies and certain sectors. But rather than standing behind the door, ready with a stick to slap them, please sit around the table and think about what solutions you can reach together that will benefit the industry but also benefit the regulators. And put not only lawyers and technicians but also ethicists around the table.

And I would also encourage them not just to think inside the box, and not only to think outside the box, but, with regard to techniques and what is possible, to think as if there is no box.
Gary LaFeverGary LaFever (Anonos):
Thank you, Maggie.

I just want to mention really quickly for those of you who might be interested, Ailidh and I are actually on an IAPP webinar this evening. It's 5:00 PM Brussels time. You can go to the IAPP website. It's free. You don't have to be a member to see it. It’s on legitimate interest processing. So, it's going to be covering some of the same issues.

To give you an example of the level of interest, before this webinar was even advertised, we had over 1,000 people sign up. We're now at over 1,500.

And if you do sign up and you don't make it in real time, you can still get a copy of it. So, maybe it’s something that's of interest to you.

I'd like to thank everyone in the audience, and if you would help me, please thank the panelists for this great discussion.

Thank you.