9 October 2019

Hearing of Margrethe Vestager – questions & answers

Alexandra Geese (Verts/ALE) – The upcoming Digital Services Act is a huge opportunity to get things right in the digital market. The General Data Protection Regulation (GDPR) was a milestone, but there is still a striking power imbalance that European citizens and consumers are facing today. Ad-tech-driven microtargeting enables disinformation campaigns and political interference, strongly influences consumers, and leads to what some have even called surveillance capitalism. What I would like to hear from you today is a clear commitment to the high data-protection and fundamental-rights standards already set. But furthermore, I would like to know how seriously you plan to fight the existing power imbalance. Are you ready to tackle ad-tech-driven business models as a whole? Are you willing to take certain data-exploitation practices, like microtargeting, completely off the table?


Margrethe Vestager, Commissioner-designate – One of the things I have learned about surveillance capitalism and these ideas is basically that it's not you searching Google, it is Google searching you. And that provides a very good idea not only about what you want to buy but also about what you think.

So, we have indeed a lot to do. I am in complete agreement with what has been done so far, because we needed to do something fast. The code of conduct, the code of practice, is a very good start to make sure that we get things right, because we couldn't, as it were, sacrifice either the European elections or the forthcoming national elections while waiting for regulation to be put in place.

In that respect, we have a lot to build on. I don't know yet what the details of the Digital Services Act should be, and I think it's important that we make the most of what we have, since we're in a hurry. It is important to take stock of what I would call digital citizens' rights – the GDPR – so that we can have national authorities enforcing that in full, and hopefully also have a market response, so that we have privacy by design and are able to choose it. I think it's very important that we also get a market response: to be able to say 'Well, you can actually do things in a very different way', rather than just allowing yourself to be, or at least to feel, forced to sign up to whatever terms and conditions are put in front of you.

I find it very thought-provoking, if you have time once in a while, to read terms and conditions. Now, the fact that they're obliged – thanks to this Parliament – to write in a way that you can actually understand makes it even scarier, and very often it just makes me think 'Thanks, but no thanks.' That, of course, is the other side of that coin: yes to regulation, but also to enabling us, as citizens, to be much more aware of what kind of life we want to live and what kind of democracy we want to have. It cannot just be digital, for then, I think, we will lose it.


Alexandra Geese (Verts/ALE) – Staying in the digital sphere and speaking about trust and fairness, we have already heard about the great potential of artificial intelligence and automated decision-making. But they also carry the risk of direct or indirect discrimination. Studies and evidence have shown that women, people of colour, LGBTQI people and poor people are often disadvantaged by those systems. So, with regard to the legislative framework for artificial intelligence that you mentioned, announced for the first 100 days, how do you plan to make sure that algorithmic systems as a whole are not discriminatory, especially with regard to biased datasets?


Margrethe Vestager, Commissioner-designate – I share these concerns. Usually I'd say: well, you can have your AI when I have a gender-balanced society. Because the problem with AI is that it's not any wiser than the data you feed it, and the patterns it finds, it assumes are the right patterns. And these are, of course, man-made. So there is a risk that if we don't do something, we just cement the inequalities that we already have, instead of actually doing our best to change them.

One of the principles for creating trustworthy AI – I think it's number 5 – addresses exactly this: if your AI is not in itself designed to get rid of biases, then you need human oversight so that it can self-correct. And that I find to be very important. As for these principles: we will now, of course, see from the feedback we get from all the different businesses that have tried them out how they will work. But I think this question of how to avoid biases is one of the core questions when we discuss how to put a framework in place that will allow us to trust the technology.
