The digital world is great. We have millions of applications and services that make our lives so much better. But in the digital world there are no walls, no barriers. All of the data that we give to these applications and services is accessible to service providers and third parties, and it can be used in unpredictable ways. A well-known example of this unpredictability is the Facebook–Cambridge Analytica scandal, in which data from users was used to try to sway their voting intentions. And at the core of this scandal is the machine learning revolution. Algorithms are becoming better and better at inferring and predicting human behavior. Companies are becoming ever more data-hungry in the search for the data that is going to help them improve their profit and revenue.

This great revolution for the digital economy also came as a tsunami for privacy. We now have more algorithms that can make inferences about our private information: from how we talk, where we go, which people we talk to. Such inferences existed before, but the attacks were concocted by humans, by our fellow researchers. As such, we could study them, understand them, and find ways to combat them. But nowadays the game has changed very much. The adversary is not human anymore; the adversary is the machine. And the machine uses correlations that humans never even dreamt of identifying before. As such, the inferences it can produce are breaking all of the defenses that we built before.

So you may be thinking: if the machine is cleverer than the human, just put more noise into the data so that the machine cannot learn anymore. In an experiment by fellows at Cornell Tech in New York, they took traditional ways of adding noise to images in order to make them indistinguishable. And to their astonishment, no matter how much noise they put in, the machine was still very good at recognizing letters, vehicles, or faces.
So the machines were winning the game. And why is this? It is because machines are very good at dealing with noise; that is in their very nature. We give them a lot of noisy data, and from that they are able to find the patterns that end up being the inferences and predictions that we like so much in the business world. And you may be thinking: okay, why don't you try to understand how the machine learns? The truth is that we still don't understand very well how machine learning works. Interpretability of algorithms is in its very infancy, and even though the results are very promising for knowing why and why not machines make a decision, we are still far away from being able to use this for privacy.

But there is a way of beating the algorithm without understanding it, called adversarial examples. These are a particular way of adding noise: here in these photos, each of the turtles carries some noise such that, even though you see a turtle, the machine is seeing a rifle. So what we thought is: okay, we can take these enemies and convert them into our allies, making them the basis of new privacy technologies against machine learning. The problem is that most of these adversarial examples designed to fool machines assume you are pretty free to add noise; as before, we all saw turtles even though the images were a bit modified. But some privacy problems don't have that much freedom. Imagine that we are creating a transformation that changes a tweet so that the machine cannot recognize your gender. We still need to keep the tweet understandable, and we need to keep its meaning, for people to actually want to use these transformations. And there are other problems: many of these transformations also come with a cost. It is well known that, even from encrypted communications, machine learning can often still learn the content of the communication, such as which word was sent over Skype or which is the destination of the communication.
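To make the idea concrete, here is a minimal sketch of an adversarial example against a toy linear classifier. Everything here (the model, the weights, the epsilon) is an illustrative assumption, not the actual systems from the talk; real attacks work the same way but against deep networks and images.

```python
def predict(w, b, x):
    """Toy linear classifier: label 1 if w.x + b > 0, else label 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def adversarial_perturb(w, x, eps):
    """Fast-gradient-sign-style attack: nudge every feature by eps in the
    direction that raises the score. For a linear score w.x + b, the
    gradient with respect to x is simply w, so we follow sign(w)."""
    sign = lambda v: 1 if v > 0 else (-1 if v < 0 else 0)
    return [xi + eps * sign(wi) for wi, xi in zip(w, x)]

# Hypothetical 4-feature input that the model currently labels 0.
w = [0.5, -0.3, 0.8, -0.1]
b = -1.0
x = [0.2, 0.9, 0.4, 0.7]

assert predict(w, b, x) == 0            # original input: class 0
x_adv = adversarial_perturb(w, x, eps=0.6)
assert predict(w, b, x_adv) == 1        # small per-feature nudges flip the label
```

The point of the sketch is the asymmetry the talk describes: each feature moves by at most 0.6, a change a human might not notice, yet the classifier's decision flips completely.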
And to protect from this, what we do is put more packets into the network. But every packet costs money, and every delay has a cost for the user. So what we did at the lab during the last year is develop libraries that can find adversarial examples that are not only useful, such as tweets that remain understandable and keep their meaning, but that also come at a minimum cost, so that we can really deploy them and have a chance that users accept this kind of technology.

So the good news is that we are back in the battle. We again have a good mathematical basis on which we can build privacy technologies against the machine in a systematic way. And it is our hope that the technology we are developing can put humans back in a position of deciding what they want to reveal, and what they don't want to reveal, in the digital world, in such a way that we can enjoy all of these fantastic applications and services without endangering the democratic values that underpin our society. Thank you very much.
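The packet-padding defense and its cost can be sketched in a few lines. This is an illustrative toy under an assumed fixed padding block size, not the lab's actual library: padding makes different traffic traces look alike to an observer of packet sizes, and the overhead in extra bytes is exactly the cost the talk says we want to minimize.

```python
def pad_trace(packet_sizes, block=512):
    """Pad each packet up to the next multiple of `block` bytes, so an
    eavesdropper watching sizes learns less about the content.
    Returns (padded sizes, overhead in extra bytes = the user's cost)."""
    padded = [((s + block - 1) // block) * block for s in packet_sizes]
    overhead = sum(p - s for p, s in zip(padded, packet_sizes))
    return padded, overhead

# Two hypothetical traces with different packet sizes...
trace_a = [130, 480, 900]
trace_b = [300, 510, 640]

padded_a, cost_a = pad_trace(trace_a)
padded_b, cost_b = pad_trace(trace_b)

# ...become indistinguishable by size after padding, at a measurable cost.
print(padded_a, cost_a)   # [512, 512, 1024] 538
print(padded_b, cost_b)   # [512, 512, 1024] 598
```

Here both traces collapse to the same padded shape, but at different byte costs; the research problem is choosing the transformation that achieves this kind of confusion for the least overhead.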