and welcome back to the R3S of the rC3 in Monheim. One thing that came up in the IRC that was unrelated to any talk was about the little display you see here next to the Winkekatze, the waving cat. It is a CO2 indicator, so we can see when we have to ventilate the room. One of our producers, the nice guy who does the video here, bought this so we can see when we have to exchange the air to prevent the spread of aerosols and so on. Anyway, if you have any questions regarding our stage or the talk, please feel free to join the IRC channel #rc3-r3s on the hackint IRC network, use the hashtag #rc3-r3s on Twitter and Mastodon, or use our handle @r3s@chaos.social on Mastodon.

In our next talk we are going to stay with artificial intelligence and with GANs, now in English, as you may have noticed. Our next speaker is normally at the hackerspace in Ghent. She is doing her master's thesis on GANs and is very interested in the ethical aspects of what this technology can do. So please give a very warm welcome to Lisa Greenspecs and her talk, "But this politician said XYZ".

Hello and welcome to my talk, "But this politician said XYZ". I want to talk today about the technology behind deepfakes and its ethical implications. If this were a live talk, I would have asked you to raise your hand if you knew this person, and I would expect nobody to raise their hand; and if they did, I wouldn't believe them, because this person does not actually exist. Thispersondoesnotexist.com is a website launched in February 2019 by a software engineer at Uber, and every time you refresh the page, another face shows up that was generated by the same technology that is behind deepfakes, called generative adversarial networks. Next to thispersondoesnotexist.com there is also thiscatdoesnotexist.com, so you could argue that this cat does and does not exist at the same time. To my disappointment, thisdogdoesnotexist.com does not exist yet, so maybe that's a task for later.

Let me walk you briefly through what I'm going to talk about today. First, I'm going to explain to you what generative adversarial networks actually are. Second, I want to give some use cases of what GANs are already used for. And then we will talk about the downsides of GANs, for example deepfakes, but also other negative use cases.

So what are GANs? GANs were introduced by Ian Goodfellow in 2014. Ian Goodfellow is an ex-Googler and is now the head of machine learning at Apple. He is also a former PhD student of Andrew Ng, who is a very popular figure in deep learning. In the tweet here on the right, you can see a post by Ian Goodfellow from 2019 about the evolution of GANs. We start on the left with a very pixelated black-and-white picture of a woman in 2014 and go through the years up to 2018, where we already have a very photorealistic picture of a person that does not exist, generated by a computer program. Now, as you have seen on thispersondoesnotexist.com, we even have hyper-realistic pictures of people that we can't distinguish from real people anymore.

So what are GANs? GAN is short for generative adversarial network, and GANs consist of two neural networks competing against each other. On the one side we have the generator, which generates an image, audio, or video, for example, and is sometimes also called the artist. The discriminator, on the other hand, discriminates between images or audio. It is also called the art critic: it tells whether an image, or whatever other input, is realistic or not.
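This generator/discriminator split can be made a bit more tangible with a small code sketch. The following is not from the talk; it is a minimal PyTorch illustration in which the 28x28 image size, the layer widths, and all class and variable names are assumptions made purely for this example.

```python
# Minimal sketch (assumption: simple fully connected nets for 28x28 grayscale images).
import torch
import torch.nn as nn

LATENT_DIM = 100          # size of the random "noise" vector fed to the generator
IMG_PIXELS = 28 * 28      # flattened image size used in this toy example

class Generator(nn.Module):
    """The 'artist': turns random noise into a flattened fake image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, IMG_PIXELS),
            nn.Tanh(),        # pixel values scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """The 'art critic': outputs the probability that an image is real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_PIXELS, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),     # probability between 0 (fake) and 1 (real)
        )

    def forward(self, img):
        return self.net(img)
```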
So that's a lot of new words, so let me walk you through them. What is a neural network actually? As a disclaimer, this is a very simplified view, so please, fellow machine learning engineers, don't pin me down on that. A neural network is based on the idea of human brain physiology: each node in the neural network would be a neuron in the human brain, connected to other neurons, forwarding and transforming information. Neural networks are a part of deep learning; the "deep" refers, for example, to the hidden layers in the neural network. And deep learning is a part of machine learning, which is a part of artificial intelligence, mimicking the human brain's intelligence.

A neural network typically consists of three main parts: we have the input layer, one or more hidden layers, and the output layer. For example, our input could be an image of a cat; that would be the RGB values, the pixel values of the cat, which get forwarded to the one or more hidden layers. The hidden layers do some sort of feature extraction. Simplified, you could say that the hidden layers are checking whether there are pointed ears, a nose, whiskers, or the typical eye shape of a cat. This kind of information gets forwarded to the output layer, which calculates a probability of how likely it is that the image we put in is actually the image of a cat, yes or no. And this is basically what our discriminator is doing: our discriminator is the art critic that sees an image, for example of a cat, and is supposed to tell us whether it is indeed the image of a cat or not.

Our generator, on the other hand, works the other way around. It gets so-called noise as its input layer; noise is just randomly sampled values. It forwards those to hidden layers, which are supposed to form ears, eyes, a snout and so on, and it transforms that into pixel values to generate the image of a cat.

So how does this work together now? The generator, which only gets random noise at the beginning, starts to draw very random stuff; that can be blobs, black and white lines all over the place, and it forwards those generated images to the discriminator. The discriminator, which does not yet know what a cat is, then makes a guess: is this squiggly thing here a cat or not? In this case, let's say it gives the information "no, this is not a cat" back to the generator. The generator then knows: oh, okay, I have to change something about that. So it keeps trying and trying and trying until it gets closer to what an actual cat is supposed to look like.

The discriminator is not only learning through the generator and its output, but it is also learning by getting real images of cats. So the discriminator gets the fake images from the generator, but it also gets real images from our labeled input data. Every time the discriminator sees a picture, it makes a guess: yes or no, is this a cat? And then it gets feedback from the system: okay, this is a real image, or this is a fake image generated by the generator. The discriminator's goal is to be able to differentiate the two, to say: this is fake and this is real. And the generator's goal is to make pictures that are as realistic as possible. So these are our two neural networks, the generator and the discriminator, fighting or competing against each other. That's the adversarial part of generative adversarial networks.
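To make this back-and-forth training concrete, here is a hedged sketch of a single training step. It assumes the Generator and Discriminator classes from the sketch above and a batch of real, flattened images called real_images; the optimizer, learning rate, and loss choices are illustrative, not taken from the talk.

```python
# Sketch of one GAN training step (assumes the Generator/Discriminator classes above
# and a dataloader yielding batches of real images flattened and scaled to [-1, 1]).
import torch
import torch.nn as nn

generator, discriminator = Generator(), Discriminator()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)    # feedback: "this is a real cat"
    fake_labels = torch.zeros(batch, 1)   # feedback: "this is a fake"

    # 1) Train the discriminator: tell real images and generated images apart.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()        # don't update the generator here
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: try to make the discriminator say "real".
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```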
So this process keeps going on and on until the generator can generate pictures that are indistinguishable from our real images. Some of you might have already seen this rather popular GIF from a paper by Zhu et al. in 2017, where one input was footage of moving horses and the other input was images of zebras. The GAN's goal was to map the pattern of a zebra onto a horse; a small sketch of the idea behind this follows below. While this looks very funky, if you took a screenshot of it, it would, to most of us at least, look like a zebra.

So now that you know what GANs actually are: what are GANs used for, what are they useful for? They are, for example, used in medicine, for example to reduce noise in images, or artifacts that are not supposed to be there. They are used for upsampling images, so in case we have a low resolution, they upscale the resolution. We have classification (is it this or that?), we have segmentation, and we have object detection. In the lower-left image you see pictures of an eyeball, and a GAN, for example, could extract an image of the blood vessels in this very eyeball, which could then be used to diagnose something or to see whether everything is working fine. In the right image you see an MRI scan of a brain, and the GAN would be able to detect abnormalities in the tissue that could give a hint of a disease or something that might not be visible to the eye, or could at least save time and resources in the process.

So medicine is one big area where GANs are already used, but GANs are, for example, also used for art and video games. Here we can see that, in a paper by Torrado et al. from last year, a GAN was used on The Legend of Zelda from 1986 to generate new levels for this game. About 60% of the levels that the GAN generated were actually playable. In these kinds of levels you always have to have a certain amount of items, you have to have a key, you have to have a door, and so on. GANs were able to produce up to 60% playable levels, compared to other algorithms of which only about 10% of the generated levels were playable.

Besides video games, movies are another area where deepfakes or GANs are already used. Here the Reddit user derpfakes uploaded a GIF with the face of Nicolas Cage put onto the body of an actress in the movie Man of Steel. Nicolas Cage's face on different bodies has gained quite some popularity in recent years. And besides putting Nicolas Cage's face on other actors' and actresses' bodies, other users have shown that generative adversarial networks can also outperform CGI, which means they might be used in the creation of movies in the long term. Here you see that in the movie Rogue One, the young Carrie Fisher is shown on the left side with CGI and on the right side with deepfakes or GANs, which produce a far more realistic and prettier picture. Another example is Robert De Niro in The Irishman, where he was de-aged, since the actor was already in his seventies at the time. It took Netflix about $10 million and two years to de-age Robert De Niro, while it took one YouTube user about one week and his home computer to generate this.

Next to science, art and video games, there is another use case that most of you have either used yourselves or at least seen on social media platforms. Those are the so-called filters: we have aging, we have face swapping, we have putting bunny ears, cat ears or dog ears onto people's faces. They are also created by generative adversarial networks.
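The horse-to-zebra translation mentioned above relies on a cycle-consistency idea: translating a horse to a zebra and back should reconstruct the original horse. Below is a rough sketch of just that loss term; the two generator networks, the argument names, and the weight of 10.0 are placeholder assumptions, and the full objective of such a model also contains additional adversarial terms not shown here.

```python
# Sketch of a cycle-consistency loss for horse<->zebra translation
# (only illustrates the cycle term, not the full training objective).
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(gen_h2z, gen_z2h, horse_imgs, zebra_imgs, weight=10.0):
    # horse -> zebra -> horse should reconstruct the original horse
    horse_reconstructed = gen_z2h(gen_h2z(horse_imgs))
    # zebra -> horse -> zebra should reconstruct the original zebra
    zebra_reconstructed = gen_h2z(gen_z2h(zebra_imgs))
    return weight * (l1(horse_reconstructed, horse_imgs) +
                     l1(zebra_reconstructed, zebra_imgs))
```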
Another use case, for example, is grief therapy, which was used for the first time this year, where people have spoken with the voice and face of somebody who had just passed away unexpectedly. So, for example, a father could talk to his daughter who had just passed away and work on his grief through that.

So what are the downsides of GANs? We have seen many positive and useful use cases, but GANs are not without problems. One big problem is bias, and especially racial bias. On the left side you see a pixelated picture of former US president Barack Obama that got upsampled by an algorithm based on NVIDIA's StyleGAN into a very white version of Barack Obama. The Twitter user Osasua used this algorithm a couple more times; here you can see on the left side the original images of the people that he used. He first pixelated them, which is the middle picture, put them into the algorithm, and the right column is what the algorithm puts out. You see a variety of ethnic backgrounds and skin colors, and their pixelated versions get upsampled, but into very whitened versions of them.

Another problem is not only that GANs produce pictures with bias, but that similar techniques are used to predict the probability that people who are accused of a crime will commit a crime again. This also shows a substantial racial bias, with people of color getting longer sentences because judges use such biased software.

Another problem with GANs is that they can be used to create fake identities. For example, social media bots are getting more and more realistic and are used to influence people's political opinions and decisions. Facebook removed over 900 accounts which spread pro-Trump propaganda to about 55 million users. Facebook also held a coding challenge to develop an algorithm to detect fake images, which they called the Deepfake Detection Challenge, in December 2019. And Twitter, for example, said that they are marking tweets that contain fake images and warn the user when they want to share a tweet with a fake image. Most of the algorithms that are supposed to detect fake identities or deepfakes are typically also based on GANs; a very simple baseline detector is sketched below.

One of the biggest issues and downsides of GANs is identity theft, and about 96% of all deepfakes are porn. That is celebrity pornographic videos, but for example also revenge porn, and this created a whole sub-genre of porn. It is mainly used against actors and actresses, but you and I could also become victims of this, as could, for example, political opponents. While I was researching articles about deepfake porn and so-called deepnudes, I found a terrible article reviewing the best deepnude apps of 2020, which I tried to report. So let's hope that it gets removed at least. And in 2018, people tried to silence Rana Ayyub, a Muslim investigative journalist from India. Her social media accounts got flooded with fake posts and fake porn, such that she wasn't accepted by any Indian publisher anymore and couldn't leave her house for quite a while.

A last problem that I wanted to mention is the tampering with medical imagery, so it starts to spread to other domains as well. Researchers have shown that you can inject or remove a tumor in a 3D CT scan of a lung in a way that fools medical professionals as well as detection software. And then there are many things that we are probably not even thinking of yet that GANs and deepfakes could influence.
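As a rough idea of how fake-image detection is often framed as a binary classification task, here is a minimal sketch. This is a generic CNN baseline made up purely for illustration, not one of the GAN-based detectors the speaker refers to; the 64x64 input size and all names are assumptions.

```python
# Sketch of a very basic "real vs. fake" image classifier
# (assumption: generic CNN baseline on 64x64 RGB images, for illustration only).
import torch
import torch.nn as nn

class FakeImageDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 1),   # assumes 64x64 input, pooled twice to 16x16
            nn.Sigmoid(),                 # probability that the image is fake
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage: prob_fake = FakeImageDetector()(torch.randn(1, 3, 64, 64))
```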
Now you could say: why is that a problem? Only people who have a lot of computing power or a lot of knowledge about these things can create deepfakes. But that's not true: basically everybody can create deepfakes now, and it is important for you to know that. You can turn your head around, mouth movements look great, and eye movements are also translated into the target footage. And of course, as we always say, two more papers down the line it will be even better and cheaper than this.

So now I have mentioned all the dark and negative sides of GANs. But what shall we actually do? What can you and I do against the downsides of GANs? When we talk about bias, especially as a researcher, there are several things that you can do. You need to try to balance your data sets, and you can do that in a variety of ways: you can use more varied collection methods, you need a high diversity of people labeling your data, and you also need diversity in where your data is collected. In the image at the bottom you see ImageNet on the left, which is a very popular data set of labeled images, but more than half of the data was collected in the USA and in Great Britain. So it is not very representative of the world, yet it is used in all kinds of tasks. It is also usually the case that the dominant culture is over-represented in a data set, and it is then even reflected as the "correct" norm when it is put into an algorithm.

On the other side, you can try to balance your algorithm; that can be done by checking losses and weights, et cetera. But the people who are coding also influence the bias of an algorithm. There was an example of a soap dispenser, developed by a group of people with white skin color, that would react to a hand being put under it, but would then not react to people with a darker skin color. So bias is also introduced by the people who are coding. To sum up, bias can be introduced into a machine learning model basically at any point where a person might have designed, engineered or touched the system, and every one of us is biased, whether we are aware of it or not.

But what can we do against deepfakes? Unfortunately, not a lot, because the technology is already out there and is already freely available to a lot of people. What you can do, and what I can do, is question our sources of information, our sources of images, et cetera, to detect deepfakes. And if you are sure that you detected a fake, report it. Also, don't use or support GAN algorithms in a harmful way.

Thank you for your attention. I hope you found it interesting and learned something new. I'm looking forward to answering your questions. And don't forget, whether you are programming with GANs or whether you are using or consuming GANs or their products: with great power comes great responsibility. Thank you.

So, yeah, I think I'm back on stage. Oh, nice. So, I see that there is... oh, okay. Your talk seems to have been quite comprehensive, as nobody has left any questions. Considering that this is the second talk on GANs in a row, I think, yeah. What was really nice was that you covered the other side of GANs. The speakers before you covered a specific use case and how to implement it, and you covered the more fundamental concepts. So, yeah, thank you very much for taking the time to prepare and give this talk. And have fun at the remaining rC3.
Thanks for virtually coming over and the... oh, yeah, one question came in just now: how big is the computational cost for a discriminator? How big is the computational cost for a discriminator? That really depends on how complex your GAN is. If the input for your GAN is very small images, the computational cost is small; for larger inputs the cost is, of course, higher as well. You can't really generalize that. Okay, so thank you for taking your time, thank you for answering this last-minute question, and, yeah, have fun. Thank you for hosting me, it was really nice. You're absolutely welcome. Thank you for coming over.