You probably all find it completely normal to work with large amounts of data. People like me, conventional journalists — my day job is radio journalism — regularly despair at it. Back in the day we learned to ask people who know about a subject. But asking people who know about large amounts of data isn't necessarily productive for us either. Fortunately, there is now a relatively new profession for this, and it's called data journalist. Data journalists are the people who dig connections out of absurdly large data sets and then, ideally, visualise those connections so that noobs like me can understand them and afterwards talk about them competently, or at least look as if we were talking about them competently. And one data journalist who supports people like us a great deal is Michael Kreil, and you're about to see his talk. Have fun.

Good morning. Did you sleep well? I didn't, for two reasons. First of all, I'm going to give a talk in front of a lot of people, and I'm going to talk about a subject that has led to a lot of heated debates, and therefore I don't want to position myself front and center. I would say that I'm on the left side of the political spectrum. I distance myself from all racism, sexism, anti-semitism, xenophobia, Islamophobia and, in general, any phobia directed at a group of people. But for this talk I'm a scientist observing a heated social debate, and I think I am bound to neutrality. For this data analysis I've prepared about 150 slides. The first part is the methodology: I would like to explain how all of this works, so that it's reproducible.
Twitter has an API that lets you request statuses, tweets, users, published lists, followers, and so on. You write an interface for this API; with a bit of effort you put a cache in between as well; and with a massive effort of some days or weeks you scale the whole thing so that you no longer have just one interface but many, each of them representing a single Twitter user. So I'm not calling the Twitter API as a single user but as lots of users, with a load balancer on top that distributes the load, because the Twitter API is rate-limited. If I want to know who follows whom, I can only make a certain number of requests at a time. But if lots of people have given me their authorization to make requests on their behalf, I can of course make correspondingly more requests in parallel. It's a kind of democratic process in which people can support me in putting more load on the Twitter API. A colleague of mine built an interface where you can donate a token: you click a button, we receive a long code, and that code allows us to send additional requests to the Twitter API. Thanks also to Logbuch:Netzpolitik, especially Tim and Linus, and to the 700 donors who gave us their tokens, which was necessary to pull large amounts of data out of Twitter. Even at home over my Kabel Deutschland connection I can then fetch about 10 requests per second, and if I want to get the metadata of all 330 million Twitter accounts, that takes me about 4 days; without tokens you'd be looking at months or even years. All of my analyses are strongly focused on Twitter. Fake news is a complex subject. I have this graphic from the Stiftung Neue Verantwortung that characterises what counts as fake news and what doesn't.
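Before turning to the fake-news taxonomy, here is a minimal sketch of the token-pool idea just described: every donated token carries its own Twitter rate-limit window, so aggregate throughput grows roughly linearly with the number of donors. All class and function names are hypothetical; the per-window numbers are illustrative (15 requests per 15-minute window is Twitter's classic limit for follower lookups), not the talk's exact figures.

```python
# Hypothetical sketch of scaling API access via donated tokens.

from collections import deque

class TokenPool:
    """Round-robin over donated tokens so each request is charged
    against a different donor's rate-limit window."""
    def __init__(self, tokens):
        self.tokens = deque(tokens)

    def next_token(self):
        token = self.tokens[0]
        self.tokens.rotate(-1)   # move on to the next donor
        return token

def aggregate_rate(n_tokens, requests_per_window, window_seconds):
    """Requests per second achievable with n independent rate windows."""
    return n_tokens * requests_per_window / window_seconds

# 700 donors * 15 requests / 900 s ≈ 11.7 requests per second,
# in the ballpark of the ~10 req/s mentioned in the talk.
print(round(aggregate_rate(700, 15, 900), 1))  # → 11.7
```

The round-robin is the whole trick: the load balancer never asks the same donor's window twice in a row, so no single token hits its limit while others sit idle.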
On the left side are the things that are not fake news. Satire is not fake news; journalism isn't supposed to be fake news; and articles that are simply factually wrong are not fake news either. Fake news is targeted disinformation: manipulated content, images cropped in a specific way to mislead, or genuine content taken out of context. Unfortunately, I only have one hour for this talk, so I can't look at more than one piece of fake news. The example I want to look at is a tweet from the AfD claiming that the German Foreign Office had issued a travel warning for Sweden, which is not based on any kind of fact. Hours later, the German Foreign Office sent a tweet clarifying that there was no such travel warning. There were almost 2,000 tweets on this topic, and I used this Twitter framework to collect and analyse them. Here is what I did. There is a timeline spanning from 2 March 2017 to 9 March 2017, about a week. On top of it are those who published the actual fake news, and on the bottom are those who published the counter statement. Retweets are visualised by arcs that span across the top or bottom of the timeline. This allows me to visualise the impact of a certain tweet: if lots of people retweeted it, you see a lot of arcs. The first tweet on this subject was published at the end of September, but it got no impact whatsoever, no favourites, no retweets. In February there was another tweet on this from an account with just one follower, which did not get any retweets either. So these claims came up before, but they weren't carried into a larger debate. Now look at our timeline. The first tweet here is a screenshot from a bot that publishes travel warnings. It got six retweets and six faves, and that was the first small impact.
And this grows and grows, and we get "travel warning for Sweden" with 29 retweets. I can't show you the next tweet because the account has been blocked in the meantime; this is a phenomenon we saw quite often. "Trump was right, the Foreign Office issues a travel warning for Sweden", linking to an article on zuerst.de, and it keeps escalating. "Apparently not all is quiet in Sweden after all." It keeps growing. "Islamic terror", and then finally the AfD Magdeburg, the party branch in Magdeburg, jumps on the bandwagon and adds its own hashtag. You see this crescendo in which we get from a travel warning to Islamic terrorism. The people countering this news started with the German Foreign Office, and then the German media jumped on the bandwagon and published corrections. If you look at the entire timeline, it's interesting to note that this was not simply published once as fake news with everybody immediately getting upset. It's more of a stutter: it looks like people were experimenting, and we saw a lot of network resonance. I would say that fake news is a meme that spreads by growing through a network. The core phenomenon of fake news is not its existence, but the fact that it can spread so well. It's not the source that creates the phenomenon, but the resonance it finds in a network. So if you were to ban fake news sources, the fake news would still spread through the network. And because fake news as a meme has a large reach, you can draw capital from it; I'd like to mention Breitbart News, which draws a lot of political capital from this in the United States. Looking at the timeline again, we can examine which accounts are going against the fake news and how they follow each other to spread the counter message. The blue accounts are the ones countering the fake news, and the stronger the blue, the more closely the accounts are linked together.
Then we can analyse how densely connected the network is. The interesting thing is that the accounts that are spread further apart are not part of the tightly knit core. You can also see the tendency that many of those spreading the fake news use terms like "Lügenpresse", the "lying press", which you can see here. The first time I saw this picture, I thought it must be a mistake. But the good thing is that, because we're working with the Twitter API, we can actually reconstruct the timeline of every account. Here is a schematic representation of the tweets. Of those who spread the fake news, 89% saw the counter statement but did not share it. However, only 43% of those who spread the counter statement saw the fake news at all. So what we can see here is a filter bubble. Next, I extracted 1.6 million Twitter accounts and tried to map the entire German-speaking Twitter network. There's Switzerland, there's Austria, and the rest is spread across Germany. You can colour this in different ways. Here, red are the old accounts and green the new, younger accounts; green is a lot of youth and YouTube. There's a bubble with porn — please don't click it. On the left there's a group doing marketing. And on the bottom left there are people interested in politics and media, and those are probably the ones we're interested in at this point. We can also colour in the members of the Bundestag and the parties' candidates to see where they sit in this network. So yellow is FDP, then green, blue, pink for the other parties.
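The 89% / 43% exposure calculation from a moment ago can be sketched like this: for each group of spreaders, compute what fraction follows at least one account that tweeted the opposing message, i.e. had it in their timeline. The data below is invented for illustration; only the calculation mirrors the idea from the talk.

```python
# Toy reconstruction of the filter-bubble exposure figures.

def exposure(spreaders, follows, opposing_authors):
    """Fraction of spreaders following at least one opposing author."""
    exposed = [u for u in spreaders
               if follows.get(u, set()) & opposing_authors]
    return len(exposed) / len(spreaders)

fake_spreaders = {"a", "b", "c", "d"}
counter_authors = {"x", "y"}
follows = {"a": {"x"}, "b": {"y"}, "c": {"x", "z"}, "d": {"q"}}

# 3 of the 4 fake-news spreaders follow a counter-statement author.
print(exposure(fake_spreaders, follows, counter_authors))  # → 0.75
```

An asymmetry between the two directions of this number — high exposure on one side, low on the other — is exactly the filter-bubble signature described above.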
These are just the usual colours of the German parties, and they all sit in one group together — except the AfD, which is set apart. Now the fake news accounts are coloured in again, and we can see that the group spreading the fake news tends to sit close to the AfD. It isn't actually AfD members of parliament who are spreading this; instead it's new accounts that are not members of parliament and not AfD officials, but that get retweeted by the AfD. Here we compare them with other Twitter users: these are words they use much more often than other Twitter users do. Islam, Germany, migrants, Merkel, mass, politics, fighting for liberty, refugees, SPD, and so on. What we can see is that this is a very monothematic group. So what kind of group is it? They're new on Twitter, they're close to the AfD, they often share fake news, they're very active on Twitter, and really they only care about migration and asylum politics. Next, let's talk about social bots. This was one of the first things published in the media, together with other big themes like threats to democracy. Everyone has their own opinion of what exactly a social bot is. Here is an official statement from the German government: there are bots in social media that communicate like people but are not people; they communicate automatically. And the relevant Turing test is whether you can tell afterwards if it was a human or a computer. The German parliament published that they don't really know what is happening with social bots or what kind of reach they have. If I want to find social bots on Twitter, I need to search for them. For that I need to know what they look like, and for that I need to analyse them.
So I either need to have a group of bots to analyse, so that I can then search for more of them — or I need to search for them first to get such a group to analyse. Somehow I have to break into this circle, and there is one way in, which is what we did: we bought social bots. Just as a clarification, they aren't fully fledged social bots in the real sense; they are 4,500 purchased fake followers. I just searched on Google, paid, and suddenly I had 4,500 new followers, and 100% of those are bots, because I hadn't told anyone about the account. The point was to see what characteristics they have and what distinguishes them from humans — a dry run, so to speak. Then we analysed the metadata. Here's a graphic again: each single point is one of these followers, so I have 4,500 points and each of them is a bot; I know there can't be a genuine follower in there. I tried to sort them according to whether they look like a person or a bot. On the left side I have the characteristics of each account: Is it an egg, i.e. does it still have the default avatar? Does it have fewer than 10 followers? Fewer than 50 favourites? No description? Does it have a time zone set or not? This is the kind of metadata you can get from Twitter, and it lets us group these bots. On the left we see the stupid bots: they don't really do anything, and they're usually young accounts. In the middle we have pretty realistic-looking bots; some of them are actually real accounts that went inactive and were taken over by bots — I'm guessing someone just tried passwords and logged in that way. The third group is very interesting: they're accounts of actual people. Verified people. An author from the States who is a survival specialist. Chris Glorioso, an investigative reporter.
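The metadata checklist used above for sorting the 4,500 purchased followers might look like this in code. Each check is only a weak signal; it's the number of signals that fire which lets you sort accounts from "clearly bot-like" to "looks human". Field names follow Twitter's classic user object; the thresholds are the ones mentioned in the talk, and the two sample accounts are invented.

```python
# Sketch of the weak-signal metadata checklist for sorting followers.

def bot_signals(user):
    """Return how many of the weak bot indicators apply to a user."""
    checks = [
        user.get("default_profile_image", False),  # still an "egg" avatar
        user.get("followers_count", 0) < 10,
        user.get("favourites_count", 0) < 50,
        not user.get("description"),               # empty bio
        user.get("time_zone") is None,             # no time zone set
    ]
    return sum(checks)

fresh_bot = {"default_profile_image": True, "followers_count": 2,
             "favourites_count": 0, "description": "", "time_zone": None}
human = {"default_profile_image": False, "followers_count": 340,
         "favourites_count": 1200, "description": "radio journalist",
         "time_zone": "Berlin"}

print(bot_signals(fresh_bot), bot_signals(human))  # → 5 0
```

As the talk stresses later, none of these indicators is conclusive on its own — a brand-new human account fires most of them too.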
These are real people who followed me because I had paid for them as bots. All I could do then was send them a direct message or an e-mail. Not everyone answered, because it seems their accounts had been hijacked and they didn't know. I tried finding out how they were hijacked, but I couldn't. Most of the time, hijacked accounts weren't used for political stuff; it was sex spam. But I do find it interesting that the CDU says they want every social bot to be labelled as such — what happens when those accounts are hijacked ones? Now, what's interesting is looking at the same set 11 months later: which accounts are still active today? Yellow are the accounts that have been locked by Twitter; red are those that somehow disappeared by themselves. What we can do next is look at the follower networks of those bots. I made the following graphic: there are the four and a half thousand bots, and you can see how they follow each other. Red are the bots, blue are other users, and if we overlay it, it looks quite interesting — kind of like a flying spaghetti monster, I think. The blue dots seem to be teenagers who bought themselves Twitter followers, and it might be interesting to analyse them too. And you need to be careful: it's quite easy to give someone 10,000 followers, and it's not even very expensive. Now, what other ways are there to examine this? One way is a tweet-apps analysis. A couple of years ago, for every tweet you could see which app it was published from. The web interface no longer shows this, but the API still does. I can run this over our bots, and it's mostly Twitter for iPhone and Twitter for Android, but also 350 additional services. They look like this: a massive potpourri of things — Korean, Japanese, everything's in there.
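The tweet-apps analysis can be sketched in a few lines: in the classic Twitter API, each tweet's `source` field carries the publishing app as an HTML anchor, so stripping the markup and counting the names reveals the ecosystem of posting services. The sample tweets below are invented; only the `source` format matches the real API.

```python
# Sketch of the tweet-apps analysis over the `source` field.

import re
from collections import Counter

def source_histogram(tweets):
    """Count which app each tweet was published from."""
    def app_name(source_html):
        # strip the <a href=...>...</a> wrapper around the app name
        return re.sub(r"<[^>]+>", "", source_html)
    return Counter(app_name(t["source"]) for t in tweets)

tweets = [
    {"source": '<a href="http://twitter.com/download/iphone">Twitter for iPhone</a>'},
    {"source": '<a href="http://twitter.com/download/iphone">Twitter for iPhone</a>'},
    {"source": '<a href="https://ifttt.com">IFTTT</a>'},
]
print(source_histogram(tweets))
```

Run over a real corpus, the long tail of this histogram is exactly the "potpourri" of 350 services the talk describes.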
I hadn't realised that there was this massive ecosystem of scripts that publish things on Twitter, and in my sample alone there were 350 of them. To show you what kind of services are behind this, I grabbed some of them. Here's FeedBlitz, a bot that does nothing but post an RSS feed to Twitter. There's tweetbot.net, which seems to be something homemade that tweets nothing but rubbish. There's a Kim Kardashian service that presumably publishes her newest scores. There were nine apps that had been deleted and renamed to "erased" plus a long string of numbers. I'd love to know whether that number is sequential: does it mean that 5 million apps have already been erased? What would it mean that 5 million apps are no longer registered with Twitter? Bots are also not only evil. I don't know if any of you know the town hall clock of Neukölln: it tweets something like "Bam Bam Bam" on the hour — a nice little bot that is simply online. So, to sum up how you could detect a bot: you can check whether the profile is fully filled out, you can check whether it's part of a larger network, you can check which app sends the tweets, and you can look at the other metadata of the tweets. None of these is conclusive; they only give indications, because each of them, taken by itself, can also be explained by human behaviour — and the other way around, of course, there might be a highly intelligent artificial intelligence behind an inconspicuous account. But let's go and find some bots. To make a long story short, I found inactive accounts, service accounts, trolls, teenagers, racists and haters, and spam with links to YouTube, porn or bitcoin bullshit. What I didn't find were politically active social botnets, and this means that either they don't exist or I'm just stupid. So let's ask the professionals. One of them is Professor Dr. Simon Hegelich, who studies political data science using machine learning plus X.
He publishes studies for the Konrad Adenauer Foundation with headlines like "Invasion of the Opinion Robots", but he doesn't explain his own methods there. He did, however, co-write a study in which they actually found a botnet in Ukraine, and they found out that these bots used an app called Tweetfarm to publish their tweets. Well, that seems to be a botnet that was used to post tweets in Ukraine — but what about Germany? Maybe I just didn't find it, but at least I couldn't find any such study by Professor Simon Hegelich. I did find, however, this article from German public broadcasting in which Mr. Hegelich identified a potential candidate: an account posting 136 times per day with a clear tendency towards the right-wing AfD party. The question is whether this is a bot or a real person, so the editorial staff sent him a message. The potential social bot is called Egon Dombrovsky; he is 50 years old, retired, lives in Erfurt, and is very active for the AfD. He calls the media the propaganda press, and he isn't paid by the AfD or anybody else; he does this purely out of his own political conviction. So in trying to find a political bot, we stumbled upon a real person. But there are other people trying to do this, for example botswatch.de, who look at Twitter and at the involvement of social bots during important events in Germany, and they link to an explanation of their method. One of their criteria is 50 tweets per day — but Dombrovsky makes it to 136 per day as a human. And because we were looking at the German federal elections anyway, I made a long list of accounts that post about the federal election, and even the large parties — the Christian Democratic Party, the Liberal Party — post more than 50 tweets per day.
So if that's your criterion, you'd also catch a lot of media accounts, you'd catch politicians like Christopher Lauer or Anja Schillhaneck from the Green Party in the Berlin House of Representatives, and of course the political parties, which are especially active in the campaign for the federal elections. So 50 tweets per day as a criterion for social bots is simply not scientifically sound. They explained that this is one of the criteria that Oxford University uses, and I thought that looked interesting, so I read all the scientific papers on the subject. I'll only show you two of them on social bots during the US elections. This is a paper published with the title "Bots and Automation over Twitter during the U.S. Election". Let's go straight to the definition. They start talking about social bots, then about "highly automated accounts", and then they say: we define a high level of automation as accounts that post at least 50 times a day using one of these election-related hashtags. So if you publish tweets with one of their special hashtags more than 50 times a day, you count as a bot. These are the hashtags they used: the first group is Trump, the second is Clinton, the third is general election hashtags. But I have a couple of questions about this definition of social bots. Why 50 and not 30 or 100 per day? Is that the same for every country — can we use this criterion for Germany? And which accounts are these, actually? If I can identify them, what are their characteristics? How many followers do they have? How many favourites, how many retweets — maybe no one reads them at all? And which services are they using to post? You saw that we can actually see that. Of course there are a lot of accounts, but there's a scientific method for that: you take a random sample. Did the scientists actually do that? There's nothing in the study that says whether anyone actually looked at a sample.
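The 50-tweets-per-day criterion, reduced to code, makes the objection concrete: a threshold on daily volume flags humans and institutional accounts just as readily as bots. The account names and counts below are invented; the 136 echoes the Dombrovsky case from earlier.

```python
# The Oxford-style "high automation" criterion as a one-liner.

def flag_highly_automated(daily_counts, threshold=50):
    """Return accounts posting >= threshold election tweets per day."""
    return {acct for acct, n in daily_counts.items() if n >= threshold}

daily_counts = {
    "retiree_tweeting_by_hand": 136,   # human, cf. the Dombrovsky case
    "party_press_office": 80,          # human-run institutional account
    "casual_user": 3,
}
print(sorted(flag_highly_automated(daily_counts)))
# → ['party_press_office', 'retiree_tweeting_by_hand']
```

Whatever value you pick for `threshold`, the criterion only measures volume, not automation — which is exactly why the paper's own question ("why 50?") has no principled answer.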
Unfortunately, I can't check that, because the University of Oxford hasn't published the data — partly because of the developer terms of the Twitter API. But what you can do is publish the tweet IDs, and anyone can then re-fetch the tweets afterwards and analyse them in detail. Luckily, the George Washington University published a 2016 dataset of tweet IDs for the US presidential election, mostly tweets carrying election-day hashtags. So I don't have the same data as the University of Oxford, but I can use this dataset, which amounts to a small sample of the Oxford data: the tweets that used election-day hashtags on the day of the election. Of those accounts, twelve tweeted more than 100 times on election day, and those would be the ones most clearly classified as social bots. To see whether these really are social bots, let's take our time and look at each of the twelve potential bots. The first is DREAMS OF DUST. It posted 437 tweets on election day, tweets links to reddit.com, and uses IFTTT — so we can see that it is definitely a bot. But it just posts reddit links and exerts no influence whatsoever. The next one tweets screenshots of text — very, very strange, conspiracy theories — and attaches the trending hashtags to the screenshots it publishes. It has 4 followers, and it is not a bot. The next account is a Trump supporter. He is terrible, and he uses hashtags intensively, but he is definitely not a bot — he even takes a break during lunch. Then there's a woman who loves Bernie Sanders and hates Trump, Hillary Clinton, and Bill and Chelsea Clinton. She is not a bot.
The next one looks like a click-farming program that tries to get users to click on advertising sites, for apps or something. There's nothing political there; it just uses trending hashtags to spread its links. It has one follower. If it is a bot, it's a really badly written one, because it makes a couple of mistakes — and in any case, it has no impact. Now we get to the more interesting cases. Eve Hurley is a grandma from Brooklyn who recently quit smoking. On election day she live-tweeted intensively and tried to mobilise people. She is not a bot. Then I have this one here: I assume it's a student from Tokyo who tweets in Japanese. This was the moment where machine translation failed me — I have no idea what he's trying to say, but I believe he mentions Toru Hashimoto, a politician, and Kazuma Ieiri, an internet entrepreneur. He simply used the current trending hashtags and probably had no idea what they were about. Not a bot. The next one, Paolo, can send 13 tweets in 3 seconds, using 3 unknown Italian Twitter services. He gets no favourites or retweets; he is a bot, but with so little influence that he effectively talks about nothing to no one. And this one loves Jesus, the Trump family, WikiLeaks, Fox News and guns, and hates abortion, feminism, Hillary Clinton, CNN and Obamacare — not a bot. Then we have "President Trump", which is a satire account.
It's a satire account; it has only one follower, and it is not a bot. Then we have Marika, a journalist from South Africa who works for the French press agency AFP and live-tweeted election day. Not a bot. Dr. Van Nostrom loves baseball, American football, Fox News and Trump, and hates Hillary, Bill Clinton, CNN, Islam, feminists — everything on the left side of the spectrum. He followed Trump's win live on Twitter and covered election day intensively, but he is not a bot either. And that was number 12 of the 12. To summarise: the few actual bots had no influence to speak of — practically nobody retweeted them or engaged with what they posted. There is a large group of pro-Trump humans; the only pro-Clinton account in the sample was the satire account; in the middle we had a few people who covered the election neutrally; and then there's the Italian with his links to advertising sites. So everyone who actually participated in a political discussion was a human. Applause again? So the study "Bots and Automation over Twitter during the U.S. Election" should really have been called "Highly Active Humans on Twitter during the U.S. Election". What gets labelled as social bots are politically active people, hashtag spam, some very simple bots that can't really be called social, plus media and journalists. Then I tried finding another study that used a different definition: "Social bots distort the 2016 U.S. Presidential election online discussion". They use a service called Bot or Not — Botometer now — to identify whether an account is a social bot. What it does is very similar to what I did at the beginning: it uses all the metadata to characterise an account.
Then it tries to compute a probability for whether an account is a bot or not, and that's the basis for the study. There's a link for this, in case you want to play with it a little. But be careful: you need to authorize your Twitter account, and it also asks for write permission on Twitter. I put a couple of accounts in there just to get an idea of whether we can trust it. The criterion is 50%: everything below 50% counts as human, everything above 50% as a bot. The Federal Ministry of Justice and Consumer Protection scores 55%. The BKA counts as a social bot. As you can see, all of these big ministries come out as social bots, whereas the purchased bots we identified earlier are not bots according to this criterion. Maybe that's just because the accounts we looked at were tweeting in German? No — even the American president would be classified as a bot under these criteria. You can try all of this for yourselves. So, scientifically, none of this is tenable. And if you're not convinced: even Hegelich said he did not expect any social bot interference on election day. Social bots, filter bubbles and fake news — to put it carefully, the research is still at the very beginning. We still need to develop and check our methodologies, and we need to review the scientific publications. So if any of you are doing your own research on this, I would like to ask you to check these scientific studies. In public discourse, we have called people Nazis, then socially detached, and now social bots. But what these people have in common is that they're scared and angry, and they do not trust social institutions. New groups are entering social networks, using their right to free speech, and they even resort to fake news if it aligns with their own opinions.
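The earlier sanity check on the Botometer-style scores can be made concrete: feed in accounts whose true nature you already know, apply the study's 50% cutoff, and count the misclassifications. The scores below are invented to echo the pattern from the talk (ministries above the cutoff, purchased bots below it); only the evaluation logic is the point.

```python
# Sketch of sanity-checking a bot-score cutoff against known accounts.

def classify(scores, threshold=0.5):
    """Apply the study's cutoff: score above threshold means 'bot'."""
    return {acct: ("bot" if s > threshold else "human")
            for acct, s in scores.items()}

def error_count(scores, truth, threshold=0.5):
    """How many accounts of known nature the cutoff gets wrong."""
    labels = classify(scores, threshold)
    return sum(labels[a] != truth[a] for a in truth)

scores = {"ministry": 0.55, "purchased_bot": 0.30, "president": 0.60}
truth = {"ministry": "human", "purchased_bot": "bot", "president": "human"}
print(error_count(scores, truth))  # → 3
```

With this toy ground truth, the 50% cutoff gets every single account wrong — which is the shape of the result the talk reports for the real service.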
Maybe we will someday have a technological solution, but maybe we should also rely on media literacy and develop a culture of debate. This is a new research field; we're watching a new science develop. It may still be very small now, but it's undergoing a fundamental shift towards a science that tries to find rules and laws and to produce reproducible results, like we know from the natural sciences. We saw something similar with astrology, the lore of the stars: then we developed telescopes and could actually observe the stars, and astrology became astronomy. In the same way, the debates that we have in and about social networks could turn into a proper science of social networks. I think this is not only possible but necessary. At the very least we need a lot more research, so I'd like to encourage all of you who aren't sure whether to go into politics and political science or into data science: combine the two. Bachelor's and Master's programs are very well suited to reproducing these studies and trying to validate them. And I would like to support this: I will publish parts of my frameworks and parts of my data over the course of the next two and a half days. You can also reach me at DECT 3600 or on Twitter as Michael Kreil. I would love to connect people who are doing their own research on this and to see if we can't start a scientific debate on this subject.

I definitely want to hear more about that later. We now have a bit over a quarter of an hour for questions and answers. Do you have questions? Please go to the microphones, and I'll try to be fair. Is anyone at a microphone yet? Ah, back there — microphone 4, go ahead.

I have a question about state propaganda — people who are paid by states or governments to spread propaganda. Have you seen that, or have you looked at that?

We've searched for such people.
We looked at whether the accounts we found were being paid. But when you look at, for example, the accounts that have been active from the last months until today, they look like people who really live in the social networks, including Facebook. So it doesn't look like paid people. And the question I'd ask in return is: even if someone pays, what's the impact? Can I actually get people to change their political opinions on Twitter by paying someone to influence them?

Thank you for the great presentation. I find it very important, and I see that we really need to change the kind of methods behind further research. My question is: how does reach actually translate into impact, and how do we know how it affects people?

Definitely. For example, we should find a new word; we shouldn't call everything fake news. We see that a lot of this fake news comes from a town in Eastern Europe, and the people there don't even have a political agenda. They just want people to click, because they get money per click — there's no political goal behind it. They tried pro-Clinton content first, but it didn't catch on, so they switched to Trump because they made more money with it. That's why I think "fake news" is a somewhat misleading term.

We have a question from the internet: it was suggested that it would be interesting to analyse the hashtag #34c3 and to uncover trolls.
It would be interesting to look at #34c3 and see whether there are bots or someone who's just trolling on there. Will we see bot wars or flame bots soon? Is anything known about this situation?

These are all theoretical concepts for now. Reading through the studies, it shouldn't really happen, but I can't stand here and say it won't come to pass. We'll have to see what happens over the next few days. From what we've seen so far, though, chaos could come of it.

Servus. The first hard criterion I could think of to identify bot accounts would be the frequency of posting, but that account was only conspicuous once, and you chose only that one criterion. Why? What about the other accounts?

So, if I'm developing a social bot and it sends a tweet every half a second... In this case it only happened once that this bot behaved like that, and I would consider it a bug, because if I were developing something like that, I would make sure the bot doesn't publish tweets that often.

The next question is: does that mean that the social bots have actually lost their influence, and is that connected to filter bubbles? And have you seen in your research whether it propagates further?

For one thing, there's a sentence we've heard again and again over the last ten years: it's not the internet's fault, the internet just makes things visible. And I think there are a lot of people who would like to participate in political discourse; they couldn't before, but now they actually can.

Is there a danger... The question is being repeated: are the social bots not as much of a danger as is said, given that you've shown that most accounts are not social bots and that the social bots don't actually participate in the discourse?
The answer is that I can't really say, because we don't really know where this is going at this point. This is still experimental technology, and the world-wide network, the internet, is itself still kind of experimental technology, so we don't know what will happen.

Hello. First of all, a compliment for this really important presentation. My question, since you were talking about Twitter: how do you actually measure the impact, how far the reach goes?

When a tweet is published by someone who has 10,000 followers, that's obviously more important than one from someone with a single follower. You can think of metrics to figure that out; usually I don't look at tweets that aren't retweeted at all, but I don't know if there's actually a good metric you could use to weigh the different numbers of followers. The other thing is other social networks. What you see on YouTube is that a lot of people record their opinions there, and there's a Russian network as well, called VKontakte, and I know someone who actually looked at that. But the big thing people look at is Facebook, and Facebook has two problems. First, there are a lot of closed groups where everything happens; I looked into them, and it wasn't very different from the public stuff. The second problem is that the API is not very public. I think Facebook actually has a lot of data material to do this research, but unfortunately it's all within the company, and they don't share it.

A question from the internet: are there studies about the effect of fake news on people, on the legitimation of the networks and the impact of the fake news?
From my experience, I think they don't really have much influence, because people go into these networks with their opinions already set, and then they just get them reinforced. I think there's a little bit of an influence on people who already have some latent tendencies; they go into this filter bubble, find all these other people with the same opinions, and I think that, for example, crimes against refugees are partly motivated by this.

Okay, thank you for the presentation. You had this first nice picture that showed that the AfD political party was connected to the fake news. How do you know that these tweets were actually fake news?

What I did is I chose stories that I knew were fake news, and then I looked at who shared them on Twitter. That's the direction I approached it from.

I have one more question. I've often heard journalists complain that bots flood them with replies, spam them and so on. Have you been able to observe something like that?

The answer is that I'm not saying that everyone who is on Twitter is actually a real person. You can also use bots to push a hashtag. I know that some of the accounts get confronted; people who are very active on Twitter get replies from lots and lots of accounts. But I don't know whether those are bots or whether it's the same people over and over again.
And it's quite difficult to generate many different tweets with anything like real intelligence.

A question about one of your criteria: how did you measure the attention?

I looked at the retweets and the tweets about the news stories, and categorized them manually, so to speak.

Did the retweets and the number of tweets play a role in the study?

In my work I've seen that there are some people who say, okay, I don't want to retweet this, so instead they copy the tweet, and that's why the same text gets posted again and again.

Microphone 2, please.

Yes, hello, and thank you again for the presentation. A question about the topic of the travel warning for Sweden: how much work did it take to find out which tweets were about this, and how did you find them?

We looked at how many accounts had actually deleted their tweets, and later checked which ones had been deleted in the meantime. I'm not sure that I found all of the tweets about this, but I did look for everything that was related to traveling to Sweden and everything connected to it, and so we looked further and further into all of the related topics. In the end I had 1,800 tweets and then categorized them all by hand as to whether they were fake news. So that was my method: half automated, half done by hand, I would say.

Microphone 1, please.

Now, I have one short question and one plea for further data: how much of your raw data can you publish?
And second, about your example of the Twitter timeline with 2,000 tweets: "I have more unread mails in my inbox" is a figure you often see in this debate, and I have more than 2,000 unread tweets in my timeline.

There is actually a guideline from Twitter; it's not possible to publish that many tweets, more than 10,000. I'd have to look at it again more closely. "Dehydrated tweets" is the name of this concept: you publish only the tweet IDs, and everyone can download the tweets themselves. We can look through the data either way; the new data should help us, and we'll have to see over the next few days.

As for these 2,000 accounts, whether they have an impact or not: is that too little in context, or is it a large amount?

I think 2,000 followers isn't that much when we're talking about a population of millions of people. That's the classic argument. If you look at Facebook, the impact is a small part of the followers. But in the social networks, and among journalists, they do have influence. As for the Twitter discussion: Twitter itself is actually not that politically relevant.

So, last question. First, briefly, thanks, because your project pretty much saved me in my studies. What I noticed, especially with this troll army, was that the accounts often have the same profile picture and usernames with consecutive numbers. We would then have to talk to those people. That's dangerous.
That's dangerous too.
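The "dehydrated tweets" idea mentioned in the Q&A (publishing only tweet IDs so that others can re-download the full tweets from the API themselves) can be sketched roughly like this. This is a minimal illustration, not the speaker's actual code; the 100-IDs-per-request batch size follows the classic Twitter v1.1 `statuses/lookup` bulk endpoint and is an assumption here.

```python
def dehydrate(tweets):
    """Reduce full tweet objects to their IDs, which is all you republish."""
    return [t["id_str"] for t in tweets]

def batched(ids, size=100):
    """Split a list of tweet IDs into request-sized chunks for bulk lookup."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

# Usage sketch with dummy tweet objects: 250 IDs need 3 lookup requests.
tweets = [{"id_str": str(1000 + n), "text": "..."} for n in range(250)]
ids = dehydrate(tweets)
print(len(batched(ids)))  # 3
```

Anyone reproducing such a study would then send each chunk of IDs to the API and rebuild ("rehydrate") the dataset, minus whatever has been deleted in the meantime.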