Thank goodness Fil Menczer is here, a professor of informatics at Indiana University who was recently awarded a $2 million grant to build systems that can detect online persuasion campaigns.

Alright, hi. I'm Fil Menczer from Indiana University, and I'm here to show you a few examples of the memes we've uncovered with a system we have online. Please go and play with it: it's truthy.indiana.edu, and Michael Conover and Bruno Gonçalves, who are sitting over there, are among its developers. It's a website where we track memes coming out of Twitter, and we try to spot signatures based on the networks of, basically, who retweets what and who mentions whom.

You can look at lots of different memes. For example, here's #GOP, which I want to highlight because it brings up the issue of echo chambers that came up earlier. In a lot of political memes we clearly observe this clustered, polarized structure, where people tend to retweet only other people they agree with. That's an interesting theme, but it's not the theme of today.

So let me show you a handful of examples. Here's one that Mike pointed out to me. It's not in the realm of politics; I would put it more in the spam category. It's just a bunch of accounts that promote a particular club, I think in Atlanta. The dots are the accounts and the orange edges are the mentions. So there's a bunch of bot accounts that keep tweeting about events and mentioning other people, some of whom eventually retweet; that's how they generate buzz. This is one kind of pattern we observe.

Now let me show you a few patterns we observed in the run-up to the last election, in 2010. Definitely a "truthy" thing. This one looks very different.
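The retweet-network idea can be sketched in a few lines of Python. This is an illustrative toy, not the actual Truthy code, and the tweet record fields (`user`, `retweet_of`) are made up for the example:

```python
from collections import Counter

def build_retweet_network(tweets):
    # Count how often each account retweets each other account,
    # producing a weighted, directed edge list: (retweeter, author) -> count.
    edges = Counter()
    for t in tweets:
        if t.get("retweet_of"):
            edges[(t["user"], t["retweet_of"])] += 1
    return edges

# Toy data: two accounts retweeting each other repeatedly, plus a bystander.
tweets = (
    [{"user": "botA", "retweet_of": "botB"}] * 5
    + [{"user": "botB", "retweet_of": "botA"}] * 4
    + [{"user": "alice", "retweet_of": "botA"}]
)
net = build_retweet_network(tweets)
print(net[("botA", "botB")])  # 5 — one very "thick" edge
```

On real data, the edge weights are what make bot pairs and broadcast hubs stand out visually in the network layouts shown here.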
It's just two accounts, and that blue thing is one very thick edge between them, which means these two accounts kept retweeting each other. They were bot accounts. They generated tens of thousands of tweets, all of them promoting one particular candidate and pointing to that person's blog, articles in the press, website, et cetera.

Here's another one that is not quite as hugely successful, but I think it's very scary, and it keeps going. It's #ampat, a tag used to post material that is very scary, very graphic movies about beheadings, and to promote the idea that Obama is pretty much trying to push sharia law in the United States, and things of that sort. Very interesting.

Here's another example: a bunch of bot accounts that all revolve around this "Freedomist" account. Freedomist.com is a website that posts fake news, and it was, and still is, extremely active in posting all sorts of things. It happens to be another right-wing example, though we found a few left-wing examples too, if you're interested. During the last elections it was very active in smearing certain candidates and promoting others. There were about ten bot accounts, or accounts controlled by one person, who is also the person who owns and manages the website, and of course all of them would retweet each other.

And there was an interesting pattern: they would all, at the same time, target one influential user, hoping to get that person to believe the message because it looked like it was coming from different sources. If that person retweeted it, there was a chance of creating a cascade. They got around Twitter's spam detection by adding random characters at the end of bit.ly-shortened URLs, so that the links looked like different URLs, when in fact they all pointed to the same sources. So this was very effective.
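The random-suffix trick can be defeated by canonicalizing links before comparing them. A minimal sketch, with made-up URLs: the `resolve` dict stands in for actually following the short link's redirect (a real system would do an HTTP lookup), and stripping the query string collapses the junk suffixes:

```python
from urllib.parse import urlsplit

def canonical_url(url, resolve):
    # Drop the junk query string, then follow the (simulated) redirect
    # to the final target page. `resolve` is a plain dict standing in
    # for a real HTTP redirect lookup.
    p = urlsplit(url)
    bare = f"{p.scheme}://{p.netloc}{p.path}"
    return resolve.get(bare, bare)

# Toy resolver: the short link redirects to one real page.
resolve = {"http://bit.ly/xYz1": "http://example.com/post"}

# Three "different-looking" links, per the random-suffix trick.
variants = [
    "http://bit.ly/xYz1?a7",
    "http://bit.ly/xYz1?k02",
    "http://bit.ly/xYz1?zq",
]
targets = {canonical_url(u, resolve) for u in variants}
print(targets)  # {'http://example.com/post'} — duplicates after all
```

Once the links collapse to a single target, the "many independent sources" illusion disappears and the accounts look like exactly what they are: one coordinated campaign.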
A reporter actually contacted this guy, and he fully admitted it: yes, of course I'm doing it, everybody's doing it. He's a Republican activist in Pennsylvania, and he's still there. Some of those accounts have been shut down by Twitter, but several have not. So he's still doing it.

The last example is more current: the hashtag #obamacare, one of those used quite actively. It turns out that, with this new tool we just released, a sort of interactive visualization, you can see which users are most influential, who is retweeted the most, and what the patterns of propagation are. It's a little hard to see here because you can't see all the edges on this monitor, but you can explore, drill down, and play a little with the data. That particular account there, the most active one on the Obamacare meme, happens to be the Heritage Foundation. We have a few additional tools that let you see what other memes a particular account is actively promoting or discussing. We also try to automatically detect language, do sentiment analysis, and a few other things. So you're welcome to play with it.

These are just a handful of examples to get us discussing. Of course, the fundamental question is: can we detect these campaigns early? As we've seen from the previous speaker, and as Takis Metaxas has shown in his work, if you can go in afterwards and have the time to do some real legwork, you can perhaps find out that there was this group behind this ad, that it was a paid consultant, or this corporation, or that particular organization. But by that time, very often the damage is already done, as Takis has shown in his work. So the trick is: can we detect it early, before a lot of damage is done? That's what we're trying to do, but we're just at the beginning of that.
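Early detection comes down to finding statistical signatures before a meme goes viral. As a minimal illustration (a made-up toy score, not Truthy's actual classifier), one such signature is how concentrated a meme's tweets are in a few accounts: organic memes tend to be spread thinly across many users, while astroturfed ones are dominated by a handful of bots:

```python
def concentration(user_counts):
    # Fraction of a meme's tweets coming from its single most active
    # account — one crude signal among the many a real detector would use.
    total = sum(user_counts.values())
    return max(user_counts.values()) / total

# Toy comparison: an organic meme spread across 50 users,
# versus an astroturfed one dominated by two bot accounts.
organic = {f"user{i}": 1 for i in range(50)}
astroturf = {"bot1": 40, "bot2": 38, "bystander": 2}
print(round(concentration(organic), 2))    # 0.02
print(round(concentration(astroturf), 2))  # 0.5
```

A real system combines many features like this, computed on the stream as tweets arrive, which is what makes detection possible before the cascade takes off.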