Well, welcome everybody. My name is Jonathan Zittrain. I am so pleased to see a full room, even after classes are over and exams are done with. The power of research must continue, and it does wonderfully today with our guest, Renée DiResta. Renée is going to talk about the ways in which all of us may now be familiar with Twitter and Facebook as propagandistic battle zones. But what about Amazon? Why leave them out? Renée has a lot to tell us on that front and has been doing a lot of research that is, I think it's fair to say, intrinsically motivated. Renée had it pretty good as somebody who was an investor, investing other people's money and then, I gather, her own; a product maker; somebody very familiar with the worlds of business; and then she was bitten, or newly activated, by the public interest bug, worrying about the state of our technology ecosystem. Renée, what else should we know about your background before you start? I spent a long time on Wall Street. Spent a long time on Wall Street; that was the banking and investing part I was mentioning. Oh, no, I was a trader on Wall Street. And then I was a VC with Tim O'Reilly, so it was a great transition from Wall Street to tech with a person who was really passionate about ethics. Yes, that's very good. Wonderful. A few other quick announcements of the housekeeping variety before we start. This event is being webcast live to at least any number of bots, but also potentially to humans. It will be recorded for posterity at the Berkman Klein Center website, so we are all in the Panopticon together. You can follow us online with the hashtag #BKCHarvard, as in Berkman Klein Center Harvard. And somebody will be keeping an eye on that, so if there are good questions that come up under that hashtag, we can inject them into the conversation later.
And next week will be our final luncheon of the academic year, featuring Jessica Fjeld and Mason Kortz on the topic, quote, art that imitates art: computational creativity and creative contracting. So here, for a slightly less alliterative presentation title, is Renée DiResta. Thank you, Renée. All right, thanks everybody. Really excited to be here today. I landed this morning and promptly started coughing and getting watery eyes from your beautiful weather, which is 20 degrees warmer than the Bay right now. So sorry about that. But looking forward to chatting with you. I switched into doing disinformation research full time in January of this year. My affiliations: a company called New Knowledge, which is a startup that detects and mitigates misinformation and disinformation as it targets corporations, and which also has an election integrity practice. I have a Mozilla Foundation fellowship, affiliated with Berkman as well, to look at media, misinformation, and trust. And then we have a nonprofit called Data for Democracy. For the last few years I've been doing public work, informing the public and informing lawmakers about the kinds of problems I'm going to describe today. We've been doing a lot of work to help educate policymakers about what's happening here, thinking about where responsibility lies and how we want to think about our platforms and our social ecosystem moving forward. But what I'm going to talk about right now is actually Amazon. I think most people in here are probably familiar with narrative manipulation issues. Broadly speaking, some of the ones we talk about a lot in the context of Twitter and Facebook and YouTube are things like manufactured consensus: the idea that a large chorus of voices, through the use of strategic automation, can create the appearance that an opinion is held by a majority.
We talk about brigading, where groups of both real and fake accounts act in chorus to harass a person, to push a point of view, or, in what I'm going to talk about here, to leave reviews. We talk about information laundering, fake accounts, and news voids: when you go searching for something and there's not a whole lot of information about it, so what Google or any other search engine returns is of dubious quality. Ways to manipulate a narrative is how I would summarize it. One of the things we've been looking at is how these tactics, which we've seen used to shape opinion and manufacture consensus on social platforms, are also being deployed on Amazon. That has skated under the radar in part because Amazon, while massive, is not really seen as a social platform, but there's actually quite a bit of social activity that happens on Amazon, just with the goal of facilitating commerce. The research into this actually started by accident. What you see over here is an Alibaba listing, and over here is the identical product up on Amazon. The process of looking into Amazon manipulation started when I was curious about how sellers on Alibaba were beginning to leverage Amazon's brand and Amazon's logistics channel, Prime, to take effectively the same product that people were not particularly receptive to on Alibaba or AliExpress, because those sites have a reputation for being low quality and slow to ship, and you can't get a refund. Most people don't go shopping on Alibaba or AliExpress despite those companies' best efforts, but they do in fact go shopping on Amazon. So we started looking at electronics and apparel in particular, asking where the arbitrage was. It's not even just wholesale to retail; it's actually the exact same seller selling under multiple names to try to achieve the greatest reach possible on Amazon.
One thing you'll notice about this is that 6,674 people supposedly bought that dress. Now, one thing I don't have up here: if you search on Google for vintage tea dress, that Amazon listing is usually second or third in the search results. So even if you go to Google to find a vintage tea dress, this is what it's going to give you. We started looking at the idea that Amazon was the next SEO battleground, that it had effectively become the product search engine, and that Google was actually reinforcing the results. So there was a second-order effect: if you could game Amazon search results and nail the plum spot for something like vintage tea dress, you would then also have great success on Google. We were looking at how this type of activity was facilitated, recognizing that it's the number of stars that's the greatest predictor of where you're going to land in Amazon's results. So if you can game the Amazon results, you've effectively also gotten yourself some positive juice on Google. $259,000 is the amount of money Amazon will log in about one minute, the one minute it'll take you to read this and listen to me. In just one day it'll log more than $372 million, roughly 10x the median annual sales of the thousand largest online retailers in North America. So gaming Amazon, getting to the top of Amazon search results and then subsequently Google, is a phenomenally lucrative practice. There's a great deal of economic incentive underlying this behavior.
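Amazon's actual ranking algorithm is proprietary, so as a rough, purely illustrative sketch of why star counts and review volume matter so much, here is a common textbook approach to review-based ranking (the lower bound of the Wilson score interval); nothing here is confirmed to be what Amazon does:

```python
from math import sqrt

def wilson_lower_bound(positive, total, z=1.96):
    """Lower bound of the Wilson score interval for the fraction of
    positive (say, 4-5 star) reviews. Ranking by this value means many
    good reviews beat a perfect score on a handful of reviews."""
    if total == 0:
        return 0.0
    phat = positive / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * sqrt((phat * (1 - phat) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

# A product with 6,000 mostly-positive reviews outranks one with three
# perfect reviews, which is exactly what makes bought reviews valuable.
print(wilson_lower_bound(5400, 6000))  # many reviews, 90% positive
print(wilson_lower_bound(3, 3))        # few reviews, 100% positive
```

Under any scoring rule with this shape, a seller who can cheaply manufacture thousands of positive reviews buys placement directly.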
Now, as I said, the types of manipulations we see on social platforms, the ways people game the marketplace of ideas, get ideas to the top, get hashtags trending, happen through coordination on social platforms and through cross-site manipulation. The operation is planned on one site, maybe Discord or 4chan or somewhere else. Then it goes to a content production site, where you'll host your content on Imgur or YouTube or wherever else. Then you'll push it out to Facebook and Twitter, aiming to reach mainstream audiences, and ultimately you'll get pickup into a mainstream channel. Those are the narrative manipulation tactics we see over and over again, and we see the same facets, the same pieces, being used to game Amazon results, and in turn the Amazon recommendation engine and Google search. So, taking you all back in time. Thank you, I'm glad somebody laughed at this; every now and then people look at me and think I'm nuts. How many people in this room know the story of the Tuscan milk jug? Ah, it's amazing. This is seminal meme culture, you guys. You'll notice 1,679 customer reviews and 122 answered questions about this Amazon milk jug. This is one of the original lulz-y review manipulations. It is still in fact manipulation, but it was manipulation done for the lulz. When Amazon listed this product about 10 years ago, this was before people bought their groceries online, so the idea that you would list a gallon of milk on Amazon and people would buy it was a foreign thing in 2004. And so there grew this culture of asking questions or leaving reviews. This one is from 2008. By the way, this guy goes on for about 10 more stanzas; I couldn't excerpt it all here.
But I mean, it's a really funny piece of internet culture, right? The idea that people would go and leave these either five-star or one-star reviews for this milk jug, telling a funny story. It became a cultural phenomenon for a while. There's actually a whole subreddit where people share funny Amazon reviews. But what I want to get into here is this. This is current, all from today. You can search for Tuscan milk jug, and I encourage you to, because it's a fantastic collection of reviews up there. You'll notice that Amazon is remarkable in its ability to take recommender systems and parse the data in many, many different ways, which we'll go into as I go on. Notice the distinction between "customers who bought this item also bought," which is the people who are actually grocery shopping, right? And "customers who viewed this item also viewed," which is the people who do know the story of the Tuscan milk jug and do what I did, and that's where you see things like uranium ore, which has 1,500 reviews, and How to Avoid Huge Ships, which has 1,400 reviews. And then there are some fantastic reviews on the Wheelmate steering wheel attachable work surface tray, a tray that attaches to the steering wheel of your car so you have a table in your car. What you should notice there, and I feel like most people in here have a general sense of recommender systems, is the difference between content-based filtering, where the system says you liked this content, here are related topical pieces of content, versus collaborative filtering, which says people who like this content are similar to you in the following ways, ergo the Venn-diagram overlap of your behavior and their behavior suggests you might like this topic also, even if you've never searched for it.
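The "customers who viewed this item also viewed" behavior described above can be sketched as simple item co-occurrence over per-user histories. This is a toy illustration of collaborative filtering, not Amazon's actual algorithm, and the product names and histories are invented:

```python
from collections import defaultdict
from itertools import combinations

# Invented per-user view histories, echoing the examples in the talk.
histories = {
    "alice": {"tuscan_milk", "uranium_ore", "huge_ships_book"},
    "bob":   {"tuscan_milk", "uranium_ore", "steering_wheel_tray"},
    "carol": {"tuscan_milk", "bread", "cheese"},
}

# Count how often two products appear in the same user's history.
co_views = defaultdict(int)
for items in histories.values():
    for a, b in combinations(sorted(items), 2):
        co_views[(a, b)] += 1

def also_viewed(product, k=3):
    """Products most often viewed by the same people who viewed `product`."""
    scores = defaultdict(int)
    for (a, b), n in co_views.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(also_viewed("tuscan_milk"))
```

The point of the toy: the system has no notion of why viewers overlap, so ironic meme traffic and sincere shopping are indistinguishable to it.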
So that's what you're seeing illustrated up there. There's another thing Amazon does, which will come up later: frequently bought together. In the context of milk, you'll see milk, bread, and cheese. Occasionally that will surface some really interesting things; fertilizer, for a while, was showing ball bearings and switches. So there are occasionally these moments when the recommender system surfaces something that is of interest as an indicator of behavior, of what people might actually be looking for, as opposed to searching for fertilizer and getting flower pots and things that would be considered a bit more normal. Amazon has recently expanded into ads as well, which is another way to monetize: if you go search for that milk, people can sponsor products related to that milk. This is Amazon acting as an advertising network for these products. So as Amazon's platform gets more crowded and its marketplace launches, the reviews become much more important. There are ads sold in the context of the products as well, and the recommender system is using things like stars to rank the content. This is the system as it evolved from the Tuscan milk jug in 2003 or so to where we are in 2018. So what happened? Well, anytime you have a phenomenally lucrative system, there are going to be people out there to game it. There are a couple of companies that track fake reviews; Fakespot and ReviewMeta are two that I'll allude to here, but there are others. They began to parse Amazon reviews en masse. For a while, you could incentivize someone to go review your product on Amazon, like that dress.
You could send that dress out for free, and people would leave a five-star review and say at the bottom, this review was incentivized, I received the product at a discount. What ReviewMeta and Fakespot started looking at was how the star rankings compared if you extracted the reviews that had been incentivized, and what the distribution of reviews looked like after that. What they found was that incentivized reviews were often a star to a star and a half higher than non-incentivized reviews. So people were seeing these reviews, feeling inclined to buy the product, getting the product, and it was not as expected. It was not the quality indicated by 6,000 positive reviews, for example, and they were of course disappointed. This became a brand issue for Amazon, as people began to feel that the star ratings were manipulated. So in October 2016 Amazon banned this behavior and said no more incentivized reviews. That, of course, did not eliminate incentivized reviews. They moved. The way I came across this was, again, I was looking for something related to Amazon on Facebook, and Facebook's recommendation engine served up an Amazon review group to me. With recommendation engines, once you've clicked on one thing, that is all you are going to see for a while, and so all of a sudden the Facebook recommended-groups algorithm was serving me Amazon reviewer groups. I said, all right, let's go see what Amazon reviewer groups look like. And I discovered that there were literally hundreds of groups with tens of thousands of members, organized geographically: Amazon Reviewers Spain, Amazon Reviewers Italy, for effectively mass manipulation of Amazon reviews.
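The comparison ReviewMeta and Fakespot ran can be sketched in a few lines: strip out the reviews flagged as incentivized and see how the average rating shifts. The review data below is invented, and the real services use far richer signals than a self-disclosed flag:

```python
from statistics import mean

# Invented reviews; "incentivized" stands in for whatever flags the
# fake-review trackers actually use to classify a review.
reviews = [
    {"stars": 5, "incentivized": True},
    {"stars": 5, "incentivized": True},
    {"stars": 4, "incentivized": True},
    {"stars": 3, "incentivized": False},
    {"stars": 2, "incentivized": False},
]

def adjusted_rating(reviews):
    """Average stars over organic (non-incentivized) reviews only."""
    organic = [r["stars"] for r in reviews if not r["incentivized"]]
    return mean(organic) if organic else None

raw = mean(r["stars"] for r in reviews)  # 3.8 stars
adj = adjusted_rating(reviews)           # 2.5 stars
print(f"raw {raw:.1f} vs adjusted {adj:.1f}")
```

In this toy data the gap is over a star, which is the shape of the effect the trackers reported.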
It's important to note that when there were people on Fiverr and some of the smaller gig marketplace companies selling Amazon reviews, Amazon's response was to go and sue them. Amazon tracked down the individuals who were selling reviews; it could not, of course, sue the platform, since CDA 230 indemnifies platforms against bearing that responsibility. But Amazon did start pursuing the individual people who were brokering the reviews. So Amazon takes this seriously to an extent, but of course Amazon is not looking for evidence of mass manipulation of its reviews on Facebook, which is where it's happening. So, just to take you inside one of these groups of thousands and thousands of members: the first thing you notice is that the seller accounts are mostly fake. Most of what I saw in these groups was Chinese sellers selling the products that were on Alibaba. As I mentioned, there was that initial suspicion that not everything was on the up and up in the porting from one platform to another. The discoverability challenges on Amazon are of course vast, because there are so many products on it, so this is a way for sellers of relatively brand-free, low-quality products to gain that initial audience. So this is Jovita. Jovita is not redacted because Jovita is not real. This is Jovita's profile, with a stock photo of Jovita. This is the kind of thing you see time and time again. And this is what one of the initial posts looks like. When you join these groups, there are hundreds and hundreds of posts; it moves so fast, people posting products constantly, just throwing them in there. There'll be subgroups for kids' clothes, for electronics, and so on.
Ultimately you'll see, you know, Jovita has both a bubble machine and kids' headphones that she wants reviews for. If you're interested, you leave a comment right there. This person, a real person, wants the bubble machine. So what happens then? They tell you, we're going to move to PM, and we're going to have this conversation privately so that there's no evidence of it in the Facebook group. Here is my private conversation with an individual who had some headphones to sell. Well, he's selling the headphones; I'm getting them for free. He had a couple of different variants, and I said I wanted the second one. They really want to find people with old Amazon accounts. I've been on Amazon since it started; I've had a Prime account for 13, 15 years, whatever. So I am highly desirable. What happens is they send you these purchase steps. Each of these sellers has a different way of trying to game the system, so you can see the relative sophistication here. I had to add the product to a wish list. Actually, sorry, this guy isn't the one who asked me for the wish list. No, no, he did ask me to add the product to the wish list. Sometimes they want you to add the product to the wish list, wait three days, and add other products like it to the wish list, so that Amazon thinks you're a real person shopping for headphones who thought about it for a couple of days and then went and bought them. Don't use an Amazon gift card, because for a while the sellers were incentivizing with Amazon gift cards and Amazon started to police that more heavily. So this is the arms race happening between the sophisticated sellers who are motivated to game their reviews and the groups of 70 to 100,000 people who are willing to go through this process to get some stuff for free.
Tons and tons of activity on both sides. The sellers are mostly fake, and the buyers are very much real. Here's an example. Once you indicate interest once, and again, Lily is not a real person, so she's not redacted, once you get into one of these groups, this person just messaged me for 10 straight days. This is a subset, by the way. It was just this steady stream of, which of these two garbage products do you want? Over and over and over again. I believe the sellers communicate behind the scenes, because once you have successfully completed a transaction with one and left the review, all of a sudden the inbound is just intense. I was in maybe seven or eight of these groups for a while. So what happened? We spent a couple of months observing the activity and then looking at what was happening on ReviewMeta for the categories we were getting. I got some apparel, some electronics; I wanted to go through the process a few times. Some of them incentivize you with extra money on top of the free product, so they are in fact paying for the review. Some of them just give you the product for free. The money is transmitted back to your account via PayPal. Sometimes it's after you leave the review; sometimes, to avoid the appearance of an incentivized review, they refund you as soon as you place the order, so that, technically, you had not yet reviewed when the money came back. But if you do not review promptly, you get a litany of messages, and then they will tell the review groups that you are a bad person, and you'll get kicked out of the review groups. So it's a fairly complex social contract. Also, once you're in some of the closed groups, if you have successfully completed a few transactions, you're invited into the secret groups.
These are the ones that do not say "review," so they're entirely off the radar of a search for something like "Amazon reviews," which is what Facebook or others could do to detect this kind of activity. And so you're in the secret club. That's how that works. So what happened? I did a bunch of write-ups on what we were seeing and how the manipulation was being carried out, found some of this activity on Reddit and in other places, and the Washington Post was interested in understanding systemic manipulation on Amazon and its subsequent effects on Google. To their credit, and Jeff Bezos is of course their owner, they did give Amazon a couple of days' heads up for comment. All of a sudden we knew Amazon had taken action, because the buyers in the groups, the people getting the free stuff, started complaining that their accounts had been deleted, that Amazon had wiped all of their reviews, and sellers had their accounts shut down. So Amazon did in fact take action, and then an article appeared in Business Insider saying that Amazon had done this mass purge several days before the Washington Post article came out. The other thing we started to see, and this happens for Walmart too, because Walmart has a marketplace, is the groups changing their names. The word "reviews" was phased out and "deals" came in, because Facebook also took action when notified that this violates its terms of service around manipulative activity in groups. Facebook started to go through and cull and delete groups, and the ones that did not get culled the first time around changed their names to "deals." So this activity continues. One of the big challenges with social web manipulation is that no one is really in charge.
It's not clear how or who is ultimately responsible for these things, so we find these instances of manipulation, communicate them back to the companies, and then rely on the companies to take the appropriate action. There's a person saying he's done $16,000 worth of merchandise over the last year. Another thing WaPo did was really dig into filtering out questionable reviews, so you can see the impact, because this is in fact customer manipulation. This is not something that Amazon or Facebook or anyone else should be tolerating, because it is highly manipulative activity: manufacturing consensus about product categories through brigading and coordination in secret groups, with money coming back to people through PayPal. But I want to talk about one other aspect of this that isn't yet as widely understood, which we're starting to look into now, and that is the interesting ideological component. The economic piece is interesting to me in and of itself, but I'm going to walk you through this really briefly, because I want to make sure we have time for questions: some of what we see on the ideological front. There are two interesting things that come up here. This is my first-page search results for the word cancer. You'll notice two things. One: best seller in oncology, three reviews. That's the same type of thing we see where Amazon will pop something to the top of search results because it is a best seller or trending in a category, even if the category is remarkably thin. Much like news voids, where there's not a whole lot of content for a search and the engine serves up the most recent thing, this is a cancer quackery book being served as the number one best seller in oncology. It's a book advocating juice fasts: what you need to be doing for your cancer is drinking more juice. The Truth About Cancer is a web series, and it's promoted on Facebook through affiliate links.
It's also up here on Amazon; it's got a mini documentary series and also a book. This also is a cancer quackery product, and it's number two or three on the page when you search. Again, what you need to know about cancer, according to this, is that the government has been keeping the cure secret from you, juice is the way to go, and chemotherapy will kill you. So how does this happen? Number one is an example of a news void. Number two is an example of brigading, where you have 1,700 people. The call for reviews goes out on Facebook into conspiratorial communities that have tens of thousands of members, and those people all go and leave positive reviews. Here's another example of that, a slightly more notorious and long-term one. You'll notice the documentary Vaxxed, Andrew Wakefield's documentary. Mr. Wakefield is the person who claimed that the MMR vaccine causes autism, and even 13 years out, that claim still will not die. Vaxxed was his documentary that was kicked out of the Tribeca Film Festival. It turned into a bit of a media circus, no theater picked it up, and so they were going direct to consumer. And of course, when you go direct to consumer, you want to be at the top of Amazon search results. So what they did was put out these calls in Facebook groups asking for positive reviews. This is a community of conspiratorial diehards, the activist truther community. There are tens of thousands of members across dozens and dozens of groups for the anti-vaccine movement, and the call to leave positive reviews went out as sort of a noble thing you can do for the cause. What you see here is a release date of September 13th. Here we are on August 23rd, with a 627,000% jump in activity and 1,200 positive reviews.
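A spike like that, hundreds of coordinated reviews landing before release day against a trickle of organic ones, is exactly the kind of pattern a simple anomaly check can surface. This is a minimal sketch, with an invented daily-count series and illustrative thresholds, not any platform's actual detection system:

```python
from statistics import mean, stdev

def velocity_spike(daily_counts, window=14, threshold=5.0):
    """Flag days whose review count sits far above the trailing
    baseline. `window` and `threshold` are illustrative choices."""
    flags = []
    for i in range(window, len(daily_counts)):
        base = daily_counts[i - window:i]
        mu, sigma = mean(base), stdev(base)
        # Floor sigma at 1 so a flat baseline still allows detection.
        if daily_counts[i] > mu + threshold * max(sigma, 1.0):
            flags.append(i)
    return flags

# Two weeks of organic trickle, then a coordinated brigade on day 14.
counts = [2, 1, 3, 2, 0, 1, 2, 3, 1, 2, 2, 1, 3, 2, 180]
print(velocity_spike(counts))  # -> [14]
```

The hard part, as the talk makes clear, isn't spotting the spike; it's that the platform has to be looking for it in the first place.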
And up there, that was from a hashtag like #OpVaxxed, because they think of it as an operation to win; you're coordinating to get your thing up to the top. So what winds up happening is Amazon did detect brigading here, and in addition to leaving a positive review, the call went out to downvote all negative reviews. This means that when Amazon surfaces which reviews are the most helpful, the 1,200 people leaving positive reviews are also simultaneously deprecating any negative reviews. You have this asymmetry-of-passion factor: there aren't very many diehard truther communities on, you know, "truther" is not even the right word to use for the very legitimate science of vaccination. So what can you do? The only thing you can do is put out a call to counter-brigade. This kind of activity happens notoriously in political book releases also. So what Amazon does is block the ability to leave a review without a purchase. The interesting side effect of that, though, is that the only people willing to pay the money to buy the thing are the true believers. Most people are not willing to shell out; I think it was $14.95 or $24.95 when it was released. Another thing you'll see is that occasionally the price will dip, sometimes even to free, and then they would put out a secondary call for people to go and review while the price was free, so that even their own supporters didn't actually have to pay for it. The net effect of this is that shortly thereafter, Amazon launches Prime streaming video and puts its top trending movies front and center as a promotion for Prime trending video. Which means that by gaming it, you're not only getting the search results; at this point you're also getting free amplification from Amazon, which says, oh, well, this is our number one documentary.
So of course we're going to include that in our promo when we put this out there. People did, by the way, email Amazon; I think the AAP and a couple of other health organizations wrote a joint letter saying, hey, you might want to look at this, and the response was crickets. Again, if you run it through Fakespot or ReviewMeta, what you see, as of this morning, is that it's up to 3,664 reviews, of which ReviewMeta believes probably 741 are legitimate. The adjusted rating is probably not that different, because again, you'd have to pay money to buy the thing to leave the review. But you can see the difference between a highly coordinated review bomb, a 600% increase in review-leaving, versus a much more organic pattern of reviews. So what happens to my recommendation engine when I look at this stuff? You spend two seconds grabbing some screenshots before a talk, and this was my recommendation engine this morning. Again, it's the conspiracy correlation matrix. This is the same thing we see on Facebook, the same thing we see anywhere on the web that uses collaborative filtering. YouTube is notorious for this.
It's the idea that I have indicated to the system, I have trained it and told it, that I am receptive to this type of content; I go looking for this type of content; ergo it shows me more. And here again is an example of collaborative filtering. The anti-vaccine movement used to be kind of ultra-lefty. That's really changed over the past two years; it's become much more both fringes, far left and far right, and so what we see is Dinesh D'Souza movies and various other anti-government, the-government-is-out-to-get-you conspiratorial content, surfaced through that collaborative filtering. On Facebook you see anti-vaccine group members getting referred into Pizzagate, and that's again collaborative filtering: there's a sufficient overlap between people who are receptive to one type of content and people similar to them who are receptive to another type, which is how you get traditionally far-left communities being referred content and conspiracies we typically think of as far right. So everything we're seeing on social platforms, the same types of activities, the same types of problematic results, we're seeing on Amazon as well. And here's one more thing I'll show you: the frequently-bought-together problem. If you search for autism, there's this extremist treatment called MMS, which is basically a bleach enema. It's deeply harmful, it's an extremely fringe thing, and no reputable medical provider would recommend it. On Amazon, if you search for the book, it not only gives you the book, it gives you the products to buy to take home and do it yourself.
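The "frequently bought together" surface described above is, at its core, co-purchase counting over orders; the system will happily pair a fringe book with the supplies to act on it, because co-occurrence is all it sees. A toy sketch with invented orders (again, not Amazon's real pipeline):

```python
from collections import Counter
from itertools import combinations

# Invented orders, echoing the talk's examples of expected and
# surprising co-purchases.
orders = [
    {"milk", "bread", "cheese"},
    {"milk", "bread"},
    {"milk", "cheese"},
    {"fertilizer", "ball_bearings"},
]

# Count every unordered pair of products that share an order.
pair_counts = Counter()
for order in orders:
    pair_counts.update(frozenset(p) for p in combinations(sorted(order), 2))

def bought_together(product):
    """Other products ranked by how often they share an order with `product`."""
    pairs = [(pair, n) for pair, n in pair_counts.items() if product in pair]
    pairs.sort(key=lambda x: -x[1])
    return [next(iter(pair - {product})) for pair, n in pairs]

print(bought_together("milk"))
```

Nothing in the counting distinguishes milk-and-bread from fertilizer-and-ball-bearings, or a quack book from the chemicals it recommends; that neutrality is precisely the problem being described.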
So Amazon is in fact directly profiting from this, and that's where, when we talk about the culpability and responsibility of platforms, the same way we talk about the responsibility of social platforms to address extremism and radicalization, when we look at Amazon it is directly profiting from it. This is not even a matter of hosting or second-order effects; this is direct. So I want to open it up for questions and conversation. As far as questions to talk about, some of what we write about is: are there ways to think about reforming recommender systems more broadly across the web? Are there ways to think about incorporating choice architecture? The recommender system is doing exactly what it's designed to do, and it does it very, very well. The problem is, one might argue, that sometimes this is not something we want done quite so well. How do we think about down-ranking? How do we think about free amplification? How do we think about curatorial surfaces and the obligations of the platforms that profit from them? And this was a meme I found that I thought was fun, so I will close with that and open it up to questions. Thank you so much, Renée. I think we can all agree that that talk deserves five stars, and if we log in right now, sorry. So I sense your uncertainty at the end, where normally there would be a "and here's the call to action, here's what we all should do," and you're still in the phase of, it's a big mess and something ought to be done. I feel like I've written the call to action. I write for Wired sometimes, and I wrote a whole treatise on recommender systems and thinking about the ethics of recommender systems. But, you know, I don't work at a platform. It's not my call to make. I can only keep advocating. I'm hoping that that sense shifts.
One of the things we talk about, one example we use, is that this isn't just a right-wing or left-wing problem. And that's one thing I love about Amazon as an example: you can see the political aspects of it, but ultimately there are also just straight-up economic motivators. When YouTube down-ranked her channel, you all know there was a woman who showed up to YouTube with a gun, there was a shooting at YouTube, right? And the reason she gave was that her content had been down-ranked and she had been demonetized. And then, subsequently, about two weeks ago, we had the Diamond and Silk hearings. Are you guys familiar with the Diamond and Silk hearings? Yeah, so the idea was that conservative content was being censored, but the "censorship" was that Facebook had down-ranked them in the algorithm and they weren't getting the traffic they once were. So there's a really important conversation to be had about how we think about the line: what is censorship when we talk about recommender systems? Are we advocating for a do-not-recommend topic list? Who would control that, and what would it look like? At the same time, there is precedent to some extent, because we do see platforms already down-ranking: you can't find suicide-related content most of the time, and pro-anorexia content has been deprecated since the era of the pro-ana Tumblrs ten years ago. There are a number of areas where we have seen the platforms take action, but at the same time it's considered this radical call to censorship to say, hey, maybe you shouldn't be promoting, for free, via your recommendation engine, bleach enemas for children. And it's perplexing that that is an extreme statement, but that is where we are, because you wind up having a slippery-slope argument.
Well, we should maybe break it out a little bit, because it does admit of some categories, and you've had multiple categories of examples in your presentation. One of them is when a vendor of a product cheats in some way to get reviews that are in fact not at arm's length. And that, I think, could be described in a kind of content-neutral fashion. Yes. And it appears that there are sort of Amazonian shock troops at the ready; they just need to sometimes be roused out of their slumber to, as you put it at one point, demonetize people when they're caught breaking the rules. It calls to mind the Google death penalty for search engine optimization back in the day that was seen as crossing a certain line. I remember when Matt Cutts at Google occasioned the Google death penalty on BMW at one point, and that was a big "wow, he really took down a big rhino," as it were. But then there's another category, which is: no, these are true believers. These are people not being inspired by the vendor; they really like the product, and the brigading simply lets us all say we like the product. And I don't know, how many people in this room have at one time or another been asked by a friend, "I'm in some contest, if you like my work, please go to this site and help out"? And how many people were like, no, I will not help out, that would be wrong? A few. And did you communicate that view? Yeah, see, you probably wrote back and were like, "I'm with you," and then didn't do anything, or voted for the other ones. Whoa, really sticks a knife in. But those are two very different kinds of phenomena. Implicit in your remark, at least, is that for stuff in the first category, cheating, and in the second category, brigading, but for something that you are prepared to say is truly toxic, quite literally, you're looking for the platforms to intervene. Is that right?
You want them to have the shock troops and do what they need to do. So, one of the things we feel is that it's early days for these conversations. There are a lot of these moderation conferences beginning to come up as platforms think about what their obligations are, what their responsibilities are. Back in 2014, I did some of the work on countering ISIS, dealing with ISIS on Twitter, basically, as a summary. And there was resistance to taking down ISIS content at the time, because we would hear this "well, one man's terrorist is another man's freedom fighter," and how do we decide, if we draw the line there, what happens? It is a thorny issue. Sometimes we're arguing for: leave it up via search, but pull it out of recommendations. And I feel like that's sort of the... Since the recommendation is coming ex cathedra, it's Amazon making a statement of what it recommends. So we call them curatorial surfaces: where is the platform volunteering things to people? On Facebook that looks like, as I said, someone in a chemtrails group being proactively served Pizzagate content when they've never searched for Pizzagate. And the question is: should Pizzagate content be allowed to exist? One would argue yes, they have a First Amendment right, and it doesn't violate the platform's terms of service. But should Facebook or Amazon or anyone else with this recommender power, this influencer power, then in turn be serving that up to the people who are potentially most likely to be receptive to it? This is where we get at issues of: we on the outside don't know what the recommender does. We're in what I would call the discovery phase of analyzing this problem, which is to say: if somebody searches for it, how many people are finding their way to extremist groups or buying extremist products because of the power of suggestion? We don't have that information; only the platforms do.
And so by highlighting the problem and having these conversations, we begin to gauge where they see the line. We do know that serving something up via an ads interface on Facebook, where Facebook directly profits from it, Facebook takes more seriously than how it would moderate simply hosted content. So they do have this idea of their culpability in terms of monetization. And do you have an instinct on whether it's good or bad to have one platform de facto, or many? I mean, we're in a kind of one-platform world with Amazon right now. I guess it means that if Amazon makes what we, or you, might deem to be wise choices about curation, it's one and done. Or would you like there to be the equivalent of Gab for products, so that if you're into anti-vaccine content, you can go to whatever the counterpart to Amazon is and shop to your heart's content and have a recommendation engine there? I think the centralization, the kind of monopolistic tendency... I love Amazon, I use it all the time; I feel like I directly benefit from it. At the same time, the incentive to game this one platform exists because there really is only one platform, similarly with YouTube, similarly with Facebook. A propagandist needs an audience, right? And so if you have the audience at the ready, why wouldn't you take advantage of it? Although, just as there's a news void, and here a product void, there could become a platform void: a platform small enough, without the shock troops to curate, that could then be ripe for a takeover. When the Washington Post ran the economic-motivation story, a lot of people sent notes saying, "I only buy things if they're reviewed on Wirecutter," for electronics, the idea being that the same way we used to have editorial oversight into information spaces...
There's an idea that because social is so abused everywhere it appears, because it's so easy to abuse, it's free except for the cost of your cheap dress, right? The idea being that the alternative, the editorial oversight, is these trusted reviewer spaces. Well, you also hinted that instead of just trying to keep the cat-and-mouse game in the current configuration going, there might be room for a new paradigm entirely: that there could be a separation of where you buy the product from where you receive reviews or notifications about the product. And I don't know if you have a sense for what that might look like, but... Well, I think a lot of the review sites... you know, I'm a mom, I've got two kids, and when you have a baby you've gotta go buy a ton of stuff, right? And so there's an Amazon baby monitor, again manufactured in China. This is literally a product that has eyes and ears in your house, and it all of a sudden had 19,000 positive reviews, and people don't really understand how that happened, where it came from. It's not sold in retail stores, so it's not like people are saying, "oh, we really love this thing we bought at Buy Buy Baby, let's go review it online," or anything like that. It was a product that just kind of appeared one day, and then once you're in that number-one spot, it becomes kind of a self-fulfilling prophecy. But I think the way a lot of these more community-type organizations do it is they leave the review, they have the Amazon affiliate link, so they do make some money from the review, usually disclosed in a disclaimer, and then they're pointing people to the product. So there is still this link back to Amazon, but none of them serve up that one with 19,000 positive reviews. Wow. Let's open it up to questions. I don't know where the microphone is. Ah-ha, Ellen has it, so Ellen, if you wanna... oh, there's another one over there too, Ruben.
Feel free to deliver; tell us who you are. Hi, I'm Tarleton Gillespie. Renee, this is fascinating; thank you for laying it out with such clarity. I guess this is a question that follows on some of the things that Jonathan was asking: the conspiracy matrix, right? It seems to me there's a step from recommending the product that goes with the book, the unpleasant treatment product, to the movement across conspiracies. And part of the question is: how do you think the recommendation system is recognizing that cross-conspiracy link? Is it patterns of purchases, is it brigading, is it some of both? And do you think there's a way to distinguish them? If we treated them as two problems, we'd say, okay, we get that people buy the book and then they buy the liquid drops, and that pattern of purchasing is gonna be recognized. But now we've also got this category of: you bought the one conspiracy book and now you've got the seven down the line. If you imagine those as two different problems, is there anything that distinguishes them that could be a reason to treat them differently, or a way to treat them differently? Yeah, so one thing that was interesting to me when I got that collection of product recommendations was that it paralleled what I see in terms of social recommendations: group recommendations on Facebook, or follower recommendations on Twitter. You follow one extremist and you can immediately see how Twitter thinks of you if you look at who it suggests in your who-to-follow, who you're positioned against. And that's the difference between content-based filtering, where I'm just continuing on this path of "I have indicated an interest in this topic and it's going to continue to show me more": three or five of those books were related to vaccines, right?
So I searched vaccines, here are three or four more vaccine books. Versus the collaborative filtering, which is where you get at the challenges of the fact that the platforms have amassed a substantial amount of information about our behavior. So on Facebook, that's every action that I take, every website that I visit; it gives them this persona of who I am, and then there's some overlap between my persona and the personas of people who are 75% similar to me. And those people, we perhaps have the same kind of existing content consumption behavior, but that person also has this 25% of their interests that are anti-government, and that's how that happens, right? So I think one of the interesting questions is: to what extent is collaborative filtering itself the problem, because it pulls from things that we would consider private interests, browsing behavior, search-engine behavior, and it's only made possible through the mass aggregation of information about us? I think that's another topic that's begun to come up more: what are the externalities that come from platforms having this incredibly robust profile of people? And... But here your worry is that it works. It does work. It absolutely works. We're happy when we get the recommendations. It's just a batch. Totally, that's the challenge, right? The system is working exactly as it was intended. I have communicated all of this information through my 13 years of being on Amazon. Sometimes you see recommender systems that shift, like, instantaneously.
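The "75% similar to me" idea can be sketched as user-based collaborative filtering (the user profiles, interest labels, and similarity threshold below are all hypothetical, chosen only to mirror the example in the talk): score each candidate item by how many similar users engaged with it, where similarity is simply the overlap between interest sets.

```python
def jaccard(a, b):
    """Overlap between two users' interest sets, from 0 to 1."""
    return len(a & b) / len(a | b)

def recommend(me, others, min_sim=0.5):
    """Suggest items that my most-similar users engaged with but I
    haven't, weighting each candidate by the neighbor's similarity."""
    scores = {}
    for other in others:
        sim = jaccard(me, other)
        if sim >= min_sim:
            for item in other - me:  # items the neighbor has, I don't
                scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical profiles: heavy overlap with me on parenting topics
# pulls in that neighbor's remaining fringe interest as well.
me = {"parenting", "vaccine-skeptic-book", "natural-remedies"}
others = [
    {"parenting", "vaccine-skeptic-book", "natural-remedies", "chemtrails"},
    {"gardening", "woodworking"},
]
print(recommend(me, others))
```

The mechanism never evaluates the recommended item itself, only the overlap between people, which is how the 25% of a similar user's interests that are conspiratorial gets forwarded to me.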
So on Pinterest: when I did a lot of the research into the Russia content for the 2016 election, one night I found a cache of their content on Pinterest, and I happened to be logged in as me, in true name. I've been on Pinterest since the beta, these tech-early-adopter things. I found the Russian content, I engaged with it, I screenshotted it, and I sent it to a reporter: basically two hours of work. I came back the next day, and my entire Pinterest recommendation engine had gone from baby clothes and stuff to Russian-language craft projects and Dinesh D'Souza videos. And that's where, instantaneously, the entire feed just goes. So there ought at least to be a... They know what they're doing. ...a reputation-reset button, or a "declare Pinterest interest bankruptcy for the past 24 hours" and just get a do-over. Just get a do-over. But it's fascinating: the systems are so effective, they're so good at what they do. Because I imagine the people who were engaging with the Russian content were the trolls who put it up there, who probably searched Russian-language crafts in their spare time, and then the people they were targeting, which, in the case of the anti-Muslim cache I was looking at, was the extreme right, who were prone to enjoying Dinesh D'Souza videos. And so it is giving you exactly what someone who was browsing for that kind of stuff would wanna see. YouTube is notorious for this, right? And so the question becomes: is there an ethical responsibility that the platforms have? And if so, how are those content judgments made? Kathy Pham, I don't know if we should get a mic over. Sorry, I also have two questions. The first one is more personal: as you put yourself out there on all of your personal accounts, how does it feel? I'm sure you've been attacked. So what is that like?
And the other question, more on your presentation: I was actually at an event a few weeks ago where there was a room full of folks who adamantly felt that ads and personalization just all have to go away, and we have to go back to the days where you send everyone the same ads and you just hope for the best. Curious what your thoughts are on that: just getting rid of all filtering and all personalization completely. Here are some random recommendations, which makes them no longer recommendations. I know, see, and that's gonna be the last one. No, it comes up regularly. I work with Tristan Harris a bit on a couple of projects, and he and the Center for Humane Tech feel really strongly about the ads business model and what it enables: the idea, I personally don't like the term surveillance, that the platforms are always tracking, always tracking, always tracking, to amass this profile. Without that, you can't do this quite as well. I personally do prefer to see targeted ads. I actually like targeted ads, but that's because I'm always curious what it thinks, you know. I'd be curious, show of hands in here: how do you all think about that? Would you rather have well-targeted ads, or do you feel like it's more of an invasion of privacy? How many people want relevant, targeted, pick your word, advertising? How many people would like irrelevant slash untargeted advertising? How many people are like, I don't want any advertising at all, I wanna pay for my content? Okay. Well, that said, it's no doubt affected by the representation in the room, of course. But it is interesting; I think Zuck was saying that they are looking at whether people would be willing to pay.
As far as doing it yourself under true name: Facebook has gotten a lot better at shutting down fake accounts, to their credit, and so it is a little bit harder to operate sock puppets. But one of the things that we really keep coming back to them on is that we need better third-party research relationships. And they are listening, I would say. Since the tech hearings, the conversations really have advanced; we started talking to them back in December about improving the research relationship, and then they came out with their announcement two weeks ago with the SSRC. I think then it becomes: they have to scope the project and go through all of the things that you all do in academia that I don't really have as much of. Ah, shucks, I'm just a country researcher, she says. But it's a neat point that in the early days of the web, anybody could just research: set up a scraper and off you go. And you're still kind of in that spirit, in the absence of cooperation from the companies. That's what I do: violating terms of service at scale, right? We can Mirandize you about that later. Why don't we move to another question with the other mic? Is that, yeah, can we, oh, Ruben, yeah. I see a number of parallels between this and what we had earlier with spam and fake webpages, and I noticed at that time the Berkman Klein Center had some effort to identify them. Do you see the rise of third-party reviewers of recommendation engines, in the same way that we had third-party independent reviewers of some of the spam and fake webpages we had in the past? It's probably more difficult with reviewers, because there's more of a spectrum between people who are just legitimate reviewers who are simply encouraged to give a review, and totally fake reviewers who are paid or have some ulterior motive.
And in addition to what we see now with sites like Amazon, I'm kind of wondering what we're gonna see in the 2018 election, where the technology for this has matured, if we don't have some third party, at least reviewers, a Consumer Reports for this kind of thing. Yeah, no, it's a great question. So I would say two things. One, the thing that I feel like I'm always a broken record saying is: no one is in charge, right? No one is in charge. And what I mean by that is it's a systemic problem. One of the things I really liked about the example with the Facebook groups was that you have these two behemoths, and neither one wants this: Amazon doesn't want fake reviews, Facebook doesn't want to be facilitating manipulation of another platform. Facebook is not actually liable for this; there's no liability, but that also means they're not necessarily pen-testing what's happening on their own platform. So a lot of what I talk about now is the third-party researcher front, and the role of non-academic researchers. I see a lot of what we do as penetration testing for the information ecosystem. You have this ecosystem, and there is no top-level monitoring at the moment, except that which is being done by third-party researchers who go and find manipulation campaigns. We can then, after the tech hearings, communicate them back to the platforms, because they're much more receptive. Prior to the tech hearings, prior to November, there really wasn't that information pipeline happening, which is why so much of what was found was given to the Washington Post. Because it was a... And then the shaming might work. Exactly. It's the idea that Amazon knew we were gonna release the story, and all of a sudden these reviews were wiped, right?
But like I said, there are times when people who are victims, businesses who suddenly discover they've just received an influx of one-star reviews because some new competitor is trying to down-rank them and game the system, will try to go to the platform and plead their case, and you're met with this wall of automated reporting systems and things like that. So when we think about it at a systems level, we do believe that for the 2018 election as well, a lot of what's gonna happen is that third-party researchers who are looking at the system as a whole are gonna be instrumental in communicating evidence of deception back to the platforms. And that's because Facebook is not hanging out on 4chan reading the coordination, right? They're not infiltrating Discord channels or doing any of the things that people who aren't attached to the platform are doing. Of course, if they were interested, they could set bounties or something, I guess, setting their own incentives. Yeah, actually, I'm sort of surprised. You know, we joke around about how we do content moderation for free a lot of the time, and it's really true. I think shifting that, thinking about information operations as a facet of cybersecurity, is absolutely right, because it's easy to do: you can reach targets directly, and why would you do all of the work of infiltrating a network if you can achieve results through social engineering and manipulation? So it's something that at New Knowledge we actually talk about quite a bit: the idea that this is basically a penetration-testing, bug-bounty model. I have a question over here.
I'm curious what you think about curation, paid curation of content. Steam, which is a gaming platform, had a similar problem: people leaving bad reviews for games because of surge prices and things like that. So they started to hire players who were followed by a lot of gamers, because they were experienced gamers, and this created a whole new experience inside the platform and kind of resolved the problem of bad reviews. They still have some episodes of this, but usually the users trust these gamers who write the reviews; they get to play first, and they receive prizes. Got it, the influencer thing. So Amazon actually has their own program for that called Vine, where you have to be invited to be a Vine reviewer on Amazon, I'm pretty sure that's what the program's called, and then those reviews are more heavily weighted in the algorithm. I think the problem is there's the sort of start of that, but a lot of the platforms don't want to look undemocratic, or like they have their finger on the scale. And so that's where you start to ask: can you get them there, or can you get them towards a model like that, by highlighting how problematic and gameable the existing system is? I think Steam recognized they had a problem, and that was what got them there. So I think a lot of exposing manipulative campaigns is in the hopes that the platforms will recognize the problem, and then only they can really take steps to fix it. We can just make recommendations. Other questions? I just want to hold on to what you said about no one being responsible. Is that the exact problem, or what do you think? Shouldn't we hold, for instance, Amazon or Facebook, these kinds of platforms, responsible with new kinds of regulations? Do you think that's more the solution? You mentioned liability a few times.
So, one of the things I do with my Data for Democracy hat on is talk to regulators about what that looks like. No one wants to see regulation come through that says "these are the topics you will allow." And so we're trying to think more in terms of how we deal with manipulated narratives and manipulative behavior. But also, you'd mentioned spam; we've seen the platforms come together before. So there's this tension: can you achieve a self-regulatory model, much like they did when they came together and took action on spam? We're starting to see that happen with terrorism, a little bit slowly, I would say, but getting there. And what is the role of codifying this in regulation? Part of the problem is that a lot of the regulation is really bad, and I say that as someone who has been discussing "what is a bot" with legislators for a couple of weeks now. Because the idea is: what restrictions can you put on things like manipulative automated behavior, bots and so on? And the realization, when you get down into the weeds, is that you land in these really niche debates about what that's going to look like: is a bot somebody who puts out 100% automated content? 25% automated content? Two tweets run through, like, a... How do you define your terms? How do you define the size of a platform that matters, if the platforms do fragment or new ones come up? Is it a tree-falling-in-the-woods problem, where we only care if it's happening on a platform with, say, 50 million monthly users? The regulatory conversations are happening. Some of them are actually anti-monopolistic, and we see the antitrust conversation happening, the anti-automation conversation. On culpability, we see the erosion of CDA 230 with things like FOSTA and SESTA and some of the other bills that are happening.
These are areas where it's an evolving conversation, and I think nobody has the right answer yet. Are there any lessons, positive or negative, that might be learned in the regulatory context from Wall Street? Because that's a place where there's only money at work a lot of the time; there are incentives to take it from people while promising that you're acting in their interest when in fact you're not; there are affiliate programs, third parties. A lot of the things carry over, and I also find myself as somebody who, with Jack Balkin, has been excited about a fiduciary model for the platforms. Yeah, that's what I was just about to say. But I find myself with a real puzzle: if people are thrilled, after they finish the anti-vax documentary, to see six other things, it's almost like somebody coming to a financial advisor, and let's say this advisor has a fiduciary duty, and saying, "I want to invest exclusively in jackalope ranches." And you're like, I've got to say, that's not a great investment. "Look, that's where I'm at. Can you recommend something just like it?" What would the good fiduciary advisor do in that circumstance? That's where I think the fiduciary conversation is a really interesting one that is just beginning to take shape. It's the idea that the platforms have an astonishing amount of information about you, your behavior, your likes; Google knows your deepest, darkest secrets. So do they have a responsibility not to manipulate that information? I think there's so much in what you said. I look at the choice-architecture model: you can still serve people things, but maybe at some point we think more hierarchically about whether we are serving up junk food or vegetables. How do we think about the role of, if we were prioritizing an informed electorate in a functioning democracy, and yet our biggest video platforms are serving up Alex Jones constantly, what are the outcomes?
What are the downstream outcomes of the recommendations being made up here? This is where I feel like that research is only now beginning to become possible, and we're really reliant on people who are going to actually get in there and ask: is this a thing where we see it, but it doesn't matter? People are getting served this stuff and just ignoring it; it just flits by. Or is it what we're starting to see when we look at things like the number of YouTube views on extremist, conspiratorial content? At first glance it really does seem like a huge problem, but we have no idea what those people go and search for the next night. We just can't see that; we have no visibility into it. It's also funny to think that the phrase "there's no one running the show" would, in 2002, have been the happy title of a book celebrating the internet. For once, there's nobody running the show; it's Burning Man. And now it's like, well, actually, there are people running the show, we just don't know who they are, and they're not the government, and they're shoving junk at us. Well, the government has to plead sometimes for action to be taken, which is another... The terrorism stuff was a big problem. And there are just a lot of interesting things we look at. I've been doing a lot of going back and looking at Total Information Awareness and some of the DARPA programs in 2002, and some of the DARPA programs that were designed specifically to get ahead of this in 2012 to 2015 that we shut down. So the government has been aware of this; this was not something that just came out of nowhere and surprised us. I think we were surprised at how effectively political adversaries were able to execute deliberate campaigns. But that's, again, where we get at the idea of who bears that responsibility, and it's tricky when there's just such a paucity of trust in every institution.
It's not like we want to trust Amazon; people don't trust the government, people don't trust each other, people don't trust the bots. It's like, I don't know, give me a shotgun. Where's the... sorry, that was a metaphorical statement, for those listening and processing. Where have the mics found another home up there? Ruben, you want to deliver to someone nearby? Why don't we call this our last question? At the beginning of your presentation, you said that initially these incentivized reviews were honestly marked, and Amazon would label each one that way. That sounds like a system that worked well, so why did they get rid of it? Because that way you could tell, well, these are the shills, these are the honest reviews, and you could see what the score was as an average of each one, or at least you could imagine doing that. Why was that abandoned if it seemed to be working? It was because a number of third-party researchers wrote reports saying that... so, Amazon does not break that out. One thing I probably should have said: people would disclose in the review itself, "this is an incentivized review." But when you go to the page and you look, the first thing you see is 4.5 stars. That's the initial heuristic, that that is the rating, and Amazon's rating was not breaking those reviews out. You could go and sort by verified purchase, but a lot of these are verified purchases; the reviews that I left for products I got for free are still, in fact, verified purchases. And so it was a question of... This is something that I think Twitter is doing well, which is to say, when it recognizes manipulative behavior, it deprecates that and underweights it in trending, to prevent it from trending. So these are the sorts of things where, yes, as the platforms get more sophisticated, you can envision that becoming something.
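The underweighting she credits to Twitter's trending could, in principle, apply to the headline star rating too. A toy sketch of that idea (the discount factor and the notion of a "flagged" review are hypothetical illustrations, not any platform's actual policy): compute the displayed rating as a weighted average that discounts reviews flagged as incentivized or suspected-manipulative, instead of folding them in at full weight.

```python
def weighted_rating(reviews, discount=0.2):
    """Average star rating where flagged (incentivized or suspect)
    reviews count at a fraction of the weight of organic ones.
    `reviews` is a list of (stars, flagged) pairs."""
    total = weight_sum = 0.0
    for stars, flagged in reviews:
        w = discount if flagged else 1.0
        total += w * stars
        weight_sum += w
    return round(total / weight_sum, 2) if weight_sum else None

# Hypothetical product: two organic 3-star reviews swamped by
# ten incentivized 5-star reviews.
reviews = [(3, False), (3, False)] + [(5, True)] * 10
print(weighted_rating(reviews, discount=0.0))  # ignore flagged entirely
print(weighted_rating(reviews, discount=1.0))  # naive average, full weight
```

The gap between the two printed numbers is exactly the "4.5 stars heuristic" problem: the naive average rewards buying incentivized reviews, while any discount below full weight reduces the payoff.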
But if Amazon put this label on, that means Amazon actually wanted there to be such a thing as an incentivized review, or else they wouldn't have created the label in the first place. No, no, no, people would paste it into the review. Oh, okay. Yeah, users: it was a system where users would disclose, because there were third-party sites like JumpSend where you could register, and then they would start to just send you stuff, and the terms of service said you were supposed to disclose that it was a paid review. Yeah. Well, there is also a template for a world without trust, where a helpless populace turns to the superhero who can save Gotham City. And I feel like that template very much applies on the internet today. So I just wanna thank you for your service and recognize that not all heroes wear capes. Thank you. Thank you.