Hi everybody, my name is Mary Gray and I'm a permanent researcher at Microsoft Research over in Cambridge, in Kendall Square, so I came up, hopped a couple of squares. I'm also on the faculty at Indiana University in the anthropology, gender studies, and American studies departments and the Media School. So those are all separate departments and then a school; I like to collect those affiliations, you know, like box tops. I have the great pleasure today of introducing to you the PhD interns who are working with Microsoft Research this summer in the New England lab. They're all pretty amazing, stellar emerging scholars, and I want to save their time and not repeat bio information they're going to share with you. But Ifeoma, Stacey, Nathan, and Alina are here to talk with you about the work that they've done and how it connects to the work they're going to do this summer. I wanted to share with you, very briefly, some information about our internship program, because I think these emerging scholars and their scholarship really represent something we, I would say, desperately need in academic and industry settings, and that's folks who can do bridgework: people who can work between industry settings and university settings and remind us all about how and why we need to connect with each other. Their projects are also taking a tack that is not common in media and technology studies. It's a more socially critical approach, so you're going to hear mostly about qualitatively driven projects. I think our group is particularly interested in showing the value of taking an approach that isn't one of the more familiar methodologies that we see in industry and university settings that look at technology and media. It's a 12-week program, and though it's sometimes called a summer institute or summer internship program, we do have folks who will apply for this.
So I'm saying this generically, as though there might be potential applicants in the room or people who could share this information with their peers, but we are always looking for people who are interested in taking this more critical, qualitative approach, and we usually post our announcements about the internship program on our blog in October. We have labs all over the world, and each of those labs accepts up to around 100 PhD students to do this work, and it can really range in terms of what kinds of proposals people want to pitch. Probably most importantly, and maybe sadly, Microsoft Research is pretty unique in that whatever a PhD student does is available for public consumption. It's not hidden behind a walled garden; the expectation is that you're sharing it with your peers and building on a scholarly conversation. So it's a pretty unique opportunity to do work that both feeds academic conversations and might give scholars a chance to connect with product groups or other groups that could really benefit from their expertise and insight. Let me stop the promo for the Microsoft Research internship program, which I hope many people in this room will apply for, and remind you that for these lunches we do a webcast of the presentations that will be available on the Berkman website within a few days. It's pretty miraculous, like magic. And part of today's conversation is you: at one point after the presentations we will turn to questions and discussion. I'll be moderating that time, looking to bring as many of us to the table as possible, so I'll be picking folks and trying to draw out voices that may not always be heard at the table. Just be aware that if you have your hand up and you've spoken a couple of times, I might ask you to step back and let others speak.
We'll also be recording your responses, so please make sure to ask your question into the mic so that we can get it on the webcast and people can hear it. There's a back channel as well, is that true? Is there a back channel for Berkman for this discussion? Oh, hashtag Berkman, for those folks who like to hashtag things. So please feel free to share out what's said here, because it is going beyond this room. Just keep that in mind when you're asking questions or making a statement. So with that, let me pass it over to our first speaker, Ifeoma.

Good afternoon. Hello, Berkmaniacs, Berkterns, and the Cambridge and Boston community. Many thanks to the Berkman Center for agreeing to host this talk. To introduce myself, I'm a PhD candidate in the Sociology Department of Columbia University. Writ large, my research is preoccupied with data: notably, the privacy ramifications of our voracious appetite for data, the legal issues presented by the inelegant analysis and interpretation of data, and the ways that data could be employed to widen the divide of inequality. And as an FYI for all the computer scientists, I tend to think of data as a collective singular noun, so bear with me. "Almost everything we do generates data." That was a quote from Gary Wolf, who is a proponent of the quantified self movement. However, in the book Data and Goliath, Bruce Schneier writes about the government and corporate surveillance that Americans are subjected to in their daily lives. Similarly, in The Black Box Society, Professor Frank Pasquale writes about data miners and data brokers who trawl the internet for our data, and how so much of big data is fed through algorithms, secret algorithms, right, that determine how financial institutions and other corporations choose to interact with us. But what about the data we generate from logging our own lives?
The belief of the quantified self movement is that self-tracking, or self-logging as it is termed, can lead to greater self-knowledge and can provide solutions for health and work productivity. But should we be concerned that this small data could be corrupted in ways that are ultimately detrimental to our agency and personhood? How can we allow for the autonomy and personal expression that is life-logging while protecting against the loss of privacy and the new opportunities for discrimination? For the computer scientists in the room, again, I would say that this problem is hard; maybe not NP-hard, so let's call it quantified-self hard. So today I want to talk about the ways that our data may be used against us, right? As Kate Crawford recently wrote in The Atlantic, small data from a fitness tracker has been introduced into the courtroom in Canada. This time the data was introduced by the plaintiff's attorney as part of a personal injury case. But imagine that next time it could be your insurance company asking for your fitness tracker data before they will agree to insure you, or to determine how much to charge you in premiums. It could also be your employer asking for the same data to determine whether they should continue to retain you as an employee. So I see the quantified self movement as a manifestation, right, of a societal desire for certainty and accurate prediction of risk. But we should not be so seduced by the shiny apple of knowledge that we neglect to inquire as to the cost. I ask you to consider that the quantified self may have a dual meaning. Yes, it can refer to the self-knowledge that derives from tracking our most intimate and minute bio-functions and processes. But it can also refer to the quantification, and subsequent discounting, of certain classes of people based on perceived risks. In my research, I have seen the desire for the quantification of risk result in the widening of inequality and in startling encroachments on privacy.
So take, for example, the plight of the formerly incarcerated. The discounting of formerly incarcerated individuals through permanent criminal records has resulted in a paucity of second chances for those individuals in the workplace. Formerly incarcerated people find themselves permanently branded with a modern-day scarlet letter in the form of collateral legal consequences that disqualify them from certain professions and that serve to exclude them from needed governmental benefits. This quantification of people also plays out in the genetics arena. As the technology of genetic testing progresses, Americans now experience genetic coercion, which is an overwhelming compulsion to scrutinize and police the genome. Consider that since 2008, all newborns in America have been required to undergo genetic testing for disease, and that 85% of American consumers express a desire to engage in genetic testing. The quantification of individuals also plays out in the workplace. In my current work at Microsoft Research with Kate Crawford, we focus on the quantification of workers. Louis Brandeis is credited with popularizing the term scientific management. This term was then adopted by Frederick Taylor for his theory of management, also known as Taylorism. But unlike Taylorism, where the focus was on the job task itself and breaking it down into minute components that could then be mastered, the focus now is on the individual worker's body, and, in the organization or corporation, on mastering the individual worker's body for greater productivity. Or, better yet, on inducing individual workers to master their own bodies for the benefit of the corporation. Yet, in what Professor Julie Cohen has identified as a surveillance-innovation complex, organizations seek to remove this process from the regulatory reach of the state by presenting the increased surveillance of individual workers' bodies as enabling innovation and economic growth.
Our research at MSR focuses specifically on workplace wellness programs, which are a $6 billion annual industry dedicated to promoting healthy behavior among workers. These programs encourage healthful diet and exercise, and they track people's weight, people's smoking, and people's spending habits. Our research seeks to understand the standards applied to the data collected from these programs, and how this data, and the interpretation of this data, could impact the worker from whom it is generated. As computer scientists, as social scientists, as lawyers, we must keep asking: how can we make technology work for us, rather than against us? How can we harness the computing power of both big and small data while being vigilant that such power does not create divides and inequality or open new avenues for discrimination? Thank you very much for your kind attention. I look forward to your questions.

Hello everybody. All right, well, talking with Nathan, I took up what he called an opportunity to introduce myself to the Berkman community of interns and scholars that are here. I'm excited to be here in Boston. I'm a PhD candidate from the University of Illinois at Chicago, in the communication department, but I have this really unique opportunity there, where I'm in a fellowship program that brings together computer scientists, computer engineers, the health school, and communication scholars to study issues of privacy. So the opportunity to come to MSR, which has a very similar environment of people coming together from different fields to look at different issues in computing and society, particularly in the Social Media Collective, was a great opportunity for me to tie together some of my previous research. I come from a privacy research perspective, so I'm usually looking at how people enact privacy. So when you go on Facebook and you think, I'm going to post this scene of me celebrating the Blackhawks victory last night at the bar,
I go, well, do I want my mom to see this? Do I want my dad to see this? Do I want my boss to see this? So we look at the behaviors that people engage in online, and then the considerations that go into those behaviors. But what I found doing that research was that we're kind of ignoring the platform these behaviors take place on; we're treating it as very neutral. That's led me down this path where I started looking at, as was mentioned, the data that is collected and the algorithms that in turn use that data to provide information to us and personalize what we see on the web. And I've taken the opportunity here at Microsoft to work with some people who specialize in qualitative discourse analysis, which is something that I'm here to learn more about how to do, and do well. So this is an opportunity for me to move away from some of the more quantified stuff that I do and move into this discourse analysis arena. The project that I'm going to do while I'm here involves looking at the Facebook News Feed algorithm. Facebook put together a public page that they call Facebook Tips, where you can go and they'll provide you information on how to find who's around you with your friends and use icons, and they also post a lot of information about how the News Feed works. So I'm going to be taking a look at this page, and particularly several videos that they posted. These are actually all the videos that they posted, and all of them are tagged under the headline "News Feed: Created by You." There are several different examples: we have men, women, different ages. What was interesting about these is that Facebook pushed them out through sponsored posts, so they did generate a lot of views. In the larger world of Facebook it might not be a lot, but in terms of millions of video views, they got a lot of traction.
So I've collected those videos, and in addition I've collected all the comments that were posted on those videos. I'll talk a little bit more about that, but I want to give you an example of one of the videos. So we'll see if this works. [Video: "Through my News Feed, I actually found an amazing gym and I made a commitment to change my life for good. I was able to surround myself with so many inspiring, ridiculous, crazy people with so much knowledge and expertise. And I was like, I just want to know what you know. So I made my News Feed about wellness, nutrition, and just how to live your best life. A community that I can look forward to seeing every day."] So Tim is completely and solely responsible, apparently, for creating the News Feed that he gets to see. And what I'm getting at is something that several of the researchers at Microsoft focus on, and it's that discourse matters. The way that companies present themselves to users is very important, as is the way that users then understand how these things work in their lives and make sense of them. So the focus of my research is to look at the discourse going on between Facebook and the users, both in the videos, in how they present these algorithms to be working, and also, interestingly, in the comments: as you can see on, I guess it would be your right side, Facebook responded to most of the comments that were posted in the first week of each video going up. These videos were posted in late 2014. I was hoping I would be able to read the screen from here, but here's an example of a comment, where the user says: "This leads me to believe I have control over my own feed. I don't. Facebook is constantly making things disappear and rearranging the timeline. I don't want to see something from six days ago when I can't see one thing I did want to see from an hour ago."
And Facebook replies: "Hi, Darrell, in order to see stories in your News Feed in the order they were posted, please follow these steps in our Help Center. We hope this helps." That one's kind of sterile; that response feels very corporate to me. But the responses change depending on the types of questions that are asked. So another one is: why do I keep getting old posts? And Facebook replies: "Sometimes a story that you've already seen will reappear at the top of your News Feed because many of your friends have liked or commented on it. This helps you see popular stories that your friends are interacting with most and join conversations around posts." What's interesting, and I'm just starting this, so I've collected the data and I'm at the very beginning of this project, but looking at some of the posts that we've seen, Facebook uses very awkward language to avoid saying "we do this." It's always "you do this" or "your friends do that." So I think it's going to be a very interesting project, but as I mentioned, I'm at the very, very beginning of it. The analysis that I'll be doing through these comments looks first at the user-to-Facebook discourse, asking questions such as: how do the users discuss the News Feed? I think there will be an opportunity there. A lot of the algorithm research keeps asking for more understanding of how people make sense of algorithms in society and what they know about them, and I think this is a unique space in which to actually see people openly discussing this News Feed algorithm, which is technically many algorithms. So what is their understanding? I'm also looking at Facebook's positioning toward the user in those responses that they give back, and I'm wondering: where do they place responsibility? And then lastly, user to user.
So toward the end, when Facebook stops answering, after about a week of a video being posted, they answered most of the comments and then they stop, and you see a lot of the users trying to step in and explain to each other how the algorithm is working, without Facebook's mediation. So that's the gist of my project, and I look forward to hearing your feedback on it, because I'm just starting. Thank you very much.

Hi, I'm Nathan. I'm a PhD student at the MIT Media Lab in the Center for Civic Media, as well as a Berkman Fellow. And the question I've been asking, or at least that I'm retroactively forcing onto my prior work, is this question of how we can hold crowds accountable to the public. I'm going to share three examples, and I'm really curious to hear your thoughts on this idea and how we might take it further. The first one is peer production. How might we, as the public, hold Wikipedia accountable for the power that it plays in society, as often the first port of call for information? One way to do it is to create a petition addressed to Jimmy Wales, which is exactly what a group of homeopathy advocates did after many years of trying to convince Wikipedia's editors to be more permissive of homeopathy-related information. So they said, Jimmy Wales, we really need you to take executive action on Wikipedia, and they promised not to donate to Wikipedia otherwise. This is a model that we're used to seeing with corporations: you create a pressure campaign, you petition a powerful entity, and you try to embarrass the company into protecting its brand. But does that really work with Wikipedia, which is a global movement of people who are contributing, often pseudonymously? Another example was this case in 2013 or 2014, where a novelist noticed that articles about women novelists were being pushed out of the novelist category and into the women novelist category on Wikipedia. And she wrote a New York Times op-ed, and she wrote to her colleagues.
And at the end of the article, she says, well, we talked about it a lot, and we've noticed that some Wikipedians are changing the articles. Again, it's this idea that if you raise attention about something, then maybe, even if Wikipedia isn't quite structured like a corporation, if you talk about it, something might change. And this is an area where I've worked on design interventions, with projects to broaden the diversity of Wikipedia's content about women. You'll notice that there's about the same ratio of biographies about women in Wikipedia as there are obituaries about women in the New York Times. And often when people create infographics about issues like this, they'll do it as a way to prompt outrage, to channel attention toward these powerful individuals. With the Passing On project, we thought: Wikipedia is actually something that, if you disagree with it, you can join and participate in to change it. So we created a crowdsourcing site where people could learn more about the amazing people who appeared in New York Times obituaries and find out if they appear in Wikipedia. And through the process of readers learning more about those people, the system would automatically create bundles of content that could go into Wikipedia, channeling that frustration into a way of acting that is compatible with how Wikipedia works. It's a project I'm continuing to explore with the Wikimedia Foundation. Another question has to do with how we hold social networks accountable for their influence in society. And when I say that, I don't just mean companies. Here's a slide from a recent Facebook study that tried to quantify different sources of bias in the news that people access. This is a chart of the percentage of articles appearing in this particular sample's News Feeds that were from bipartisan or politically neutral sources. And they identified a number of causes or reasons why someone might have a biased News Feed.
One might be the media: the media is just publishing a certain ratio of content across available publications. Another might be your friends: you choose who you are friends with, they share information, and you see the information shared by your friends. Another might be the News Feed algorithm, which we've heard a little bit about, and Facebook was keen to note that of all of these, it probably had the least impact on what people were reading. And then, of course, there are your personal decisions. In my research, I'm less interested in the question of the News Feed and much more interested in all of those other social factors, and in what it means to hold our networks accountable, which is much trickier to figure out than what it means to hold, say, Facebook accountable for its algorithms. One example of a design intervention is a project I created in partnership with Sarah Szalavitz called FollowBias, which we're currently testing with journalists, and which shows people the gender ratio of who they pay attention to on Twitter. We can think of journalists as a crowd that does actually have a big influence on society, and it is a matter of public interest who they choose to pay attention to and who they use as sources. We're trying to see if giving them a chance to be more aware of the gender ratio of who they follow, and to make suggestions to each other, actually has an impact on how they carry out their work. And we might also think about distributed decision-making. Coming back to Wikipedia, there was a controversy earlier this year where Wikipedia was allegedly taking moderation actions, topic-banning some editors from very controversial Gamergate-related topics. Local scholar Mark Bernstein noticed this and wrote a blog post about it in classic accountability-journalism style, saying: this is a problem, we need to do something about it, Wikipedia isn't doing enough about it.
And journalists started writing about this issue, very much following that accountability-journalism frame. They made various claims that they then had to retract, because the journalists writing about Wikipedia didn't actually understand the various groups and processes and the structure of how Wikipedia worked, and they had to rewrite substantial parts of the article. And it raises this question: it turns out that there's this entity in the English-language Wikipedia called ArbCom that handles various controversies and arbitration issues. They have only a limited scope, and suddenly people found themselves mired in the internal politics of Wikipedia and unsure how even to talk to Wikipedia, or Wikipedians, or the Wikimedia Foundation, or Jimmy Wales to get this issue addressed. Since then, though, we've started to see articles in the press that go beyond saying Wikipedia has a problem to talking about the specific politics of Wikipedia, as in a Newsweek article that actually reported on the Wikipedia arbitration committee, describing it as the highest court in Wikipedia land and starting to report on it in ways similar to how they might report on other forms of governance in society. And so for my work here at Microsoft and my ongoing research, I'm starting to think more about the people who play these profoundly important roles in our online lives, who make decisions about what to delete, who to suspend, and what the norms are in our communities. One example of that is a report I did with many of the people at Berkman, auditing how Twitter handles harassment and looking at the process of reporting, reviewing, and responding to harassment on Twitter. It came about because the advocacy organization Women, Action & the Media spent three weeks receiving harassment reports from members of the public and forwarding them on to Twitter.
And we were able to look at the data that they collected in that process, data that's usually hidden inside of companies, to get greater visibility into this kind of process. So we looked at who was reporting harassment and the kinds of harassment that were being reported, as well as the experience of handling this work, which is often very taxing and very time-intensive. In WAM's case it was volunteer-driven, and on sites like Wikipedia and Reddit it's also done unpaid by volunteers and can carry very serious mental health risks. So this summer I'm asking two questions. Firstly, I'm looking at the work of Reddit moderators. There are over 550,000 subreddits across Reddit and over 100,000 moderators who have this official role on the site, and I'll be doing qualitative work to understand how they see their work, what they see as their job and not their job, and how they're responding to more recent controversies around Reddit's corporate role in moderating behavior and speech on the site. And I'll also be continuing to ask further about this Gamergate controversy, as I ask the question: what does it mean to hold crowds accountable for the power they have in our public lives? Thanks.

Okay, hi everyone, my name is Alina and I'm from Indiana University, but before introducing you guys to me, I want to introduce you to the work that I do in a virtual game. It's an MMO, a massively multiplayer online game, called Eve Online. And this is what it's like to be an Eve Online player. [Video: "Where are you de-cloaking? I'll have to be the second there."] That's a snippet for you, right there, of this massively multiplayer online game. I'm from Indiana University, where I'm finishing up my PhD, and I'm also interning this summer at Microsoft Research. So a little bit about my dissertation. I've been trained as a media anthropologist, and my focus is on consumer culture. So I study gaming communities, people who play games together. They hang out online.
They talk to each other in person as well as online about their hobbies. And what interests me about these gaming communities is how they make sense of, first of all, their personal place and role in this massive world. Eve Online is a huge world. What you saw in that clip was actual player dialogue, players talking on the comms; it's been edited by the company, but it's all actual people. And how do they carve out their own little corner of space? My research shows that it's made sensible through spectacular branded rituals, where lights, sounds, human bodies, and alcohol as well come together in just the right alignment for players to see, to hear, to touch, as well as to feel, that they are truly part of something larger than themselves. So this happens at brand fests, but it also happens within the game itself. How do players make sense of the hours and effort spent on this game and its community? Gamers, especially in this game, are really hardcore; they spend hours and hours every week building up their own little empires within the game. And that's made sensible through reward and reputation systems within the game itself, but also within the wider gaming community. Some of these reputation and reward systems are designed by companies, but others emerge from the community itself. Finally, I ask: how do players make sense of the time they've invested in this hobby, and of their identities as gamers and as Eve Online gamers? Well, it's made sensible through discussions about work-life balance, and through the expectations that gamers have, and not only gamers but all kinds of hobbyists, all kinds of consumers, who see their work of consumption as somehow productive. How do they create value? And by this I mean economic value, but not just that: cultural value and social value from one's hobbies.
And at the base of all these rituals, these systems, these expectations, are what I call compensatory drives, which players use to get things to add up, to get things to even out, and also to get what's coming to them. When you think about compensation, you may think about recompense or retribution or acquittal. And it's not the same thing as, but it's also connected to, other ideas about the counterbalancing of effects and the neutralizing of forces. At the end of the day, in the games that we play, we seek balance. At the end of the day, in our voluntary social activities, what we seek is a kind of moral equilibrium. And these compensatory forces are at work to connect players to this massive and yet somehow intangible world, to give gaming communities a sense of fairness and justice, and, finally, to give gaming practices a kind of aesthetic, economic, and social legitimacy. So my internship project for this summer is about Eve Online. And as you can see, it's a space game. You fly around in space, but you also work a lot with all these little pop-up windows, almost like spreadsheets, right? And it's a hyper-capitalist world where the universe is really overrun by corporations. There are no governments; warfare, murder, theft, all of these are sanctioned as long as you can get away with it. And within this really exciting world, there is a group of players who are consultants to the game company, yet they have been democratically elected by the players. The players have cast their votes to have these consultants represent their interests to the developers within the game. And it's very interesting to me how this savage fictional world is somehow managed through surprisingly civilized processes. So on one hand, we have player representation, and on the other hand, we have corporate consultation. And it makes total sense, of course, but players have demands that are, in a way, very micro-level.
They want this mechanism to be tweaked in this way because it benefits their player group. Developers, on the other hand, have very macro-level concerns: they are concerned with the overall system and how it works for all the players, and, of course, with profits. On the other hand, there are informal systems of accountability. If these representatives or consultants don't do their job, they don't get re-elected. They also get complained at all the time, via email, on the forums, and at conventions. However, when they do this consultancy work, they have no real power to make decisions within the company. All they can do is recommend, and the developers are not obligated to listen. So what I'm doing this summer is looking at meeting minutes between the developers and these player consultants over the five years that this council has existed. I'm also going to be looking at the minutes of town hall meetings between the council members and the player base. Finally, I'm also going to be looking at election campaign materials and responses to them. And what I'm interested in is how these actors, council members, players, developers, articulate their roles, and how they articulate their roles in relation to each other. Where does my job as a player end and where does your job as a consultant start? I'm also tracing how players learn to become council members over their term of office. Consultancy, or the identity of a consultant, doesn't just happen, and through these meeting minutes you can actually trace how they become, in a way, socialized or trained into the language of game development: how they become trained in the language of thinking broadly for the company instead of thinking about player demands. And finally, I'm trying to map these feedback channels, not just the official feedback channels that all these online game communities have, such as forums, comms, and emails, but also the informal feedback mechanisms.
And most importantly, their directionality. Remember that feedback doesn't just run up the chain from players to consultants to developers. It also runs down. The consultants have this job, whether it's implicit or explicit, to in a way advocate for the company, to spread corporate goodwill to the masses. All right, so finally, a couple of points for discussion today. We can think about the voting mechanisms within this player consultancy group as one kind of democratic mechanism across proprietary media platforms and consumer practices. It's not just this player council, but thinking back even to reality TV shows where there was audience voting. Thinking forward, on a smaller scale, to different kinds of player tribunals or player guilds that have some kind of voting mechanism involved. We can think about these in a couple of ways. We can think about them as market populism. You know, the same old story. This is neoliberalism at work, right? For several decades, consumption rituals have been replacing real, authentic democratic engagement. On the other hand, we can think about it as consumer co-creation. This is a different version of customer relations. Deliberation in this case is commoditized into a kind of very pleasurable and also very valuable branded experience, right? Giving feedback is not just to make the system better. It in fact makes your own experience as a consumer more exciting, more engaging. Finally, I think it's really interesting to think about the design of user experiences. Those of you who are designers here, you know that when you think about engagement, you don't wanna give the user what they want. They have no idea what they want. That again is a kind of narrative that developers use to in a way filter or discount popular will. Finally, the overall question that I'm asking is how are these kinds of democratic mechanisms changing the means and meanings of consumption, right?
And I'd like to hear from you guys in the next couple minutes. So could I ask our speakers to come up here? I think that might be easier if you're all visible in front. Maybe bring your chairs around if that's okay. And I will hold all of my questions and comments because I get the pleasure of working with these people all summer long. So with that, let me open it up to the group. Questions or comments? Yes. Nathan, this is for you. I'm curious how crowds develop, or how crowds are different from simply users or consumers or patients. You've selected Wikipedia, where it's kind of obvious there's a crowd, but I don't know how to define that kind of crowd and distinguish it from other groups. That seems like an especially important question, and I imagine you've thought a lot about it in your dispute resolution work as well, where there are different parties that we think about as needing accountability. We might think about individuals or institutions. I think I'm using crowd as a placeholder term for now to refer to a variety of things where we don't quite yet know where to apply the lever to change things. And it might be like a cumulative effect of the social choices and the friendships that we have in a network, but it might also be something more identifiable as a group, in the case of something like Wikipedia's arbitration committee. I'd love to continue to figure out together what those different structures and forms are, because I think that's important to understanding them as well as to coming up with strategies for holding those different structures and groups accountable. Oh no, we're not going to be able to catch that on the recording, sorry. And if I could ask folks to say their name and affiliation, please. Hi, I'm Rebecca Wexler. I'm a JD candidate at Yale and a visiting scholar here in history of science.
And my question is for the last presentation: what is your model of the authentic democratic engagement that neoliberalism has replaced over the past several decades? I thought I heard from you that voting might be that model. Is that correct, and could you expand on it? Well, my version, or my utopia, if you will, would be a kind of participatory democracy, right? But my thinking is not so much towards official political systems. My thinking is more in terms of how can the media allow citizens to participate. And there's a lovely book that I've been reading by Nico Carpentier, right, which has a very nice framework for thinking about participation in the media specifically, and how different levels of engagement and different mechanisms can be used not just to engage users or to engage audiences, but to really give them decision-making power in the systems, but also in the content. And it's far off, but I think if we work inch by inch, we'll get there someday. Hi, I'm Tarleton Gillespie. I'm a principal researcher at Microsoft and I get them all summer, but I'm gonna ask a question anyway. So I was thinking, especially of Nathan, Alina, and Stacey's projects, but Ifeoma, you can weigh in too. You're clearly looking at a place where you're trying to think about how groups of people think about their role in a place and negotiate the relationship with some platform operator that's got a role to play. But then I think in a number of them, there's an additional thing, which is that the world is meant to be something. And it's most obvious in Eve Online, where there's a kind of narrative; Wikipedia clearly has an idea of itself, and Facebook is proposing ideas of itself, maybe much more quietly, in the videos.
So I was just curious to hear, because that's kind of a thread that goes through a lot of the projects, like that part about not just what should the group do, classic sort of governance problems, and what's the relationship between that set of people and some institution, but the additional problem of, like, the narrative or claim or nature of the institution. Is that narrative or representation masking the real problems of governance? Is it distorting? Is it facilitating? Is it shaping? What's the right verb for that sentence, for those claims and that picture of what it's supposed to be and its involvement in this problem? I haven't spoken yet, so I guess I'll jump in. You know, it's interesting because my project stacked against Nathan's project shows kind of two different layers of analysis of basically the same problem, and I think more evident in your work is where to put the responsibility. We had a brief chat about this, so I'm informed about how in the media there's this use of the word algorithm as though it's a single thing, and that we can just identify this one thing and maybe tweak it and then everything will be fixed. And that does assume that we know how to fix what it is that we're trying to fix, and I think that that is not something that I claim to know, and I would not go there. But I think that for my work, transparency is a word that's used as something that's lacking a lot in the news feeds that we get, in the Google search results that we get, in the algorithms that kind of mediate the information that we get. And so what I'm kind of trying to look at is the way that Facebook presents it to those users, to see what reality they are trying to shape for them.
It's really interesting if you watched: Facebook recently had its annual conference where it spoke to marketers about the news feed, and the news feed is all we, we, we, and this is what we do, as opposed to when they're talking to the users and it's you, you, you, and this is how you tweak it. So part of my larger project is just looking at those different realities based on the audiences that they're speaking to. Sure. One of the inspirations that I keep finding myself returning to in this work is Arlie Hochschild's work on airline attendants. There was a case where there was a clear brand driving how these attendants would handle tough situations, and Hochschild looked at the training process that these people went through in order to not just learn what to do in tough situations but how to be and how to emote and how to feel in those contexts. And in that kind of situation, where you're working for the company and you might lose your job, there's like a really strong tie, at least in the hopes of the company, and the expectation that those things will be very close. As I look at things like the onboarding process for new moderators, there are job boards on Reddit where people apply and post opportunities. I'm really curious because Reddit has some basic rules, but each subreddit also has a process where it defines what kind of norms and rules are important to it. So I think I'll be paying special attention to how moderators see themselves in relation to the kind of expectations and their vision of their subreddit, as well as their expectation and vision of what a Reddit community would be like. Well, to the question of what is that word that we use to kind of capture what's going on between the corporate layer and the users? I think there are a couple of things going on.
First of all, there's the classic sort of, you know, customer feedback: let's gather all these ideas, filter them, see what makes sense, right? And then incorporate it into our systems. On the other hand, there is this idea of placation. We want to persuade our customers or users that, you know, we're all on the same page. This is a win-win situation, you know? There's no us versus them, so that's part of it. And then finally there's this cultivation, right? It's not actually really about the bottom line. There are more things going on. Saying that is kind of a flattening out of what's going on. They're trying to cultivate an engagement that creates a sort of dynamism. They don't want people to just be happy, that's not it. They want people to fight, they want people to, you know, discuss and even rebel against the game itself, because that discussion, that's energy, right? That creates buzz, okay? So these three things I think are going on. And I hope there's more going on that I can find out about, you know, throughout the course of my research. Ifeoma, did you want to respond to that? Well, I guess the question wasn't directly touching on wellness programs, but I'll speak to wellness programs in terms of governance. So I guess some of our preoccupation with wellness programs is the idea of whether they are truly voluntary, because they are part of the workplace. Such that, you know, people are not necessarily voting, right? To say whether the workplace can have a wellness program or not. And people are not voting to say what shape the wellness program will take. And also a lot of the wellness program is really about shifting the responsibility, right? For losing weight, for living a healthy lifestyle, onto the individual worker. There's not really a real discussion of whether the work infrastructure itself can be shaped to achieve the same thing.
So basically, a lot of what we're seeing is really the corporation abdicating, right, its responsibility for a healthier worker. And essentially forcing that responsibility onto the individual worker. And we're worried about what that means, right? If the individual worker is in a wellness program, is there then a sense that that worker must become healthy even given the structural constraints to becoming healthier? Even with those present and not really addressed. And what does that mean for workplace discrimination based on data, based on health status, right? Based on obesity, based on smoking. So in the course of my research, I actually found that in about five states in America, it's perfectly okay for your employer to fire you if you're a smoker outside the workplace. So you're not smoking in the workplace, and perhaps it doesn't impact your job, but if they have this data on you, which they can acquire through wellness programs, that is actionable data for them to fire you, and you would have no legal redress. So these things are concerning. And I think it's important that we continue to discuss this, because there perhaps is an issue of coercion attached to wellness programs when you think that a lot of the ways they get people to join are through incentives. And the EEOC recently passed several rules saying that they can offer up to 30% of the premium as an incentive, and up to 50% of the premium as an incentive for smoking cessation programs. So obviously this becomes not a negligible amount of money when you're thinking, you know, $2,000, $3,000. Can I jump the queue? Because a question just popped into my head. It seems like in every one of your cases, there is maybe an implicit appeal to a social need or desire that works outside of market demands.
So each of these corporate settings has a particular vested interest in being able to keep players playing in a particular way, keep the news feeds functioning as they are. And I think for both your cases, for Nathan and Ifeoma, there are these clear cases where market demands say the corporation has a vested interest in doing something different than what the players or the participants might want. Yet there seems to be this sense that we would want otherwise. Why is it that we're still seeking a certain kind of social prioritization or social good from a corporation? What is it that sends us in the direction of the corporation as kind of hopeful citizens of sorts? Rebecca, your question is really interesting in that regard. So for your different projects, have you gotten a sense of why it is that the participants see the CEO or the company as the site of recourse for this desire for a social, really extra-market, need? Yeah, I think that's one of the most interesting things I've seen so far in my very early analysis of the comments: there is this inherent expectation by the users that Facebook somehow be truthful, or a sense that somehow what they're doing isn't truthful when they're not showing them all of their friends in their feed. And this has been shown by research that has been done on the news feed by Christian Sandvig and others. We're very early in this work, and that's why, to Tarleton's question, it's like, we don't know what we have yet. And so what we do know is that the work that's been done is showing that users are somewhat confused. Some of them feel like they're being lied to when they don't know that not all of their friends are showing up, and the title of that paper is great, 'cause it's like, I just thought that person didn't like me, something to that effect.
So this shows for me that these users in this study rely on Facebook to maintain personal connections, and they have assumptions about what's happening, and when those assumptions aren't met, they feel a sense of anger at what's occurring. Well, in the case of Eve Online, users or players look to the company for a kind of social justice, if you will, because the company thinks that it makes good business sense. There's a very fine line between wanting to please your user, wanting to take their votes at face value, and then redesigning the system by, in a way, adopting mass player will. So on one hand, they have to give them what they want, otherwise they would rage quit or they would just exit from the system. But on the other hand, they have to sort of come up with something new. So it's a fine balance that they have to strike in that way. I can go on. So I think maybe the cases of Wikipedia and Reddit are counterexamples to Eve or Facebook, in that my impression from what I've read so far is that in the case of Wikipedia, people who are participants and active contributors do have this sense that it is a public good; that by donating to it, and by it not being a corporation, and by having all of these structures of governance, like elected Wikipedians on the foundation's board and other things, it is to some degree accountable to its participants. Where it gets tricky is when people who aren't contributors to Wikipedia find themselves affected by the power of this very unusual thing, and they've sometimes turned to who they imagine they might need to turn to, following classic scripts of holding power accountable. I think in the case of Reddit, there's a lot of tension, tension that's become very visible in the last week, when Reddit for the first time started banning particular subreddits for the kinds of conversations and behavior that they saw on the site.
And for a long time, people have seen Reddit as a place that has a more or less hands-off policy over what is appearing on Reddit. There are competing business models on the site, one which is advertising-driven, one which is based on users actually paying the company for various perks; literally, when you pay for Reddit Gold, like if you meet the day's challenge, there's a new server added to the farm. So it's very much a kind of membership-drive-style thing, and it gives you the ability to exchange gifts with other redditors. And I think that this debate over the role of moderators and what power they have in contrast with the company is at the heart of this question of what it means for Reddit to be a corporation, or whether it's imagined as something else. I'll pass this to Stacey so she can help. I wanted to add one other thing. I've been reading some of the comments, and the ones I've seen have been, oh, why are you doing this to me? And I want to point out it's easy to fall into that trap of only looking at the negative. But there are a lot of comments as well that say, this is great. I have to be friends with this person because of this relationship, but I don't want to see anything that they post. And so these companies are in this unique position of, how do we best make our customers happy, give them the information they want? And so looking at how they do that is not unique. This obviously goes back to the early days of journalism and throughout all that history of gatekeeping and so on. But what's different about it is we don't know necessarily how they're doing this process. And so that's where you get users' confusion and sometimes anger, but oftentimes you also get users who are very satisfied with the product as well. So in fairness, I wanted to throw that out there. Well, to get back to your original question about how we're essentially relinquishing, right, the solution for a social good to corporations.
I think it's really a classic example of trying to fix a complex problem with a simple solution and thinking that simple solution is some sort of panacea, right? And the complex problem that we see wellness programs attempting to address really is that, as Americans, our lifestyle is unhealthy. We sit a lot; you know, sitting is the new smoking, as they say. We eat a lot, and not of the right things, right? We don't exercise a lot because we don't have time. We're working eight- to 12-hour days. So this is a complex lifestyle problem. It's a complex infrastructure problem. The way our cities are designed, the way our suburbs are designed. So now we're trying to fix it with something as simple as a wellness program. There are going to be issues, right? It's not necessarily going to have the intended results. And what worries me most, I guess, as a lawyer, is the unintended results, which are loss of privacy and also new avenues for discrimination in the workplace. Because here's the thing. The anti-discrimination laws we currently have, they don't really address obesity discrimination. They don't really address smoking discrimination. So if we have wellness programs that are really centering those behaviors as problematic or as pathological, what does that mean for the worker? What does that mean for the rights of the worker? I mean, time check. Are we okay on time? Hi, I'm Nick Seaver. I'm from UC Irvine. These are all super interesting projects, so thank you for sharing. And one thread that seems to sort of link a lot of them is an interest in what is happening to different sorts of collective kinds. So we have crowds and publics and councils and workforces and user bases and all of these different kinds of groups. And I'm wondering how much you guys see your role as being to identify the particularities of those kinds of things, and I think this has sort of been latent in some of the questions so far. What does it mean to be a user base?
What does it mean to be a user base now? What does it mean to be a council? What does it mean to be a council now in Eve? That sort of thing, because the problem I've had in my own work is not doing that and then landing in a place where I'm sort of implicitly endorsing some kind of collective kind that I'm just pretending is the natural way it ought to be, as opposed to some sort of historically particular thing. I've been having that problem as well. This is a question of, is it the same or is it different? What is the value of comparing it to things that have come in the past? For example, especially with reality television, I just thought about this yesterday. Oh wow, there's this phenomenon that's actually so similar in many ways, and yet something feels different. And I think thinking about that feeling of difference, at least at this stage of the research process, is really useful, because what it gives us a chance to think about is not just, in my case at least, how is democracy changing or how is society changing, but how are the meanings of consumption changing? And it really is changing. Think about being a video game consumer 20 years ago and think about being one today, in a game like Eve Online or even in a game like Habbo Hotel. You can do so many more things. You're expected to engage. You're expected to make friends. You're expected to build an empire, have a lot of money. Otherwise, what are you going to do? You're not really playing the game. People are going to ask you, how long have you been playing? What kind of player are you? All these notions of identity are sort of foisted on us in a way. I'm a researcher, and people ask me, so you play Eve, what are you doing in Eve? As if that's my second career. No, it's not. I have a career. So I think that's a question that I ask. And with the methodologies that I have, which are interpretive, that is the best kind of question that I can have, and then I build it out to the system level.
How does this jigsaw piece, from this perspective, connect to the rest of it? Well, I think the historical context is really important. And for me, I think of that in terms of defining what a worker is today, right? And that's also defining the group I'm looking at. And I think what is a defining thing for the group that we're looking at is the technology available. So workplace surveillance is not new, right? From when there was division of labor, there also arose the need to surveil the workers, to make sure that they're doing what you want them to do, and in a timely manner, that they're not malingering. But what is new is the advances in technology, right? That have now enabled us to surveil and track the worker in ways that were previously unimaginable, and that is collapsing, or keeps collapsing, the line between work and non-work. So recently, there was a case where a woman sued because she was fired from her job for deleting an app on her phone that was tracking her. And when she signed up for the job, she was told about this app. So here's the thing. It's supposedly transparent. What she was not told was that the app could never be turned off, such that even when she had finished work, it was still on. And her boss could track her every movement. Not only could he track her every movement, he felt it was a power play to tell her exactly how much of her movements he was tracking, including how fast she was driving, where she went over the weekend, et cetera. So obviously, this is a huge intrusion on privacy, and a previously really unforeseen one, you know, before we had such technologies. So now, really, we have to redefine what it means to be a worker in our society, given all these technological advancements. So your question about, you know, what is it when we talk about, like, crowds or groups, I think is one that really plagues the kind of research that I'm doing.
Kevin Driscoll and I, who share an office, have been having this debate around BBSs. Like, we tend to talk about technologies and the users of a particular technology as if they are some kind of, like, group that we can identify with common interests and goals. And people, you know, talk about Twitter in that way. People talk about Wikipedia in that way. People talk about Reddit in that way, for sure. And, you know, within those platforms, and in the conversations and relationships people have outside of them, their identities and communities are constituted in a huge variety of ways. I think what brings them together at the point of moderation is that there are some of these common expectations that the moderators are having to negotiate. There are common tools that they're using. And they're having to figure out how to, like, work at that intersection between what, you know, a company with fewer than 100 employees is defining as the space of their work and the vast multiplicity of communities that are relating to it. And, you know, I think we have yet to find the language to, like, talk about those things really clearly. And I'm very, very open to, like, hearing people's suggestions. I think there's also an aspect of work in this project as well. Like, Postigo's research on AOL community leaders in the 90s found that the more AOL did to kind of control and track and standardize how its community volunteers did their moderation, the more those people thought of themselves as a, like, collective, the more they felt like they were workers who were uncompensated. So there's a sense in which these volunteers, like, are able to do this volunteer work, or, like, perhaps feel motivated to do this volunteer work, precisely because they don't think of themselves as, like, Reddit employees.
And those are questions that I'll be untangling as I look at their onboarding process, their gripes, the challenges that they face in this moderation work. For me, a really influential work was Carolyn Marvin's When Old Technologies Were New. And it's an amazing look at technologies through history and how people reacted to them. And so for me, that's what I'm interested in. I feel like we're at this lucky time where, as you mentioned, nothing is new. The more you do research, the more you learn nothing is new. This is just a new iteration of something. And I feel it's a fortunate time to be alive as a researcher, looking at how people are starting to make sense of these things that are mediating so many of their communications. My larger dissertation work moves beyond Facebook and looks at algorithms in a bigger picture. So whenever you're getting information presented to you, Twitter, Google, Facebook, and so on, how do we make sense of these? For me, the user base question so far, I haven't necessarily answered, but I'm more interested in society, looking at it from a, you know, broader picture, I suppose, as opposed to a specific user base. But more or less, this technology and this mechanism, this mediation that's happening, how do we understand it? Okay, perfect. We're good on time. So with that, thank you, everyone. Thank you.