Good evening, ladies and gentlemen. I'm glad to welcome you all here in this wonderful room, in the name of the Alexander von Humboldt Institute for Internet and Society and the Federal Agency for Civic Education, the Bundeszentrale für politische Bildung. Despite the summer-like weather outside, thank you for coming. This is the fourth lecture in our series, Making Sense of the Digital Society. With this series, we want to reflect upon the ongoing transformations driven by the accelerating processes linked to digitalization. While the general process of digitalization started some decades ago, depending on whether you take information technology in general or the internet in particular as a starting point, we can see a noticeable speeding up in the establishment of new technologies and trends. While it took the telephone or the computer, for example, many years or even decades to hit the one-million-user mark, some apps or new devices today reach this number of users within hours. And more and more of these devices are linked to the ability to collect and analyze personal data about our behavior. And it is not always clear or transparent who has access to this data, as the case of Cambridge Analytica and Facebook recently demonstrated. In this context, the Federal Agency for Civic Education aims at helping people understand these ongoing processes and place them in the broader picture. Together with the Humboldt Institute, we want to explore the ongoing transformation with a focus on the European perspective. Only by understanding the implications of using new technologies, apps, devices and platforms, and of feeding them with information about ourselves that was for a long time strictly held private or simply not tracked, can we start to understand the possible benefits, but also the dangers, connected to them, and then make decisions as informed and empowered citizens.
That is why I am happy that Marion Fourcade will speak tonight about social order. In her work, she shows how various ordering mechanisms shape everyday life and how everybody is subject to many ratings and scorings based on numerous tracking tools and mechanisms. And while these ratings are often justified with positive or even caring arguments, the question remains: do they in the end not simply divide people into two categories, those that are in and those that are out? A question famously articulated by Karl Marx, whose 200th anniversary was just celebrated this weekend; Marion Fourcade took part in a conference discussing these changes on the occasion of that anniversary. And I hope that maybe today we can identify possible steps to take in order to make sure that more people are counted in, rather than counted out. With these questions, I will now hand over to our moderator of the evening, Tobi Müller, who will introduce Marion Fourcade properly. Thank you. Thank you, Sasha. Thank you for this wonderful turnout on an evening like this. Now is the time for the last touches to your hair and to clean your glasses, because this is kind of a surveillance context: we are being filmed. But we are not being streamed, due to technical problems. That sometimes happens, even with a series like this. I apologize to the people who can't see me now because we're not being streamed. But there is the chance that the whole talk and part of the Q&A will later be shown on the respective websites of the Federal Agency for Civic Education and the Humboldt Institute for Internet and Society, and I think on YouTube as well. That's where our previous talks can still be viewed, actually. And I guess the hashtag #DigitalSociety has become obsolete tonight too. So, sorry again for this.
After the talk of our distinguished guest tonight, there's going to be about a 30-minute conversation between us two. And then it's your turn, not via Twitter, but with live questions from the audience. There are, I think, a couple of microphones that will be passed around, and you'll have the chance to leave your comments and pose your questions for another half hour or so. So, our guest is a comparative sociologist. She studied and lectured at the Sorbonne in Paris. She then moved to Princeton, with a stop at New York University, and got back to Princeton again. And in 2003, she went to the University of California, Berkeley, where she is now a full professor of sociology. She is also an associate fellow of the Max Planck Sciences Po Center on Coping with Instability in Market Societies, a Franco-German center in the social sciences. Her first book, published by Princeton University Press, won numerous prizes. It is called Economists and Societies: Discipline and Profession in the United States, Britain, and France, 1890s to 1990s. So, we see she has a keen eye for cultural or national differences, and that really showed when we had a little preparation talk on Skype, and she kept asking me: What do you think? Do we have to translate this? What's the situation in Germany? What is it in Switzerland? I come from Switzerland originally, so we talked a little bit about that when it came to social scoring, credit scoring, tracking, all that, of course. So, she really has a keen eye for those differences, and it really shows in her work. Her upcoming book she co-wrote with Kieran Healy, with whom she has authored many articles. It's called The Ordinal Society. This is, I guess, in a nutshell, what she will talk about today: social stratification and morality in digital economies. Please welcome Marion Fourcade. So, thank you very much for this invitation. It's a real pleasure. I love Berlin. And, oh, there it is.
So, it's a real pleasure to be here and also to be part of this very interesting and distinguished series. We typically relate to digital services the way we relate to public goods, like education or national defense. We take them for granted and we don't often think about how, or how much, we pay for them. You can think of a lot of things: search services or location services that come just with an internet or cellular connection. And then we can think of a whole range of services that used to be housed in our computers or in separate programs and increasingly come under the label of cloud computing; they are stored in the cloud. You can think of social networking, communication, storage, all kinds of collaboration tools, the synchronization of your computer, and increasingly even office tools. So the point for us is: we used to pay for these services. We used to pay for the software, we used to pay occasionally for the services, and not anymore. Today these are off-the-shelf programs that come under this cloud-computing label. We use Google Docs to store documents without giving it a second thought, and in fact many schools encourage their students to store all of their work online. And of course Facebook, Dropbox, Twitter, Instagram, LinkedIn and so on also work this way. So this is the cloud. The question for us is: what does it mean to have everything going through this technological infrastructure? Now of course, companies had an incentive early on to offer free services. You can think of Google Earth or Google Maps. They were expensive for Google to develop, but they allowed it to draw people to spend as much time within its online ecosystem as possible. Facebook's business success, by contrast, depends on an internet structure that was more gated and segregated into proprietary realms.
But so there are differences in the way in which different kinds of companies find their way into this economy. By and large, what we have is a sort of Faustian bargain, right? Of course, states never provided education or national defense for free; we pay taxes. And likewise, cloud computing or search or all kinds of internet services are not free. This talk is really about the implications of that situation for the way we should think today about processes of stratification and inequality. When we talk about the new data economy, we worry a lot about privacy. We worry about questions of freedom. But we don't often ask: well, what kind of society are we building? What kind of social order? What is inequality going to look like in that society? This is what I'm going to try to present today, and of course a lot of this is speculative, because that society hasn't yet fully come into being. So let me begin with a quote. After a question that my research assistant and I asked, mentioning people who express dismay about companies like Facebook collecting personal data and building out personal profiles, Jake Ford, CEO and founder of EagleNet, exclaaimed: hey, dealbag, you're getting it for free. They have to make money somehow. And I should mention this: all of these are pseudonyms. All the quotes that I'm going to give tonight are from real people, but these are not their real names, and they're not the real company names either. So you have to admire the creativity that we put into producing this. But then he proceeded to explain his business model. He said: the big picture is that our goal at EagleNet is to have tens of millions of business professionals using the EagleNet platform for free, or some form of free, providing the data.
And then we go and sell that data and those insights to enterprise customers. That's our business model. Here's another one. This is Max Buck, co-founder and CEO of another company, Elliptical. And he says: Elliptical is basically a free, crowdsourced — I can't tell you what it does — database. So the way that it works is that it's a completely free product. Again, this language of freedom. And users sign up in order to get this free thing from our database. Essentially, it's a give-to-get model: in order to get this from our database, you need to anonymously share your own data with us. And you have lots of examples like this, obviously. And the examples concern both the relationship between companies and users and the relationships between companies themselves. So for instance, a good example that another interviewee mentioned to us would be a company offering free or discounted services in order to run A/B testing on a client's ad, or in exchange for information about the company's final decisions. So for instance, a company wants, say, Google or some other advertising behemoth to run an advertisement for jobs to a large audience. And then, in exchange for the company providing information on who was hired in the end, you will discount the price. So again, this sort of give-to-get exchange. The client, say, may pay for 20,000 people, but the ad is run on 100,000 people. And then Google, or whichever other company, manages to get this additional information that allows it to refine its algorithm for the next time this kind of request comes around. Okay, so there's a lot of that. The modern digital economy is built upon an implicit Faustian bargain. On the one hand, companies provide services for free, tempting us, like Faust, with universal knowledge at the tip of our keyboard.
But in order to access this information, we have to give away our soul — to continue the Faustian metaphor — leaving behind little bits of data that are so many indications of who we really are or what we really do. How interesting that in Latin, data may be translated as things given, or gifts. In fact, the first English-language definition, in 1587, defines datum as a thing given, a gift delivered or sent. So how does that happen? Of course, on the technical side, there's a whole internet infrastructure that enables the circulation of certain kinds of data. You can think of phones and their geolocation services, JavaScript tracking mechanisms in browsers and devices, cookies deposited on people's computers that facilitate the identification of users over long periods of time and help websites build longitudinal files. And then identification is increasingly precise and individualized: more and more sites require a login, and IP addresses also anchor this individualization. And then, increasingly, there are other techniques like digital fingerprinting, which is a way of tracking without depositing anything on the computer. Instead, it looks at the specific configuration of the computer or, increasingly, at the pattern of typing or of using the mouse. So there is this whole infrastructure that allows for things to be taken, if you will. But then, of course, the technical infrastructure would not deliver data if there were not people to populate it. So the power of digital companies also relies on a choice infrastructure that helps draw people in. Now I will discuss two fundamental aspects of this choice architecture.
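To make the fingerprinting idea concrete, here is a minimal sketch. It is not any real tracker's code — the attribute names are invented for illustration — but it shows the mechanism: a browser's observable configuration is hashed into a stable identifier, so the same machine can be recognized across visits without any cookie ever being stored on it.

```python
import hashlib

def browser_fingerprint(attributes: dict) -> str:
    """Hash a browser's observable configuration into a stable identifier.

    Nothing is deposited on the machine: as long as the configuration
    stays the same, the same hash comes back on every visit.
    """
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Two visits from the same machine yield the same identifier...
config = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "Europe/Berlin",
    "fonts": "Arial,Helvetica,Noto Sans",
}
visit_1 = browser_fingerprint(config)
visit_2 = browser_fingerprint(config)

# ...while a slightly different configuration yields a different one.
other = browser_fingerprint({**config, "screen": "1366x768"})

assert visit_1 == visit_2 and visit_1 != other
```

In practice trackers combine many more signals (canvas rendering, installed plugins, audio stack), which is what makes the combination nearly unique per device.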
The first aspect is what we could call the two-sided market problem, which refers to an economic theory that looks at the way in which this particular infrastructure might allow the creation of a network effect — how to make sure that your platform, your standard, becomes the standard platform. In two-sided markets, you have to cater to two types of audiences. On the one hand, you have your users, and on the other hand, you have, say, your developers. The example that the economists Jean Tirole and Jean-Charles Rochet give is Visa. Visa has users, so it wants consumers to adopt a Visa card, but it also has business owners, store owners, and they also have to accept Visa for the standard to take hold. Okay? And a lot of platforms work that way. The way that you draw in users is typically by offering free services, but the way you draw in developers is by offering users, in a sense. And so you add to the attractiveness of your own platform by attracting developers who will produce lots of additional functionalities. The app model was pioneered by Apple, and it was associated with hardware, with the iPhone. But you can now think of all of these APIs that allow for this connection to take place. So: the more third-party apps and application programming interfaces, the more users; and the more users, the more developers. That's how you create this network effect. And of course, if you think of, say, Facebook, which is now cracking down on these third-party apps because of the privacy problems — in the early days, this is how Facebook and a lot of other companies managed to expand rapidly.
It was by offering lots of developers opportunities to create apps that would add, as one of our interviewees said, functionality to the Facebook platform. Okay? So developers have to gain something from this, and that something is usually data. So that's the first problem. If you look at the developer platform rules, this is typically what you would see. For users, it's in the third paragraph here; it says: we enforce the same set of privacy rules for your information as when you're on Facebook, but now you can choose to use your information in the applications developed by other people through our platform. And that's typically the way that this sort of network effect was produced. The other element in this choice architecture, of course, is getting people to consent. Right? So on the one hand, you have to attract users and developers; on the other hand, you have to make sure that the users will actually release the data. Otherwise, the developers will not be flocking to your platform. And so, how do you do this? Well, essentially — and here again, there are economic theories to explain this; this is the theory of Richard Thaler and Cass Sunstein in their book Nudge — you will have a choice architecture that buries the information about consent. So typically the way that websites have done this is through default settings. And the importance of default settings is very well documented in the economics literature. It's documented as a tool for policy. So for instance, if you want people to donate their organs, what do you do? You have a default setting in which they donate their organs unless they opt out. Okay?
If you want people to take health insurance, you will have a default setting in which they are enrolled automatically unless they opt out. Okay? Well, this is exactly the same thing. You have a default setting in which you are enrolled unless you exit. So these default settings were a critical element in the choice architecture that allowed this process to take place. The outcome, of course, is that individuals now release personal data into this largely invisible material infrastructure. And then the usable data automatically feeds back to the original site owner and also, via services, to third parties. So the things are taken, right? That was the technical architecture, plus a choice architecture that allows this sort of give-to-get model to take place. Another aspect that is really important is the way that things are being sorted. The question with information is often not so much whether you have information or not, but how good that information is. Many of you here are quite young, but for those of you who remember the early days of Facebook: you just liked, and that was it. Now you have this whole range of emotions that you can express, right? So the information is increasingly refined. In fact, these services increasingly nudge us to tag, to label, to share, to sort — in other words, to spontaneously refine the data ourselves. And of course, this is on our end, but a lot of it is also happening on the other end, on the side of developers, where the interface between the site and the apps, the servers and the apps, has also been refined. So that's one example: the expansion of the range of possible emotions on Facebook.
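The power of the opt-out default described above can be sketched in a few lines. The setting names and numbers are invented for illustration, not drawn from any real platform; the point is simply that when sharing is on unless a user actively switches it off, the small minority who ever change a default barely dents the data released.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # The default does the work: sharing is ON unless the user
    # actively opts out -- the opposite of an opt-in design.
    share_with_third_parties: bool = True
    personalized_ads: bool = True

def enrolled(all_users: list, users_who_opt_out: set) -> list:
    """Return the users whose data is released under opt-out defaults."""
    return [u for u in all_users if u not in users_who_opt_out]

users = [f"user{i}" for i in range(100)]
# Empirically, only a small minority ever touches a default setting.
opted_out = {"user3", "user42"}
sharing = enrolled(users, opted_out)

assert PrivacySettings().share_with_third_parties is True
assert len(sharing) == 98  # 98% participation without a single active "yes"
```

Under an opt-in design the logic is reversed — the default list would be empty, and only active consenters would appear — which is why the choice of default matters so much.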
So the way that we produce more and more refined content is precisely by making users more engaged. This is, of course, the way that the industry talks about it: more engagement is essentially more refining work. Here's another example of this, from a ride-share service, a ride-hailing service. It used to be that you would just rate your driver on a five-star scale. But increasingly, drivers have all of these separate aspects that are being rated, right? These are designed, in part, to obtain data that is more refined, but they are also designed to encourage competition among drivers. So you both help Uber refine its knowledge of the drivers, and you create something in which a driver can at least excel in one of those things — you can get a badge, you know, for one of those things. Now, all of this work, in turn, serves to power the development of advertising services, predictive analytics, and, increasingly, artificial intelligence systems. So in addition to things taken, things given, things sorted, we have things automated, right? Tag your own photos, and Apple makes giant steps in facial or image recognition. Click on the tenth rather than the first link on your Google search return, and next time it will be optimized to better fit your preferences. Correct the translation — as I'm showing here, this is the translation of my talk — and you are doing your part in helping Google develop automated translation through machine learning. One of the consequences, I guess, being to put millions of translators out of business. And you can go even further in this. So things become automated that way. And then finally, you have things that are multiplied.
That is, once the data has been harvested, what guarantee do you have that it's going to be used only for the purpose that it was originally collected for? Here's an example from an interview that we did, with an interviewee who had built an individual-level database by getting permission to access companies' email metadata. And he says: one of our investors says, actually the most valuable data is people's purchase history. And purchase history was not the purpose for which the database was constituted. If I am blatantly honest, our terms of service are broad enough that we could do that if we wanted to — that is, we could mine inboxes for purchase history — but eventually someone is going to find out, and then it would be uncomfortable. So you can see that, on the one hand, he clearly knows that for his company not to run into problems, this is something he shouldn't do. But there are other pressures — pressure, in this case, from the investor — to do something else, to repurpose the data. So the data can be multiplied and transformed. Another way to multiply — to increase the amount of data — is quite simply through addictive design. There's a lot of discussion right now around this. There have been lots of books, and Tristan Harris started an initiative to try to make companies more aware of the addiction problem. This is from a recent interview: I've seen a guy in a keynote at a tech conference recently, and I was pretty shocked. Because he was pitching a book about how to make your apps more addictive to other developers who are building these applications that my kids are going to use.
And I hate for my kids to be addicted to cell phone applications. Right. And then finally, of course, there's continuous tracking: the fact that, increasingly, the data is not simply produced on the occasions when you are logged in, but is in fact continuously being produced by all kinds of devices that follow you. So, why does this matter? As I said earlier, there are legitimate worries about freedom, privacy, democracy, as in the Cambridge Analytica case. But my own view is that it matters because the Faustian bargain is enabling the rise of a new kind of society, a new kind of social order. We've had this term, the new oil. Neelie Kroes, then vice-president of the European Commission, famously referred to data as the new oil in 2012. People are now saying the new electricity. What she meant, and what people mean by this, is not simply that data is a profitable sector, but that data generation, refinement and use is actually the new fuel powering the economy. And we can legitimately ask ourselves: is this oil really new? Are we really in a new stage of economic development in which data launches us, if you will, into a completely different kind of regime? And to some extent, it isn't. It might be worth pausing at this point and asking ourselves: how did we get there? How did we begin to see personal data as the new oil? After all, if we look back, surveillance in the economy — that is, as Oscar Gandy puts it, the capture of information for the purpose of producing intelligence or strategically useful knowledge — is nothing new. Businesses have always had what Gandy calls a legitimate business interest in collecting data about their consumers, their employees or their competitors.
So what did this legitimate interest in data look like before Google? There are many examples. If we look back in history, we can think of the rise of the credit, insurance and marketing industries as having been enabled by the emergence of personal profiling in the name of precisely that kind of legitimate interest. What I will do is develop the example of credit, because if we look at the history of credit, we see encapsulated in it some very fundamental processes that can help us understand what our future may be. Credit, if you will, might stand in for something much bigger. In the United States, the collection of information about persons began in the 1840s. The Mercantile Agency, which became Dun & Bradstreet in 1933, sought to create a national, centralized system of credit-checking for merchants. And interestingly, they relied upon the give-to-get model: the information was obtained from third parties. Most of the data collection on these merchants was done for free by local attorneys, in exchange for referrals to prosecute debt-collection cases. So already then, you had a model in which I give you some information and you give me additional information that will be useful to me. And you had these two parties — on the one hand the credit registry, and on the other the local attorneys — working together. In the 1870s, we saw the beginning of consumer credit reporting, which originated among retailers and was fairly decentralized. But by the 1920s, a more national credit reporting infrastructure was in place.
And that whole history has been described by Josh Lauer in his book, Creditworthy. So the concept of financial identity was born: someone who wanted credit had to subject themselves to intrusive probing of their character and behavior. And of course, you can think of insurance as working pretty much the same way. In fact, Josh Lauer argues that, quote, applying for credit was the original sin of modern consumer surveillance. The difference with today, of course, is that back then it was done by individuals, often relying on rumor and gossip rather than on the mechanical operation of computers. Now, the 1970s and the 1980s in the United States saw the rapid concentration of the credit reporting business, and by the 1980s three companies had emerged as the most important ones. With growing computerization, the business also became a lot more efficient. The companies also diversified towards other types of data collection. So today, in effect, a company like Experian, for instance, is holding much more than credit data; it is actually a data broker. They started purchasing retail and marketing data. From then on, the way that individual data was used changed. These profiles moved from being individual, qualitative profiles, with lots of verbal information, to being increasingly numerical. So what is a profile? A profile — this is from Gandy — is essentially a list of categories that have been determined to be relevant to some administrative decision that must be made by an organization with regard to an individual, a group, or a class. Individual categories are variables, or dimensions, on which an entity may be evaluated. And then, of course, a subset of categories may be combined into an index score — a sort of grade, if you want.
The fundamental purpose of a profile — and this is an important aspect — is the assignment of an individual to a class or a category that represents a decision. So this is a process of identification with a consequence. If you think about credit, we have moved from a situation in the 1960s where essentially you were either inside the credit market or outside it. If you were inside the credit market, the conditions didn't vary too much across consumers. And if you were outside, you didn't have access to credit, or you could maybe obtain it, but from loan sharks and more shady kinds of suppliers. By the 1990s, and even more so in the 2000s, the market expands massively, and what we have is an increasing differentiation of people on the basis of this numerical information. So the purpose of a profile changes: it's not so much about deciding whether the person should be given credit or not. It becomes: on which terms should that person obtain credit? The purpose is now to assign, on the basis of that information, the individual to a prediction. What are the chances that such a person will repay her loan? And you can think about this in every domain, right? What are the chances that such a person has diabetes, that they will have an accident, and so on and so forth. So this is the rise of predictive analytics, okay? The development of the credit reporting infrastructure — and the same is true of other kinds of infrastructure — allowed for a centralization of surveillance. Armed with this instrument, you could expand your business to all kinds of populations on the strength of the records held by the credit reporting agencies.
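Gandy's definition — a list of categories, combined into an index, used to assign an individual to a class that represents a decision — can be sketched in a few lines. The category names, weights and thresholds below are all invented for illustration; the structure (categories → index score → decision class) is the point.

```python
def index_score(profile: dict, weights: dict) -> float:
    """Combine a profile's category values (each 0.0-1.0) into one weighted index."""
    return sum(weights[c] * profile[c] for c in weights)

def assign(score: float) -> str:
    """Map the index onto a class that represents a decision --
    here, the terms on which credit would be offered."""
    if score >= 0.7:
        return "prime terms"
    if score >= 0.4:
        return "subprime terms"
    return "declined"

# Hypothetical categories and weights: the "profile" in Gandy's sense.
weights = {"payment_history": 0.5, "income_stability": 0.3, "residence_stability": 0.2}
applicant = {"payment_history": 0.9, "income_stability": 0.6, "residence_stability": 0.5}

score = index_score(applicant, weights)   # 0.45 + 0.18 + 0.10 = 0.73
decision = assign(score)                  # "prime terms"
assert decision == "prime terms"
```

Note how the consequential step is `assign`: identification alone is harmless; it is the mapping of the index onto different terms of service that makes the profile a decision.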
And the really important change, in the mid-1980s, is the rise of PreScore, a statistical scoring tool using credit bureau data, developed by Fair, Isaac and Company, which allowed banks to increasingly seek out consumers that were pre-screened, okay? So scoring facilitates this assignment of an individual to a class or a category for the purpose of decision-making. And the development of credit scoring was a decisive step, allowing lending decisions to become largely automated; this is one of the things that drove the massive expansion of credit and debt in the American economy. The FICO score — the credit score as we know it today, a score roughly between 300 and 900, depending on the kind of score — was unveiled in 1989, and it is today entirely based on financial behavior data. Or so FICO claims: this breakdown essentially comes from FICO itself; it is the information that the company gives out, but of course the actual algorithm remains secret. By and large, though, this is what it looks like. The largest portion of your score depends on your payment history. The length of your credit history is quite important too: how long have you been using credit? If you've been using credit cards for twenty years, you have built a history, as opposed to just the last two years. Then, how much you owe, and how much of your available credit you have used. It turns out that you may be allowed to borrow, say, $10,000 on your credit card — you have a credit limit of $10,000 — but you shouldn't actually be using $10,000. You should be using, say, $1,000. So this is the percentage of the credit that you have used relative to the credit limit. And then, of course, the different types of credit are also important. So this is what it looks like.
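The component weighting just described can be illustrated with a toy calculation. The weights below are the ones FICO has publicly stated (35% payment history, 30% amounts owed, 15% length of history, 10% new credit, 10% credit mix); everything else — the 0-to-1 sub-scores, the linear mapping onto a 300-850 scale — is invented for illustration, since the real formula is proprietary.

```python
# Publicly stated FICO component weights (the actual algorithm is secret):
WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed": 0.30,       # includes credit utilization
    "length_of_history": 0.15,
    "new_credit": 0.10,
    "credit_mix": 0.10,
}

def toy_credit_score(components: dict) -> int:
    """Map component sub-scores (each 0.0-1.0) onto a 300-850 scale.

    A toy illustration of weighted components, NOT the real FICO formula.
    """
    weighted = sum(WEIGHTS[k] * components[k] for k in WEIGHTS)
    return round(300 + weighted * 550)

# A borrower with perfect sub-scores hits the ceiling of the scale.
flawless = toy_credit_score({k: 1.0 for k in WEIGHTS})
assert flawless == 850

# High utilization drags down "amounts owed": someone using $9,000 of a
# $10,000 limit scores worse than someone using $1,000, all else equal.
maxed_out = toy_credit_score({**{k: 1.0 for k in WEIGHTS}, "amounts_owed": 0.1})
assert maxed_out < flawless
```

The example makes the utilization point from the talk concrete: the credit limit itself doesn't change, only the fraction of it in use, and that alone moves the score.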
And you are probably quite familiar with this, because in Germany you have the Schufa, right? I don't know much about the Schufa, but it looks essentially similar. So I showed you the different components of the score, but once you have a credit score, you have an assignment of an individual to a decision, and the decision, again, is increasingly about different terms of service. By the 2000s, the evaluation of credit risk was fully individualized and really based on individual behavior. Now, this is important. You may think: all of this about credit is not exciting, it has nothing to do with the digital society. It has everything to do with the digital society, because it is the basis of the kind of social order that is being produced in the digital society. It is, if you will, the original model, the ur-scene, as Joshua puts it. The other reason it is important is that in the United States today, credit reports and credit scores have become wired into many different industries. Not simply credit: if you want to apply for a job, you may have to show a credit report; if you want to rent an apartment, you may have to show your credit score. It has become wired into insurance products. So it has the effects that sociologists in the United States describe as a kind of performativity: the score comes to have effects outside of the financial system proper, on people's life chances on the labor market, the insurance market, the housing market, and so on. Now, we have talked about predictive analytics and the assignment of an individual to a risk category. But predictive analytics not only enables better risk predictions; it can also be mobilized to harness value. In other words, you can score conditionally on the risk level.
So the question is now different: given someone's score, given their assignment to a predicted category, how much value can I possibly extract? These are two quite different questions, right? One: you calculate the risk that an individual represents, or the likelihood that a certain outcome will be realized. Two: on the basis of your knowledge of this outcome, how much money can you make? And how much value you can possibly extract depends on a lot of other considerations and behavioral data. So for instance, you may be scoring on the likelihood that someone may be tempted by a crappy loan offer or a crappy insurance plan. Now it becomes about trying to evaluate a lot of other things about who they are. This is where financial behavior meets all kinds of other types of behavior. So the point, and this is very important, is to estimate the value or the profits to be made from particular individuals in known, that is, in predicted, situations. From an economics point of view, for those of you who know a little bit of economics, this is trying to get at willingness to pay. It means that prices and services will fluctuate quite significantly across space and over time, and that they will also be increasingly differentiated, not simply across groups but across individuals. So now you can design systems that will offer different terms of service to different individuals, not simply on the basis of the risk that they represent, but also on the basis of these other considerations. And of course, in that situation, the better and more abundant the data, the better firms can predict the desired outcome. For instance, it turns out that your Facebook likes can predict a whole lot of things about you. And as you know, this was used quite profitably in the last presidential election.
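The two-step logic just described, first predict the risk, then, conditional on that prediction, extract as much value as the person will bear, can be sketched as follows. The profit model and all numbers are hypothetical; the point is only to show how a willingness-to-pay estimate changes what gets offered.

```python
# Sketch of value extraction conditional on risk. Step one predicts risk;
# step two targets willingness to pay. All numbers are hypothetical.

def expected_profit(p_repay: float, rate: float, principal: float,
                    loss_given_default: float = 0.6) -> float:
    """Expected profit on a loan: interest earned if repaid,
    minus the expected loss if the borrower defaults."""
    gain = p_repay * principal * rate
    loss = (1 - p_repay) * principal * loss_given_default
    return gain - loss

def best_offer(p_repay: float, willingness_to_pay: float,
               principal: float = 1000.0) -> float:
    """Among the rates this person is predicted to accept, pick the one
    that maximizes expected profit. Assumes willingness_to_pay >= 0.05
    so at least one rate is acceptable."""
    candidate_rates = [0.05, 0.10, 0.20, 0.30]
    acceptable = [r for r in candidate_rates if r <= willingness_to_pay]
    return max(acceptable, key=lambda r: expected_profit(p_repay, r, principal))

# Two people with identical risk get different offers, because the model
# predicts they will tolerate different prices.
print(best_offer(0.9, willingness_to_pay=0.25))  # → 0.2
print(best_offer(0.9, willingness_to_pay=0.35))  # → 0.3
```

The design choice worth noticing: risk enters only as an input to the profit calculation, while the ceiling on the price is set by the behavioral prediction about the person, exactly the fusion of financial and non-financial data the lecture describes.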
And it can predict a lot of things that may be relevant to the kinds of things I mentioned earlier, like what kind of product you may be tempted with. So what we have is a situation in which people are being constantly tested upon, so as to further refine predictability. The other thing that is happening, of course, is a fusion of information across the system. All data is good data, even that which appears irrelevant, like the Facebook likes I just mentioned. What kind of movie you like looks irrelevant; well, it turns out it says a lot about the kind of person you are, the kind of social milieu you come from, the kind of family situation you have, and so on. Now, what makes these pieces worth collecting and integrating is a powerful cultural abstraction: the notion that somewhere deep inside lies a knowable actor, an individual who might be understood, followed, and manipulated efficiently from cradle to grave. So how do we think about this process sociologically, which is really what interests me here? What do we have here? What kind of society are we preparing ourselves for? Michel Foucault said in Discipline and Punish that visibility is a trap. In The History of Sexuality, he argued that people are controlled by the fact that they are constantly compelled to speak, to put it all out there. Of course, that was in the context of the Catholic or the psychoanalytic confession, but you can see the implication here: saying a lot about yourself puts you at the mercy of the expert, the therapist, the priest, whatever.
Similarly, digital systems incite us to be visible, to engage: the credit card, the loyalty card, the hyperlink, the social rules that incite us to like posts and comments, something I learned from my teenager and most of you probably learned by yourselves. So resisting is actually hard. And the thing is, it may not even be desirable. Maybe invisibility is a trap too: those who are invisible to the system are of little use to it, or worse. There is some really interesting work on Facebook feeds by Taina Bucher, who shows that those who don't engage on Facebook rapidly sink to the bottom of the list, if you will. They become invisible to others. If you don't chat with your friends, if you don't like posts and comments, the algorithm is not going to put you in a position to be seen. So you become invisible. The sociologist Janet Vertesi at Princeton engaged in a clever effort to keep her pregnancy invisible from online vendors. She decided to spend only cash, and she told her friends not to mention her pregnancy to anybody; she wanted to see what would happen. And she wanted to buy a stroller, a stroller that was not available, I forget, in a store near her. So she tried to buy it on Amazon, but she decided to use gift cards in order not to tie the purchase to her credit card. She went to the local convenience store and tried to buy $500 worth of Amazon gift cards. But it turns out that when somebody tries to do that, they look suspect. And she and her husband were reported to the police.
Because it turns out that in today's non-cash, credit-card-based economy, if you try to do that, you look like a member of what I call, with Kieran Healy in my work, the lumpenscoretariat: the bottom of the scoring scale, the people who only deal in cash. And people who only deal in cash are typically coded, in our society, as criminals. So invisibility is a trap too. Those who deviate from behavioral expectations raise flags, signaling possibly illegal behavior. And then finally, the technological infrastructure of the digital economy embeds all kinds of truth-telling dispositifs, leading us to reveal who we really are. Now, the benign side of this identity-revealing process is of course just verification, authenticating who you are. In many ways this is the basis for a lot of very convenient transactions: you press on your app to call your taxi, and immediately they know who you are and you know who they are; it makes everything very smooth. But the more troubling side is the reconstruction by the digital society of what Erving Goffman called the backstage. Data-rich algorithms are taken to produce a truth about us, and precisely that part of the truth that we do not control. For Goffman, you had a backstage where you relaxed, where you were really who you are, and then a front stage that you presented to the world. But now algorithms have access to the backstage, right? Facebook can predict when you are going to break up with your partner, for instance, before you even do it. So after Foucault, and of course I'm French, you are going to get some Bourdieu. The reason is that I find Bourdieu really useful here too.
We can think of the totality of one's interactions with the digital economy as a form of capital in Bourdieu's sense: that is, as he calls it, accumulated labor, which has a potentiality to produce profits. We call this übercapital. And the reason is that it is a form of capital that results from everything; we had to find a term that captures that status. It overlaps with the traditional forms identified by Bourdieu, for instance with your cultural capital, but at the same time it departs from them. It has a clear materiality, and it could take, in principle, a numerical form. It is accumulated over the long history of a person's recorded actions, built up from traces left everywhere: on social media, at credit bureaus, on shopping websites and in fidelity programs, at courthouses and pharmacies, and of course in the contents of your emails and chats. It incorporates your social ties, which are now measurable through the value of your social network, and some measure of your moral worth. So you can think about this as a potentiality. We call it übercapital, and we have a related term for the computer scientists among you, which is eigencapital. Eigencapital: it is a little bit as if each system in your life were doing something like what the credit scoring system does. On the financial side, you have a credit score. But your social network data can be subject to the same kind of scoring processes, and maybe your health data can too, measuring your fitness and so on and so forth. So you can imagine this as a vector in a vector space, among the thousands of dimensions of data that all of these companies are keeping about you. If you think of a God's view on you, that would be übercapital.
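The vector-space image can be made concrete with a small sketch: übercapital as one vector across all systems, eigencapital as the domain-specific projections each system derives from the dimensions it can see. The dimensions and values below are invented for illustration.

```python
# Sketch of übercapital as a position in a high-dimensional data space,
# and eigencapital as per-system scores (projections) derived from it.
# All dimensions and values are hypothetical.

ubercapital = {           # the "God's view": one vector across all systems
    "credit": 0.82,       # financial behavior (a credit score, rescaled)
    "social": 0.64,       # the measurable value of your social network
    "health": 0.71,       # fitness-tracker style data
    "consumption": 0.55,  # shopping sites, loyalty programs
}

def eigencapital(vector: dict, domains: list) -> float:
    """Each system projects the full vector onto the dimensions it can see,
    producing its own domain-specific score (here: a simple average)."""
    visible = [vector[d] for d in domains]
    return sum(visible) / len(visible)

# A lender sees only the financial dimension; an insurer might combine
# health and consumption data into its own score.
print(eigencapital(ubercapital, ["credit"]))                 # → 0.82
print(eigencapital(ubercapital, ["health", "consumption"]))  # → 0.63
```

No single actor holds the full vector; each system computes its own projection, which is why übercapital exists as a potentiality rather than as one institutionalized number.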
If you think of the individual projections, that is the eigencapital. So, like cultural capital, übercapital may exist under three principal forms. The first is that it is embodied: it expresses durable dispositions about yourself, your fitness, your sociability, your social influence, your character. That is the Foucauldian truth. At the same time, it is also objectified: it is realized in forms of access to goods and services. When you have the capital, a certain kind of measured worth if you will, you get access to certain goods and services: better social consideration, better prices, you board early on the plane, you don't have to wait when you call customer service. And it is finally institutionalized, in the sense that it may exist as a quantity that is widely used, in the kinds of uses I mentioned earlier, where it travels from credit to housing to insurance and so on. And of course it can circulate, including into very far away corners: dating websites today are increasingly using your credit score. So the way I am thinking about übercapital is, if you will, as a potentiality. The technology is driving us in a particular direction, toward the creation of these increasingly measured qualities of the individual on different markets. But it does not exist as a fully coherent, institutionalized thing. It does not exist as a number: I could not tell you that your übercapital is 596. But in China you can, right? In the US it is only well integrated on the financial side, but in China it is a reality. As you probably know, and I am sure people who come to this series are familiar with this, in 2014 the Chinese government gave a license to a data company to develop social credit systems.
And the way this works, and this is the one developed by Alipay, the subsidiary of Alibaba, is that they produce a score by scoring you on your credit history, on your ability to fulfill your contractual obligations. There is of course a verification element, which is always important. But it also scores you on your interpersonal relationships. So if I am connected to you and your score is lower than mine, that will lower my score; I therefore have an interest in breaking off the relationship. And it scores you on all kinds of preferences and behaviors that are deemed useful or not useful. The scoring system is also connected to third-party apps. When you have a high score, you get access in other apps, if you want to rent a car, for instance, to certain kinds of services or certain kinds of preferential treatment. And then, depending on your behavior in that app, say you run a red light with a rented car, you could imagine the system feeding back into the original credit score. So there is again the ability of the original score to multiply itself through the third-party apps, in the same way, if you will, that the digital companies originally multiplied themselves through their developers. It is exactly the same process, but for scoring. Now, originally the system was developed by a private company, Alibaba, but recently the State Council has called for the establishment of a nationwide tracking system to rate the reputation of individuals, businesses and government officials. So there is now increasing integration, at least planned integration, between public and private sources, to develop a credit system that covers the whole society.
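The network effect just described, where a connection with a lower score drags your own score down and so gives you an incentive to cut the tie, can be sketched as a simple penalty rule. The rule and all numbers are hypothetical; nothing is known publicly about the actual formula.

```python
# Sketch of score propagation through social ties: lower-scoring contacts
# impose a penalty on your own score. The rule is entirely hypothetical.

def adjusted_score(own: float, contacts: list, weight: float = 0.1) -> float:
    """Lower a score by a fraction of the gap to each lower-scoring contact.
    Contacts with equal or higher scores have no effect."""
    penalty = sum(weight * (own - c) for c in contacts if c < own)
    return own - penalty

print(adjusted_score(700, [750, 600]))  # → 690.0; only the 600 contact hurts
print(adjusted_score(700, [750, 800]))  # → 700; no penalty at all
```

Under any rule of this shape, dropping the low-scoring contact immediately restores the score, which is exactly the perverse incentive the lecture points to.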
Now, what is important is that this will then help regulate pretty much everything, at least that is the plan: your broadband speed, your foreign travel visa, your social benefits, your access to elite restaurants, your insurance premiums, possibly even, according to some reporting, the quality of the schooling offered to your children. So you can see that you have a ranking system of individuals connected to a system of rewards and punishments. There is a plan for corporations as well; this is from the China Monitor last year, which details exactly what kind of data might come in for corporations and what the effects might be on a certain number of outcomes: the conditions of credit a corporation might have access to, access to public contracts, the travel privileges of the corporation's executives and officials, and so on. And then you can also imagine an integration, and this is part of the plan, between the individual scores and the company scores, so that if a company is composed, if you will, of individuals with lower scores, that will also lower the score of the company. So you will have an integration. Why does this matter? Well, of course people worry about the political consequences of this Big Brother type of system. But perhaps more profound and more interesting are the consequences in terms of social stratification and inequality. The road to increasing efficiency and profits will now be to match individuals to what the algorithms determine they deserve. The process of value extraction, if you will, will capitalize on people's behavior, their dispositions, their habits, refracted, of course, through the very particular classificatory architecture of the digital economy. Now, this is not new.
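The planned coupling of individual and company scores can also be sketched: a firm's standing is blended with the scores of its members, so low-scoring individuals pull the whole entity down. The blending rule is hypothetical; only the direction of the effect is taken from the plan described above.

```python
# Sketch of individual-company score integration: a company's score is
# blended with the average of its members' scores. Rule is hypothetical.

def company_score(base: float, member_scores: list, weight: float = 0.3) -> float:
    """Blend a company's own score with the mean score of its members.
    `weight` is the share contributed by the members."""
    avg_member = sum(member_scores) / len(member_scores)
    return (1 - weight) * base + weight * avg_member

# A well-rated firm staffed by low-scoring individuals loses standing.
print(company_score(800, [700, 600, 500]))  # → 740.0
print(company_score(800, [800, 800, 800]))  # → 800.0
```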
Inequality has always been moralized. All forms of domination have been buttressed by distinctions between the deserving and the undeserving. And of course, this is the debate we constantly have about the welfare state: do people deserve the social benefits they have access to? This is how systems of power always legitimate themselves. But in this particular case, you can see that this is a system that is going to be harder to contest politically. The first reason is that mobilization in this kind of system does not come about naturally, because rather than groups, say workers versus managers or, as Marx put it, since it was his birthday, the proletariat versus the bourgeoisie, you have a graded scale. The social collectives that these classificatory practices produce are just aggregations of people. There is no natural solidarity among them; they are not solidaristic communities bound by categorical status or by voluntary choice. That is the first point: it is hard to develop a politics in this kind of system. But second, in this system, outcomes appear to be legitimate. In China, Sesame Credit is quite well accepted because it is seen as increasing trust. As Shao Zhiqing of the Shanghai Municipal Commission of Economy and Informatization, who is in charge of the Honest Shanghai app, put it in response to an interviewer (I am quoting here from an article in the Süddeutsche Zeitung): it is all about bringing order to the market, and ultimately it is also about social order. And this remoralization of the whole system comes from the fact that differences in outcomes are seen to emanate only from behavioral differences.
Rather than from some other sort of difference, categorical differences like those between men and women, or discrimination, which are protected by law, in this case it is just you. It is your fault. And therefore those, whether individuals or corporate entities, who are outside are really, truly outside. They are truly outside because the principle of their exclusion seems to lie truly within them. As an article in Wired puts it: in the end, it's just you. So you can see the social force of this kind of system. And of course, it is not lost on the designers in China. In the city of Rongcheng, a civil servant tells a journalist from the Süddeutsche Zeitung, who has come to inquire about his city's pioneering role in this domain: we want to civilize people. He proudly cites the founding document of the Office of Honesty of the City of Rongcheng: allow the trustworthy to roam everywhere under heaven, while making it hard for the discredited to take a single step. Now, whether the system will work the way it is supposed to, whether it will be something totally fictional, garbage numbers that people ignore, or whether it will be something in between, something that kind of works but not exactly as intended, we do not know yet. But if the Chinese situation is a guide to the potential appeal of universal scoring, and remembering that similar if more decentralized tendencies are at work on this side of the world, there is little reason to think that this kind of design will not be part of our future, in the way the FICO score is already embedded in our present. I will stop here and let you meditate. Thank you very much.