I'm very excited to introduce this lunch talk and our terrific speakers today, but before I do that, I just wanted to encourage people to feel free to tweet about this event using the hashtag #BKCHarvard. There will be an opportunity for questions when our two presenters finish up, so I would ask that you think about your questions and hold them until they complete their presentation. We certainly encourage and welcome those of you who are new to the Berkman lunch talk series to feel encouraged and empowered to ask questions. We love when new people come, and you're very welcome, so please ask questions when the time comes. Now, algorithmic accountability and governance are things that have been really important to the work of the Berkman Klein Center for quite some time, so I'm quite excited about our presentation today on Algorithmic Consumers. Our two speakers are Michal Gal, who's a professor and director of the Forum on Law and Markets at the Faculty of Law at the University of Haifa in Israel. She's the author of several books, including Competition Policy for Small Market Economies; she's been chosen as one of the 10 most promising young legal scholars in Israel and is one of the leading women in competition law around the world. Niva Elkin-Koren is a visiting professor this semester here at HLS, where she's been teaching digital copyright, and we're very pleased that she's been a faculty associate at the Berkman Klein Center. She's the founding director of the Haifa Center for Law and Technology and a former dean of the University of Haifa Faculty of Law. Her research has focused on legal institutions that facilitate private and public control over the production and dissemination of knowledge, and she's an expert on digital governance and, as you'll see in the next few moments, legal oversight of algorithmic decision-making. So with that, thank you, and I'll turn it over to you.
So thank you very much for this opportunity to share this paper with you today. It is a paper; we forgot to upload this version to the website, but we'll do it right after this presentation. But it's also part of a few projects that we have been collaborating on, and therefore we are very much looking forward to your input and reactions to some of the ideas that we want to present. We have presented this in various formats in the different worlds in which we reside, and we should probably thank you for bringing us together; this will be the first time we present it together. So let me say a few words about the challenge that brought us together. Michal is a world expert in competition law, working on digital markets, two-sided markets, and markets for free goods, and I've been working on algorithmic decision-making, and especially on developing the legal institutions that could help us oversee this type of decision-making. In the course of co-supervising a PhD student of ours who is working on the future of data markets and behavioral advertising, it occurred to us that maybe we have been focusing on the wrong issues. That is, a lot of the discussion of behavioral advertising looks at behavioral advertising itself as the problem. In behavioral advertising, we're looking at practices where some services are provided for free in order to attract consumers. Consumers are then treated as a resource: their activities are monitored and documented, and the data collected on consumers is used to sell services tailored to what is assumed to be their needs.
And the question is whether this model is already outdated, whether it's at all efficient to use this system to convince consumers to buy, when we can actually use data, machine learning, and artificial intelligence to connect suppliers and consumers directly. When we look at the literature on algorithmic consumers, we often see the dark sides, and there are some serious challenges and concerns regarding consumer autonomy, privacy, hidden discrimination, and some risks of collusion and power in the market. But in fact a lot of this literature, including some of what we heard in the Berkman talk last week, focuses on suppliers. The question is whether we can actually focus on consumers: whether we can imagine a technology that would work for consumers and, more importantly, whether we can envision a market that would actually produce the type of algorithmic consumers that would promote the values we care about. So that is basically our project: looking at the affordances provided by some of the tools that we're all using and that we see coming, better understanding how they work and what type of affordances they offer, and especially looking at the market dynamics they create. If there is one insight, or bottom line, that we can give you from this lecture, it is that the way these affordances come about will probably depend on market dynamics and, to some extent, on the way our regulatory interventions shape them. If we take a closer look at some of the tools that we're using, some of them are already here: we use a lot of tools that assist us, as consumers, in executing online transactions, such as ranking applications, recommendation applications, always-on devices, and apps that help predict prices.
But when we talk about algorithmic consumers, we talk about a new generation of systems that go beyond offering more information or using information more efficiently. We look at systems that would actually help us identify our needs, help us select the optimal transactions, and in many cases also help us execute those transactions. Our argument in the paper is that even though these technologies do not reflect any major technological leap, they might be a game changer in terms of market behavior, but also in terms of the legal challenges they pose. If we take a closer look at the way algorithmic consumer systems would perform a transaction, we see a lot of data sources: the Internet of Things, a lot of sensors and wearables detecting some of our behavior, such as a refrigerator that signals that we are out of milk. Then there is the use of data analytics to give meaning to the data collected by these sensors, the use of AI to make decisions about the best transaction, and the use of shopping bots to actually execute the payment and arrange the delivery back to the consumer. So think, for instance, of a sensor linked to a pet and to a food bag. The sensor would detect some information about the health of our cat and also the quantity of food left in the bag. That could be communicated and linked to lots of other data collected from other cats and food bags around the region or the country, but also to other data sources that would give it meaning: whether the cat's temperature signals a disease or not, whether it needs more nutrition, and what the price situation is, whether we should switch from the cat food that we bought in the past to something new.
Once this information is processed, the best and optimal decision could be selected and the transaction executed by the system. Consumers in this scenario would only have to opt in to a system of that sort, and then a lot of the other activities would be taken care of. There are, of course, obvious advantages to a system of that sort. Some are fairly trivial: lowering transaction costs and making transactions quicker, enabling us to carry out many more of the transactions that we need, and enabling consumers to take advantage of systems that allow a more sophisticated analysis of the parameters and can take many more parameters into account in their decisions. I think some of the advantages are less obvious. For instance, algorithmic consumer systems can help us overcome some of the biases created by systems that use all sorts of manipulation, helping us to be more rational in our choices: buying what we need rather than what we crave, in some cases not going after the colorful package or the attractive commercial, but actually making a decision that is true to our needs. Also not trivial is the ability to defer some decisions to the system and leave consumers, according to their preferences, with the decisions that really matter to them. When we talk about information overload, that could be a really important option; not all of us want to make specific choices about paper towels or the brand of sugar that we order. There are, of course, some downsides. These systems could make a bigger chunk of our life vulnerable to cyber attacks; we can talk about that more in the Q&A. The system is supposed to convey our preferences, but at what point will the system actually shape our preferences, taking nudging into account, and how do we secure against that? There is always a fine line between the two.
There is, of course, some reduction in autonomy. Consumers would opt for that, but to what extent can they retain discretion, and how much discretion would consumers have in using such systems once they buy into them? If we give up our decision-making capabilities, the ability to choose and select for the small decisions, are we going to weaken our ability to make the big decisions? Is this like a muscle? That's more for the psychologists in the room. There are some cognitive effects, of course, as with every new technology; some have been documented in the literature, such as the shift from maps to GPS and the way it affected our brain and sense of location. And of course the big elephant is privacy: we don't want to live in the Matrix, and we need to be able to protect against that. But the argument of the paper is that a lot of these downsides actually depend on what happens in the market: in what type of market these technologies evolve, and how market dynamics facilitate diversity and create capabilities for overcoming these and other challenges. We did not yet mention collusion, discrimination, and power, but for that I will turn to Michal to continue. Thank you, thank you so much. Okay, so what I want to talk about is how this technology might shape market relationships. Niva talked a bit about how it would shape consumer welfare (there is, of course, much more in the paper), but let me talk a bit about suppliers. How would it affect them? One of the things that would change is what to invest in. What would suppliers invest in, in a world where algorithmic consumers make some of the consumption decisions for people? Part of it would be investing less in marketing that caters to irrational biases. Some of it might be fewer physical stores and more investment in virtual ones.
Some of it would be investing less, maybe, in translating your websites, because the algorithm doesn't really care what language the website or the offer is in. Part of it would be reducing the level of risk that buying from you would create, because the algorithm might be able to look at that parameter as well. Another thing is that it might affect how much to invest, because if decisions become at least a bit more rational with regard to some products, then you can make different decisions than the ones you made before. And another thing is that it might even create fairer contracts. Why is that? Even today, there are people working on algorithms that can read the contracts offered by suppliers online and suggest how fair each contract is. If you know that the algorithm is going to give points for how fair your contracts really are, and factor that into the decision algorithm, then it might lead us to somewhat fairer contracts than we have today, in a world in which we usually just accept the contract that is online without really reading what we are agreeing to in many transactions. Now, let me talk about the interactions between consumers on the one hand and suppliers on the other. I think these interactions lead us to some of the most important things that come out of the paper, because as Niva said, most of the research about algorithms in the marketplace has focused on suppliers, and it has focused on some of the problems that algorithmic suppliers can create in the marketplace and are actually creating.
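The contract-reading idea mentioned a moment ago, an algorithm that scores how fair a supplier's terms are and folds that score into the purchase decision, might be caricatured as follows. Real tools would use trained language models; this keyword scan, and every marker and weight in it, is a purely hypothetical illustration of the incentive mechanism.

```python
# Toy illustration of folding a contract-fairness score into a purchase decision.
# The red-flag phrases and weights below are invented for this sketch.
UNFAIR_MARKERS = [
    "waive any right", "sole discretion", "binding arbitration",
    "may change these terms at any time", "no refunds",
]

def fairness_score(contract_text: str) -> float:
    """Return 1.0 when no red-flag clauses appear, lower as more markers show up."""
    text = contract_text.lower()
    hits = sum(marker in text for marker in UNFAIR_MARKERS)
    return max(0.0, 1.0 - 0.2 * hits)

def adjusted_price(price: float, contract_text: str, fairness_weight: float = 0.3) -> float:
    # An unfair contract inflates the effective price the algorithm compares on,
    # so suppliers gain a competitive reason to offer fairer terms.
    return price * (1 + fairness_weight * (1 - fairness_score(contract_text)))
```

The design point is that fairness becomes one more parameter in the comparison, so a supplier with harsher terms must compensate on price, which is exactly the pressure toward fairer contracts described above.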
One of them is, for example, discrimination. If the supplier has an algorithm that knows what your preferences are, what your past choices were, who you are, if it has information about your digital shadow, then the prices that different consumers are offered through this algorithmic supplier may be very different, depending on the elasticity of demand of each and every consumer. And this is already happening in the marketplace; you saw last week that we are approaching almost perfect price discrimination with at least some of the algorithms operated by suppliers. So one of the things we suggest here is that algorithmic consumers can actually fight part of this discrimination. How can they do that? In two ways. One is that now you have an intermediary that buys on your behalf, and if the supplier does not know whom the intermediary represents, then it has to change its pricing decisions. The other is the aggregation of consumers by an algorithmic consumer, because the way we envision the market, and the way these algorithms already operate, is not that there is one algorithm per person; rather, there will be several algorithmic consumers offered in the market. They're already offered by Siri and others, and each will represent a lot of consumers. Now if that is true, then when you buy something through the algorithmic consumer, it can aggregate the choices of many, many buyers. It can buy, for example, a thousand books of a certain kind; however, the supplier will not know who those thousand people are, so it cannot really discriminate and set the price according to each and every one of those consumers. Another thing that many people are talking about is actually a very big issue for competition law people.
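The aggregation point can be made concrete with a minimal sketch: the intermediary pools many consumers' orders so that the supplier sees only anonymous totals, not the individual buyers whose demand elasticity it would need for personalized pricing. The function name and the data are invented for illustration.

```python
from collections import Counter

def aggregate_orders(individual_orders: list[str]) -> Counter:
    # The intermediary pools many consumers' orders into one anonymous bulk order.
    # The supplier sees only quantities per item, not which consumer (or whose
    # willingness to pay) is behind each unit, so it must quote a single price.
    return Counter(individual_orders)

# A thousand readers of one book and 250 of another become two bulk line items.
orders = ["book-123"] * 1000 + ["book-456"] * 250
bulk = aggregate_orders(orders)
```

The privacy gain and the bargaining gain come from the same move: stripping buyer identity while preserving total quantity.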
There's going to be an event at the OECD about this, and many competition authorities, including the American FTC, are looking very closely at this kind of behavior: algorithmic suppliers have a tendency to engage more in, I wouldn't even call it collusion, I would call it parallel conduct, or coordination. Algorithms make it much easier to coordinate between suppliers. Why is that? Because you can do it through the algorithm; you don't actually need to meet, sit together, and decide. The algorithm, once it has a model that reacts to other models in the market, can create a very high level of coordination in the market, and it's immediate. Somebody reduces the price, and within a millisecond the other suppliers' algorithms also reduce their prices. So why would the first supplier reduce the price in the first place, if everybody else is going to react? Things that for many years we assumed were done through human interactions, took time, and sometimes left traces we could find and limit, can today be done through algorithmic suppliers. Algorithmic consumers can counter this at least partially, and this really is only a partial answer. Part of it comes from buyer power and computational power, which can counter some aspects by, for example, deciding that no matter what the price is, you're always going to buy a certain amount from a newcomer in the market, or that if the price rises above a certain level, you will wait and not buy. Now, if there are several algorithmic consumers through which many consumers buy, then they can overcome some of the collective action problems that arise in this world. Another thing that is on the agenda today, and last week there was a big conference about this at the University of Chicago, is the concentration of many markets.
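The two commitment strategies just described, always buying a fixed share from a newcomer regardless of price, and refusing to buy above a pre-committed price cap, could be sketched like this; the function, parameter names, and numbers are all hypothetical.

```python
def allocate_purchases(quantity: int, incumbent_price: float,
                       price_cap: float, newcomer_share: float = 0.1) -> dict:
    """Split an order between incumbent and newcomer per two pre-committed rules."""
    # Rule 1: commit a fixed share to the newcomer no matter what it charges,
    # lowering entry barriers and weakening incumbents' tacit coordination.
    to_newcomer = int(quantity * newcomer_share)
    # Rule 2: refuse to buy from the incumbent above a pre-committed price cap,
    # so a coordinated price rise simply loses the incumbent the sale.
    to_incumbent = quantity - to_newcomer if incumbent_price <= price_cap else 0
    return {"newcomer": to_newcomer, "incumbent": to_incumbent}
```

Because the rules are committed to in advance and applied automatically across many consumers, defecting suppliers cannot profit from the millisecond price-matching dynamic described above; the buyer side no longer rewards it.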
And here again, we believe that algorithmic consumers can solve at least part of the problem of this concentration in important markets. I want to talk especially about certain markets, because today there is a handful of digital intermediaries, mega-platforms, which control effective points of access to potential users in many markets. Think about smart devices: we have the iPhone and the Kindle, for example; for operating systems we have iOS and Android; for application stores we have the Apple App Store and Google Play. And I can give several more examples of these mega-platforms, which control a lot of information and create highly concentrated markets. Once access to such platforms is essential for suppliers and consumers, the platforms have a lot of power, which can be translated into harm to consumers; that is part of the work that was presented here last week. And part of the problem is that they control big data, a lot of the data, because we do a lot of our searches, a lot of our digital activities, through these agents. Now, this has led some authors to a bleak world view in which these mega-platforms are also going to control the digital butlers, which will then operate in the platforms' favor rather than for consumers. What we're saying is that we should not be so bleak. We think that technology is a bit like a phoenix, which reinvents itself time and again, sometimes, of course, with the assistance of correctly structured regulation, so that degrees of power and methods of control might change. Let me offer you just a bit of our thinking here. Part of it is, of course, the counter-power of algorithmic consumers, which can create other sources of supplying consumers with at least some of the things that they look for in search engines, for example. Part of it is the locus of data.
Because once you think about search as the main gateway to consumers and to suppliers, that creates one world view. However, we're not there; we're already in a world of IoT. We're already in a world with billions of sensors, and it is suggested that we are going to have many more. So we're going to have many more sources of data, and some of them are going to provide even better data than we have today. If that is true, then it might be that control over data, over the digital shadow and information about our preferences, is going to be much more dispersed, and so algorithmic consumers can use that information and operate in a world which is much less concentrated than the one we see today. Now, some regulatory implications. I'm going to go very fast through this, because I've already taken a lot of time and we want to hear your questions. This new technology creates so many interesting and intriguing questions. For example, in contract law: can an algorithm operate in bad faith? In tort law: who is responsible for harm created by an algorithm? In consumer law: what should be considered manipulation in a world where algorithms talk with each other? In corporate law: when is an agent within his duties when he's not following an algorithm's advice? And of course, one of the most important areas here is competition law. So let me talk a bit about some of the challenges that arise in competition law, which is my area of research. One of them is access to users: if the way to reach users is through these intermediaries, and they are concentrated, then algorithmic consumers might be good technologies, but they might not be able to access users. That is one thing we would need to think about. I think even more important is access to data. Where is the data located? Who has control over it?
Will the IoT create more data that is more dispersed, so that the data created by searches would be of less importance in this world? Just envision a world in which we have a sensor on our glasses that lets the algorithm know exactly what we're doing at every minute, and a sensor on our body that lets the algorithm know some of our reactions to the real world. The technology is already there. Exclusionary conduct, I think, is a very big challenge, and this is something that we would need to think about. Part of it, and we can talk about it more in the Q&A, is what happens if an algorithmic consumer decides not to buy from a certain seller; another part is exclusionary conduct by intermediaries, because they would like to control the algorithmic consumers. If you look at their investments, these large firms, eBay and Apple and Google, have invested a lot of money in the past two years or so in creating these digital butlers, so that we use their digital butlers rather than other firms', because they do not want to lose the gateway to consumers. And finally, buyer power: what happens in a world where these algorithmic consumers might be very big and strong? Should we rethink some of our policies? I would say yes. If you want to know why, let's wait for the Q&A. So thank you so much. The paper is going to be published; we made a mistake in not putting it online, but you can definitely find it on SSRN, and it's in the last stages of being edited for the Harvard Journal of Law and Technology. Thank you. I wanted to thank you first. This was really interesting.
And one question that I was thinking about is that it sounded like a lot of the benefits that you were talking about rely on having a robust and independent market of algorithmic consumers. And it seems like in some cases the algorithmic suppliers have recognized that possibility and have tried to also create the algorithmic consumers. If you ask Siri for a ride, it will only give you the options that they have allowed people to plug in. If you ask Alexa to buy you cat food, it will only search on Amazon for the preferred cat food supplier. So my question is, what are the necessary ingredients, or the next steps, to create this more robust market of algorithmic consumers, so that people aren't necessarily locked into Alexa, which is tied into the Amazon ecosystem, and examples like that? I will start with your concern. Do you like it that you can only buy, through Alexa, the things that are available on Amazon? Probably most of us don't, right? So I think that in a lot of these discussions we underestimate consumer demand. Look, for instance, at some of the privacy issues: we think about a situation where a lot of data is collected and no one is catering to privacy, but some companies do cater to privacy. Maybe we don't like the idea that privacy is for sale; maybe we want privacy to become a public good that everyone has. But there is a market for that. So once you create some demand for appliances that could talk to different suppliers, that would be very beneficial, and our prediction is that as we move to the Internet of Things, there will be more pressure. Right now, the choices you can make are very limited: Kindle, for Amazon, governs only your books, and then Alexa also governs your music selection. But at some point it will become a market of commodities.
And as Michal mentioned, a lot of the data collection would be more dispersed and held by more players on the ground. It would actually matter not only whether you hold a lot of data, but also whether you own the infrastructure, and we assume that that would encourage more competition. So one pressure comes from demand, and one from the sources of the data. Let me just add to that from a competition point of view, because I think the question you raise is an important one, and I think that's where regulation comes in. We should know what technologies are out there and ask ourselves what kinds of barriers firms are likely to put up so that we would not get to where we want to be. One of the methods, of course, is for firms that currently control the data and have a lot of power, like Google, to try to stay in the market game and not lose that power because of the IoT, just like Niva said. So one of the things they're doing is tying. Take Siri: what it does is tie all the free services which you are already using, a system which you're already aware of, and a lot of free suggestions and services, together with this algorithmic consumer. And the question becomes, I think, whether this kind of tying should be allowed, under what circumstances it should not be allowed, and what the pros and cons are. Should we ensure that there's more information? Would information solve the problems, with consumers knowing that the algorithm does not work in their favor? All these are questions that I think are important to put on the table. Adrian Gropper; I work for an advocacy group, Patient Privacy Rights, directly in this space. And I'd like to hear how you think we might solve the regulatory capture issues that I face all the time. To oversimplify it: the consumer side is not organized.
And a lot of what happens with respect to the regulations you're alluding to is effectively impossible for either the patients or the physicians, in my case, to participate in. So, I don't think that there is a ready-made solution for capture. But the argument that we are trying to make is that consumers can also use the technology in their favor, and it doesn't necessarily have to be market players; it could also be NGOs, right? Some of the information that Michal mentioned could actually come from NGOs. And you're right: they're always underfunded, they're not well represented, and they're sometimes not even invited to the table of regulatory decision-making. But using this technology would not make them weaker; it could make them stronger. So use it for collective action; that would be one option. It's not going to solve the problem of capture. Unfortunately, at least with this particular technology, we cannot see a future with no political capture, but I think it would not necessarily make things worse, and maybe a little bit better. Urs, I'm the executive director of the Berkman Center. Thank you so much for the talk. My question is also linked to last week's presentation, in one way or another. The takeaway last week was: forget competition law, look at privacy. And now I hear a little bit: despite all these privacy issues, look at competition law, and we can help on that side. So I'm puzzled, in a way. And my question is, as you were addressing Ryan's question about the market conditions on the one hand, and then looking at the regulatory responses that can help create those market conditions: where do we stand with our repertoire of frameworks and theories, either for understanding markets or for regulation, and how up to date are those frameworks and theories as we deal with the opportunities and challenges that you mapped out?
Are we well equipped to make these determinations of where markets work and where they do not, and of what types of regulatory tools we can and cannot use when intervening in these very new types of ecosystems you describe? It's a bit of a meta question, but I think an important one. Very good question. Do you want to start with the competition law side? Sure. One of the exciting things, from an academic perspective, about this world is that many worlds come together. You cannot speak about competition law and disregard privacy or consumer protection or other things; it all has to be combined, and this is one of the challenges that we're facing: how do you combine all these different issues? One of the problems with competition law, especially in the US, though less so now in Europe (and actually I think that Europe, in a way, is leading over the approach taken in the US), is recognizing that non-price factors come into a competitive decision. We all talk about comparative advantage in the market, but a comparative advantage doesn't necessarily come from price; price is only one of the parameters that we take into account as consumers when we make our decisions. Another, for example, might be the level of privacy offered by a certain firm. So one of the things that the German and French authorities have started to do is to look at privacy as a quality issue and take that into account in competition law decisions. Another example is free goods; this is an area which I've written on with Dan Rubinfeld. There, one of the issues is that if a good is free, it doesn't affect the market price; however, it might affect the quality of the goods. So we need to tweak, in a way, what we already have, and start thinking about new tools. Sometimes all we need is to change a bit.
For example, for anybody who knows competition law, there's the SSNIP test, which focuses on price. One of the things that we can do is change the focus so that it is not only on price but on quality parameters as well. However, some of the challenges will have to be completely rethought; you can't work within the existing framework. One of them is what I alluded to before: oligopolistic coordination, or oligopolistic parallel pricing, through algorithmic suppliers. Here I think we would need to really rethink our tools, because the assumptions the existing ones are based on do not work anymore. We assumed that most markets are not concentrated, but we don't need concentration anymore for coordination in an algorithmic world. We assumed that reactions would be slow, so that if somebody changed the price, it would take time for others to change theirs; that's not true anymore. So some of the assumptions our regulation is based upon would need to be changed. Now, it's a bad practice that both of us are answering each question, because then we're not going to hear many more questions, so I'll just be quick. I absolutely agree with you: this is an excellent meta question, and I think it's true for the whole shift to algorithmic decision-making. The way you link privacy and competition is also important, and I think that here, again, Europe has been leading in its thinking. So, for instance, the idea behind privacy in Europe, bringing the power back to the people, would probably have to be reconsidered, even the very idea of having an autonomous choice. In the system that we are describing, it would be meaningless to talk about an autonomous choice to decide what to do with your information, because once you sign up to Alexa, or to the algorithmic consumer that would work in your favor, you have given up.
You have given up the use of your data, at least by the provider that is collecting it in order to refine your preferences and make sure that this algorithmic shadow of ours is actually making precise predictions of what it is that we need and want. But I think in Europe there is some thinking about, for instance, data portability, which is built into the new data protection reform. That is really important. That could actually create the competition that we want, right? When people are fed up with Amazon and want to switch to the company that we cannot even name yet, because competition will emerge, then they can go with their data, and that data would be compatible with any of the standards that would be available. This is how regulation can actually take care of competition where data is the engine and the asset that is driving this economy. So my question is: to what extent should suppliers be able to discriminate between algorithmic consumers and human consumers? As I ask that, I think about, for example, ad blockers, where developers of ad blockers try to seize consumer surplus against the wishes of publishers. So I could imagine, in the same way, a large supplier like Amazon saying: we're not going to allow algorithmic consumers on our site, only humans, as a way to protect our own profit margin. That's an excellent question, you know, for regulatory authorities, right? This is probably something that you don't want to allow. If your assumption is that this is good for competition and for consumer welfare, you would probably hold this type of practice, this type of exclusion, illegal. So thank you, this is fascinating. But I do think that, you know, going back to Ryan's question, there is a lot of optimism in the way you describe this brave new world.
And I'm just kind of wondering. You present these algorithmic consumers as a counterweight to the algorithmic suppliers, kind of leveling the playing field. And I can't stop looking at the examples that you put up there, Alexa and Siri, these kind of super-powerful butlers, as you call them. And maybe I'm just more pessimistic by nature, but I see them controlling the markets. So I'm wondering where this optimism comes from, on the one hand. On the other hand, there are some regulatory responses to this, which we know from intermediaries in other contexts. For example, there are intermediaries in other countries that are supposed to help us consumers choose between cell phone plans, which is a very complicated choice. And in some countries these intermediaries have to be certified by the government, and the certification is based on the fact that they are not beholden to the suppliers, to the cell phone companies. They have to be independent in their business model. So maybe there are regulatory ways to separate the algorithmic consumers from the algorithmic suppliers, but I think if we don't do that by regulation, then it's going to be very difficult. Just one final point, going back to these cell phone decisions: it's really difficult for consumers to choose the right plan. And now imagine how this market for algorithmic consumers will work. As you imagine it, consumers will have to be able to choose between different algorithmic consumers, between different algorithms. And I'm wondering, do you think consumers have the ability, the sophistication, to distinguish between the different algorithms and choose the one that is better for them? The choice of algorithm seems to be several orders of magnitude more difficult for consumers to figure out, in terms of dimensions.
How do they deal with my preferences, what information do they have, who are they beholden to, what is their business model? I can think of at least 20 different dimensions, and I can't imagine how a consumer would be able to choose effectively between these different algorithms. So, fascinating, but I'm still pessimistic. Well, first of all, I think it's always better to be more optimistic, especially nowadays, and also just in terms of strategy. This is a technology, it's out there; the genie is not going back into the bottle. The question is whether we can use it for consumer benefit, or whether we give up on that. You'll see that in the paper itself, sometimes we call it algorithmic wars, and we think about this as a competition between algorithms pulling in different directions. And I think that some of the problems that you mentioned are valid. When you think about consumers making a choice ex ante, thinking which would be the best algorithmic butler that could serve me, I think that could be very complicated and sophisticated. But then if you think about the experience and the ability to check out of and into this system, then you think about a competition that is more vibrant, and could actually be more effective. And actually that's what we have seen with cell phone plans, you know, between mobile companies, right? Once there is competition such that people can just switch in and out between providers, then that created some competitive pressure. Let me just be very quick and point out a few things. I envision a world in which algorithms help us choose the algorithmic consumers. So you will actually have an algorithm making all the comparisons and telling you which one is best. Sorry? Yeah, yeah, definitely.
So this is one of the things that might exist in this world, and another one is of course information. As long as you have enough information about the different options, and you have examples of what was the best choice for you in that world versus what the algorithm chose for you, and there might be some method of providing you with this information, maybe through algorithms as well, then you'll know: Siri is not really good for me, or it's not good for me with regard to these decisions, but it's good for those. And you're completely right that right now the firms investing in these algorithmic consumers are not necessarily the ones that we would have wanted to invest in that. But does that mean that others will not invest? I don't think so. Hi, Vivek Krishnamurthy with the Cyberlaw Clinic. A question and a comment. The question is this: do you envision this working across all markets for all products, or are there certain kinds of markets that are particularly well suited? I think about the market for toothpaste versus air travel, right? There are 36 varieties of toothpaste at Target, and how do I choose between them? It's bewildering, but it's also hard to get me to reveal my preferences. Whereas airfares are a market where information is a lot freer, where the goods are more comparable, and where the distinctions between the goods can be more readily drawn. So I can more easily see an algorithmic consumer acting as my fiduciary, that's the comment, on airlines than on toothpaste. The second point is: if you're going to interpose a new kind of intermediary that's going to make these choices and collect a lot of information, from a regulatory perspective I would think that it would have to be a fiduciary. It would have to have a legal responsibility to optimize my welfare and not someone else's, to prevent capture and also to deliver the benefits to consumer welfare that you're proposing.
But then there's the question of cost. How does this fiduciary recoup the cost of the service that it provides? Is it going to be skimming a percentage, or charging a fixed fee? I mean, there are some interesting economics behind how one pays to get to this utopia, which I agree is a better place than where we are now, but there seems to be a bootstrapping problem. And just to piggyback off that, there was a question online on Twitter: do we need a fiduciary duty for algorithmic consumers, since otherwise they will make profit-maximizing decisions? For example, Alexa would favor Amazon. Okay, good. Let me pick up on your first question first: do we envision this operating in all markets? I think you can take it even further. You talked about toothpaste versus airfares; these are easy examples. Let me give you ones that I think are more problematic. Would we allow the algorithm to choose, for example, our president or our representative? Maybe it can make a better choice, I don't know. Or our business partner? I would think that in some spheres that would be problematic, and I would even say that in some spheres we ourselves are not going to exercise that. And let me give you another challenge. Would we allow children, very young children, to make their decisions through algorithmic consumers? Or is making a choice a skill, something that we really want children to learn, so that we would prevent the use of these algorithmic consumers below a certain age, at least with regard to certain products? So this choice of where exactly we apply these algorithms that make choices, autonomous choices, for us is, I think, an important one. And there's a next paper, which is already written but not online, which deals with these issues. So if you want, you can send us an email and we will send it to you.
Now, with regard to a fiduciary duty of the algorithm: some people have even taken it one step further and said that the algorithm itself should be given a legal identity. I would not go there. I would think about who created the algorithm, who operates it, who uses it, who controls it, but not necessarily the algorithm itself, because I think that completely blurs our roles as humans in using it and creating it. However, I think that we can think about systems where those who operate algorithms, at least in certain high-risk markets, would have to carry a certain kind of insurance. So I would go with insurance rather than a fiduciary duty of the algorithm. Can I just say quickly that fiduciary, yes, I think that's a very good legal framework, and that goes to a lot of my own thinking about data and the way it has to be handled. And with respect to the cost, just very quickly: we're talking about algorithms, and we became so cynical that we forgot about open source, and that this is something that can actually be developed bottom-up. We look to the Amazons of the world to create these things for us, but this is something where the crowd actually has an advantage, in combining data and creating the code. So I would also think about the role of society here. Hi, I'm Christoph. Actually, could we interrupt you just for a second, Christoph? We're going to stack the last three questions. They're going to come from you, Ron, and you, and then we'll answer them. Can you do that? Do you mind? Okay, thank you. Hi, I'm Christoph Graber, faculty associate at the Berkman Klein Center and visiting from the University of Zurich. So first, thanks very much for studying what happens when the genie is out of the bottle.
You are very optimistic, as has been said already, and it appears to me that your optimism relies very much on the work of regulatory authorities, including competition authorities. But as Urs has already mentioned, one of the problems is to make sure that we have a comprehensive view of the values and interests that are at stake. You mentioned an example of how privacy issues can be reformulated within the scheme or logic of competition law. But beyond privacy there are many more issues. Within the competition law framework, in a broader perspective, you have the business interests of the company, you have the consumer interests, and then beyond that the privacy interests. But that is not all; as we have seen last week, there are also freedom of speech interests and even democracy interests. So how can you make sure that all these various and heterogeneous interests are taken into account in a comprehensive way in your model? And to take it in a slightly different direction: my name is Caroline Troh, and I'm a researcher at the Fletcher School at Tufts. It seems like the assumptions that we're going to be basing the algorithms on are going to be key. And just turning it over to machine learning might seem more attractive than relying on psychology studies or current science, especially because of the current replication problems in various psych studies. But machine learning can sometimes just entrench current biases rather than moving things forward in a positive way. So have you seen good mechanisms or systems to intervene when algorithms might be quietly, maybe not obviously, veering off course? And can they even do this without divulging some of the private details of the people they're trying to help? Do you want to take another question?
Okay, so with respect to machine learning and the use and collection of data: the basic principle of these systems is that they would work as shadows, right, so they have to have access. And actually, for us users to make this more useful, we would want to make more information available to a system of that sort. And that actually creates a market dynamic that would require some trust, and that is a force that I think we tend to underestimate: the extent to which consumer trust is important for the companies that are providing some of these devices. And I think that if your business model is based on trust, because that would make the machine better, because it has to be trained according to your preferences, that would ideally create a sort of partnership between the consumer and the companies that are serving this type of need. To the extent that I heard your concern, and I'm not sure that I understood you correctly, with respect to some path dependency in machine learning: I think that's a big issue. I would assume that people would have different preferences there, and that is the type of thing that we see now with the consumption of news, right, and the extent to which our filter bubbles are actually created based on our consumption of news and blog posts in the past. Some people have a preference to open it up and would look for apps that would enable them to do that and to see news that they were not aware of before, and I think that we would see the same with respect to consumer preferences regarding commodities. Do you want to speak to that, or to the first question?
Yeah, I would agree with everything that Niva said. Last week I gave this presentation, or something relatively similar, in a room, and a computer scientist stood up and said: we computer scientists can create an algorithm which does not rely on path dependence. It might be that you can devise some algorithms that can do that. Right now, if you're only relying on the existing data, I think that would be problematic. However, thinking about the technologies that we already know, I can envision part of the solution coming from other sources of data: sensors that provide information about your reactions to the real world in real time. That's a different kind of data than data about what you did yesterday, and that might change, or at least limit, part of the path dependence. So that's my answer here. Okay, the big question of democracy: how do we address the crisis of democracy? We started the semester with your presentation on this particular issue, right? Well, as I said when addressing the first question, algorithms are not going to solve the problem of regulatory capture, and they are not going to solve the problem of democracy. I think that we need to adjust our thinking. A lot of our discussions regarding algorithmic decision-making look at the way it is being abused by the big players and are sort of on the defensive. One of the purposes of this paper was to start to look at this as an opportunity and take a more proactive approach to some of these challenges. To think about how we are going to limit the way Amazon is going to use it, and how we are going to limit the way in which Google is searching through our data, is good. It's not that I object to that; I just think it's insufficient.
I think that in order to save democracy, if we look at the big picture, we need to really think about proactive strategies, and one of them is to offer alternatives. And let us just be reminded that 20 years ago, if we had had this conversation, we would have looked at Microsoft as the evil one, and Google was a very small startup; actually, it was not even a company yet. So I think that we need to bear in mind that a lot of these market players are changing in their status and power, and there is a role for us in creating more dynamic competition in order to facilitate this type of change. So it's not a solution, but it is a direction, or a strategy, for how we think we should go about this. So thank you again. Thank you so much. Thank you.