So, as some of you know, I recently had my first child, which means the 21st-century rite of passage where, in the middle of the night, when you really should be sleeping, you're instead wandering alone through the wilds of online shopping. And you, as a new parent, are buying things online that you never imagined you would buy online, like paper towels and bizarre sleep contraptions and crayons for birthday parties. And all the while you imagine you're alone, but in fact, as Professor Stucke is going to tell us today, companies are collecting data on you and running experiments. And so what he's actually here to tell us is why on earth Amazon thought that I was willing to pay $10 for a tiny box of Crayolas. I suspect it had to do with my zip code, and maybe with what I was willing to pay for one of those bizarre sleep contraptions. Professor Stucke is a professor at the University of Tennessee Knoxville College of Law. He's also a co-founder of the Data Competition Institute, and together with a co-author he has written this terrific new book, Virtual Competition. He is here today to talk to us about how algorithms are changing online shopping, and how competition law may or may not be up to the job of regulating them. We are thrilled to have him here to talk about these important issues, so please join me in welcoming Maurice Stucke.

Thank you so much. What my co-author and I were noticing is how much you're now seeing this shift from brick-and-mortar stores to online. And we also noticed that with that shift, there's an increase in the use of these pricing algorithms. And we asked, well, what would happen if these pricing algorithms could somehow collude with one another? And that started the whole inquiry that led to our book. So the first thing, and my co-author, Ariel, always says this is sort of the warning, is that we recognize at the outset that a data-driven economy can really provide us a lot of benefits.
And if you look at it, and we've got Einer here as well, all of these factors would suggest that the economy would become more competitive, because by increasing the amount of data and the algorithms, you can have dynamic pricing that can promote allocative efficiency. You can lower entry barriers. You can lower the search costs of identifying the products you like. And you can have significant efficiencies, and we don't dispute that. The second thing is that we're not anti-technology. We believe that big data and big analytics can be good, they can be bad, or they can be neutral. It depends on several factors: how companies employ these technologies, whether or not their incentives are aligned with our incentives, and certain market characteristics. So that's our disclaimer. But the one thing that we came upon is that we started hearing that, well, with these online markets, because they're so dynamic, they rarely, if ever, will be anti-competitive. And we were wondering, is that necessarily true? And that led us then to ask, well, what would be some of the anti-competitive scenarios that might result? The key thing here is that competition in the online world may appear to be competitive. There may be the facade of competition, but you're not necessarily going to benefit to the extent that you would think you would in a competitive market. So these are the three scenarios that we're going to address today. The first one involves algorithmic collusion. And when we looked at this, we came up with the following four scenarios. The first scenario, the messenger scenario, is the easiest. Here you have competitors that are agreeing to collude, and they're using the algorithms to help perfect their cartel. And in the antitrust world, that's considered a no-brainer, because the illegality inheres in the agreement itself.
So when we were writing this, the DOJ prosecuted its first case involving algorithms, the Topkins case, and there the co-conspirators were agreeing among themselves to fix the price of posters that were sold on the Amazon marketplace. And because the illegality inheres in the agreement, the humans agree, they go to jail. The second scenario is a little bit more complex. Here you have an agreement, but it's a series of vertical agreements. Basically, you're allowing one entity to use its algorithm to set the price of multiple market participants. And we use Uber as the example. So with Uber, the drivers could conceivably compete with one another over price. But they don't. Uber sets the price. They also determine whether or not there's going to be a surge, where the surge will be, the extent of the surge, and how long the surge lasts. And that in and of itself is not anti-competitive. Because let's say Uber enters into a new market and it has its first driver; there's not necessarily an anti-competitive effect. But what happens as Uber's market power increases? Now it's no longer necessarily responding to market forces. It could start setting the price. In fact, it is determining what the market price is. And here, now, each driver that is added to the platform understands that no one else is going to necessarily undercut them on price. So to what extent does that resemble, then, the hub-and-spoke conspiracies? The third scenario we call tacit collusion on steroids. This one is even trickier. Here, each firm unilaterally decides to adopt a pricing algorithm. But by doing so, they understand that with the speed with which these algorithms can respond to one another, and the increase in market transparency, you can have an anti-competitive outcome, namely tacit collusion. And tacit collusion in the United States is not illegal. But the outcome can be as bad as express collusion.
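To make the third scenario concrete, here is a toy simulation of why algorithmic speed alone can sustain a supra-competitive price. The prices, the cost, and the matching rule are all invented for illustration, not taken from the book: the rival's algorithm matches any discount by the very next period, so a one-period gain from undercutting is wiped out by the ensuing price war, and neither bot ever has an incentive to cut.

```python
# Toy model of tacit collusion between two pricing algorithms.
# All numbers (prices, cost, horizon) are invented for illustration.
HIGH, LOW, COST = 10.0, 9.0, 5.0   # coordinated price, discount price, unit cost

def share(p_mine, p_rival):
    """Buyers go to the cheaper seller; demand splits on a tie."""
    if p_mine < p_rival:
        return 1.0
    if p_mine > p_rival:
        return 0.0
    return 0.5

def profit_a(deviate, rounds=20):
    """Seller A's total profit. B's algorithm matches any
    observed discount starting the very next period."""
    p_a = p_b = HIGH
    total = 0.0
    for t in range(rounds):
        if deviate and t == 0:
            p_a = LOW                   # A tries a one-off discount
        total += share(p_a, p_b) * (p_a - COST)
        if p_b > p_a:                   # B detects the cut immediately...
            p_b = p_a                   # ...and matches it from then on
    return total

print(profit_a(deviate=False))  # 50.0: both hold the high price
print(profit_a(deviate=True))   # 42.0: one period of gains, then a price war
```

Because retaliation is instant, the toy deviator earns less than it would by sitting at the high price, even though no agreement was ever made.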
It's the interdependence. You had a classic case here in Martha's Vineyard, where the First Circuit had to address this claim of price fixing. And what the court said was no, the gas stations in Martha's Vineyard could all elevate price above competitive levels without agreeing with one another. They just see what the others are doing, and then they respond in time. And that by itself is not illegal under the competition law. So with our third scenario, we don't have an anti-competitive agreement among competitors to fix price, but we have evidence of anti-competitive intent. And the question here is, is that enough? Is that by itself enough? And is there a tool that the agency can use to go after that? There might be section five of the FTC Act, but that could be unpopular. And then the last scenario is the most problematic. Here, we don't have evidence of any agreement among the competitors. Each company unilaterally decides to adopt the algorithm. And we don't have any evidence of anti-competitive intent. Instead, you have the confluence of two factors. We call one God view and the other artificial intelligence. God view is a term that we borrow from Uber, because Uber has the ability to see on a screen where all of the drivers are and where all the passengers are. And it gives them a view, then, of the commercial landscape. So one thing we ask is, what happens now with the rise of the internet of things and the like, the increase in data? Can you have such increased transparency in the marketplace that rivals will now know not only where their customers are, but also what their rivals are doing? And they could then respond well before prices even change. They could see a rival seeking to enter a particular market by building warehouses and the like, and they can respond to that threat. And the second component would be artificial intelligence. And here, through machine learning, the computers learn how to arrive at the optimal strategy.
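This kind of trial-and-error learning can be sketched, under invented actions and payoffs, as a simple epsilon-greedy learner: it is never told which action pays best, only shown noisy rewards, yet it finds the best action from repeated play. This is a toy, not a model of any real pricing or poker system.

```python
import random

# Hypothetical actions and their hidden expected payoffs -- the
# learner is never shown this table, only noisy reward draws.
PAYOFFS = {"fold": 0.1, "call": 0.3, "raise": 0.6}

def play(action, rng):
    """One noisy reward draw for an action."""
    return PAYOFFS[action] + rng.gauss(0, 0.1)

def learn(rounds=5000, eps=0.1, seed=0):
    """Epsilon-greedy trial and error: explore occasionally,
    otherwise exploit the current best payoff estimate."""
    rng = random.Random(seed)
    est = {a: 0.0 for a in PAYOFFS}   # running payoff estimates
    n = {a: 0 for a in PAYOFFS}       # times each action was tried
    for _ in range(rounds):
        if rng.random() < eps:
            a = rng.choice(list(PAYOFFS))   # explore
        else:
            a = max(est, key=est.get)       # exploit
        r = play(a, rng)
        n[a] += 1
        est[a] += (r - est[a]) / n[a]       # incremental average
    return max(est, key=est.get)

print(learn())  # the learner settles on the highest-paying action
```

No optimal strategy is ever programmed in; it emerges from repeated play, which is the property that makes the last scenario hard to police.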
And one of the things that fascinated us is that Carnegie Mellon recently came out with this program that defeated some of the world's best poker players at Texas Hold'em. And the remarkable thing about it was that they didn't teach the computer how to play poker itself. I mean, they gave it the basic rules, but they didn't tell it, these are the optimal strategies. What they basically gave the computer was basic game theory. And the computer, from trial and error, playing many, many hands, was able then to derive the optimal strategy. And the human players wouldn't always know why the computer acted that way. They sometimes thought, oh, the computer made a mistake. And later they discovered that no, it was actually they who had made a mistake. And the second thing was, once the players started ganging up together, saying, let's find the weaknesses of the computer and try to exploit them, overnight the computer would identify its own weaknesses and then correct them. So when they tried to exploit it, they lost as well. So this last scenario is causing the most heartburn for competition authorities. It's not there yet, but the executives themselves might not necessarily know why the algorithms are behaving in such a manner. It may be some strategy in order to maximize profits, and the algorithms may be tacitly colluding in ways that the executives don't even know. So that's our first set of anti-competitive scenarios, algorithmic collusion. Then we shift into the other world, which is very different. Algorithmic collusion likely involves homogeneous goods, high entry barriers, and the like. Here, this would be the example that you mentioned of going onto Amazon, whereby companies are devoting a lot of resources to tracking you, to collecting data about you, to identify what your likely reservation price is, how much you're willing to pay.
And also to induce you to buy things that you ordinarily may not have purchased before. And so we love this quote on the internet: it's to get you to buy things you don't need with money you don't have to impress people you don't like. And this is how behavioral discrimination differs from the price discrimination that we all know; anyone here who's paying college tuition is a victim of price discrimination, but we don't complain about it. But this type of price discrimination is different. First off, it moves closer to first-degree price discrimination. And secondly, it's shifting the demand curve to the right and getting you to buy things you ordinarily might not have purchased. And that could be good if that product is under-demanded, like dental services. But it can also be bad if you're exploiting these various biases. So when we presented an earlier draft of this to one of the competition authorities, one of their chief economists said, there are over 100 biases that the behavioral economics literature has identified. So it wouldn't be that hard for a company to identify one to exploit consumers. And the thing here is that you may not necessarily be aware of it. You're thinking you're just an ordinary shopper making an ordinary purchase, and the reality that you're presented is the reality that you accept. But what is presented to you is carefully orchestrated based on all the data that's collected about you: how much money you're making, where you're living, what you're reading, what you're watching, and the like. And the price that you're charged is gonna differ from the price that someone else is gonna be charged. And the products that you're offered can differ from the products others are offered as well. And we also explored the welfare effects of that. The economists, when we were presenting this, were equivocal about price discrimination. Lawyers and judges seem to be more concerned about price discrimination generally.
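A hedged sketch of the mechanics, with invented shoppers and invented reservation-price estimates, shows why moving toward first-degree price discrimination pays: quoting each shopper just under an estimated willingness to pay extracts more revenue than any single posted price could.

```python
# Toy comparison of uniform vs. personalized pricing based on
# estimated reservation prices (e.g. inferred from zip code,
# purchase history, browsing data). All numbers are made up.

# Each shopper's estimated willingness to pay for the same item.
est_reservation = [6.0, 9.0, 12.0]

def revenue_uniform(price):
    """One posted price: a shopper buys only if price <= reservation."""
    return sum(price for r in est_reservation if price <= r)

def revenue_personalized(margin=0.95):
    """Each shopper is quoted just under their own estimated reservation."""
    return sum(r * margin for r in est_reservation)

best_uniform = max(revenue_uniform(p) for p in est_reservation)
print(best_uniform)                    # 18.0: the best single price leaves surplus behind
print(round(revenue_personalized(), 2))  # 25.65: personalized quotes capture more of it
```

The gap between the two totals is the consumer surplus that near-perfect price discrimination would transfer to the seller.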
When we talked about behavioral discrimination, there was a greater concern among all the parties, because the welfare effects are less clear and they can be potentially troubling for us. And this takes us then to the third scenario, which we call frenemies. So we wanted to have a unifying theme. How do we connect the collusion scenarios and the behavioral discrimination scenarios? And we came to ask, well, where does the power lie now in this new economic system? And what we found is that the power lies with the super platforms. The Europeans call it GAFA: Google, Apple, Facebook, Amazon. And one Wall Street analyst said it nicely, that apps are worth millions but platforms are worth billions. And what we find is that the super platforms can have what's called a frenemy relationship with the websites and the apps and the like. And the way to analogize this would be as if the lions were coordinating among themselves to track the gazelle as it goes across the savannah. There, they're friends, because they want to better track you, they want to better identify you. But once the gazelle is killed, then there's competition over who gets the choice cut of the gazelle. And the power, we find, lies here with the super platforms. And one of the things that we bring up is the tale of two apps, in order to see what the incentives are in this ecosystem. So I've got this one app that's offered for free on your phone; you turn it on, it converts your phone into a flashlight. But unbeknownst to you, that app is also tracking your geolocation data and is selling that surreptitiously to advertisers in order to target you with ads. This company then is fined by the FTC because they've engaged in deceptive practices. The other app is Disconnect. And Disconnect was actually co-founded by someone who worked for one of the super platforms. And they said, we're going to allow you to better prevent yourself from being tracked if you don't want to be,
and to control who can track you within this ecosystem. So we asked, which of the two gets kicked off of the Google Play Store? The one that engaged in deceptive behavior, or the one that enables consumers to better protect themselves? The deceptive one is still on; it was Disconnect that got kicked off. And we want to understand why that is. And it was really interesting when we presented this. One person from the industry said, well, this is like inviting an arsonist to our house, because their incentives are counter to our incentives; we want to encourage the tracking of consumers. So the key point here is that these gatekeepers can have a lot of power in controlling the ecosystem and affecting the incentives within that ecosystem. And this is not really the end of the story. What we're now going to move to is the digital personal assistant. So when you have a new baby, one of the things you can now buy, believe it or not, is a digital butler that will sing to that baby. It will talk to the baby. It will learn from the baby, right? It will communicate with the baby. And you will now conceivably have a digital personal assistant that could be with that child from before they're born all the way until they die. And what are the implications, then, of the digital butler? These can be profound, because right now you still have the ability to go onto Google and search; you can search other sites and the like. But the more you rely on these digital personal assistants, the less likely you are to search outside of them. And now they can become the key gatekeeper of your ecosystem. And their power can increase significantly. So the effects here of the digital assistant aren't just economic. It's not likely going to be just the ability to price discriminate better. The digital personal assistants can also affect the news that you watch, the entertainment you receive, the suggestions, and the like. They can really start controlling your worldview.
And there was this one interesting study you may have heard about that Facebook conducted, where they altered the newsfeed of the readers. They wanted to see emotional contagion: if I show more positive stories, what impact does that have on your postings? Or if there are more negative stories, what impact does that have? And what they found was a statistically significant effect on the postings. The more negative stories you receive, the more negative your postings; the more positive stories, the more positive your postings. And just think, that's just one manipulation. How much more manipulation can there be, the more that you rely on these digital butlers? Now, you're going to hear more on this next week. The authors there have come up with a paper about a purer butler, and I think this is really exciting. And one of the questions is, will the market provide an alternative? So let's say you were exploited, and the digital butler is not necessarily aligned with your interests: can you easily switch to another digital assistant? We hope so. But we can't necessarily assume that market forces will deliver a purer butler. One of the reasons is what we identify as data-driven network effects. And network effects aren't necessarily bad. Think of the telephone: the more people that have a telephone, the more utility you get from the telephone. With these data-driven markets, we find other types of network effects. The more people that use, for example, a search engine, the better the quality of the search engine itself. It tapers off at some point. But nonetheless, there are these enormous network effects that can then allow big firms to become bigger, collect even more data, which helps improve the quality of the product and widens the gap. So one potential entry barrier is these data-driven network effects. The other is the leverage of the super platforms, in that you're going to interconnect with one of these digital butlers.
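The data-driven network effect just described can be sketched as a simple feedback loop. All of the coefficients and starting numbers here are invented: quality grows with the log of the user base (so it tapers off, as noted), newcomers flow toward the higher-quality service, and an early lead compounds rather than eroding.

```python
import math

def simulate(periods=30, newcomers=100.0):
    """Two services start nearly even; each period, quality grows
    with the log of the user base (diminishing returns) and the
    newcomers split in proportion to relative quality."""
    users = {"big": 1000.0, "small": 900.0}
    for _ in range(periods):
        quality = {k: math.log1p(v) for k, v in users.items()}
        total = sum(quality.values())
        for k in users:
            users[k] += newcomers * quality[k] / total
    return users

u = simulate()
print(u["big"] - u["small"])  # the gap has widened beyond the initial 100
```

Even with tapering quality gains, the bigger service captures a larger share of every cohort of new users, which is the entry-barrier worry.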
You want something that might be able to coordinate well with your calendar, with your mapping technology, with your driverless car, with all of the smart technology you have in your home. And the super platforms can already offer many of these services, which puts them at a competitive advantage over rivals. So the key takeaway here is that the effects aren't just going to be economic. They can potentially affect our democratic ideals, and they can affect our well-being. So, is the future bleak? No, not necessarily; I just want to keep reminding you that it's not necessarily bleak. We can, through a combination of competition policy, consumer protection, and privacy, still get the benefits of a data-driven economy that ultimately improves our well-being. But we can't assume that this will invariably happen under the current antitrust policies. And one of the things is, there's an interesting conference going on at the University of Chicago yesterday, today, and tomorrow. They're looking at: is there a concentration problem in America? And this is sort of ironic, because the University of Chicago helped bring about a change in antitrust policy in the 1980s. And what we've had over the past 35 years is what's called antitrust lite: the belief that markets normally self-correct, that we shouldn't really be as concerned about vertical restraints, nor should we really be as concerned about monopolies, and the like. And what we've seen just in the past year is a lot of economic literature coming out, from the Council of Economic Advisers, from the Obama White House, showing that there are warning signs, there are red flags, that our competitive market isn't necessarily working the way it should be. What we're seeing is increasing concentration levels. And profits are accruing to a handful of firms. So there are fewer firms, greater profits, less mobility among workers, fewer startups, and the like.
And the other concern is that as we shift from the brick-and-mortar to the data-driven economy, our current antitrust tools won't necessarily solve the problem. There are various problems with our current tools: they're very much price-centric, and many of these markets are multi-sided, where the products and services are offered for free. They defy the antitrust paradigm, not always, but in several different ways. So we can't ignore the issue. So what are we gonna do about that? We love this quote by Barry Nalebuff, a game theorist at Yale. He says, when the masses get mad enough, perhaps they'll elect a new trust-busting Teddy Roosevelt for the digital era. And this was before the elections. He was reviewing our book for Science magazine. He was like, yeah, this would be great. And now we have Trump. And the question is, what's the Trump administration going to do? And here I do have some good news. One of the goals of our book was to alert competition authorities that there may be competitive problems, that there may be this facade of competition: on the surface, it looks really promising, but underneath there are real problems. And in just the past few months you've had, particularly in Europe, senior policy makers who are now engaged in the subject. They really are interested in this. We presented this book to both the US and EU competition authorities. Terrell McSweeny of the FTC is very much engaged in this issue. So that's really promising. But it's going to take more. And I think one of the things that's going to be on you is to hold the politicians as well as the antitrust officials accountable: what exactly are you doing to address these potential risks? So I leave you then with the following food for thought. To what extent does the invisible hand still hold sway? We've got now a digitized hand. You look at Uber's algorithm: it can determine the market-clearing price. Now, does that open the possibility for smart regulation?
I mean, that's one possibility. And to what extent is there going to be, anymore, a competitive price? So when you go onto Amazon and you say, this doesn't seem right, what is going to be your framework for comparing that price to what might be the competitive benchmark price? All of you are going to have your own unique price. You can have your own unique experience, and then the behavioral discrimination. And then this goes to the digital assistants: if we have these super platforms, and if these network effects lead to one or two personal digital assistants, what are the impacts going to be on our privacy? There was an interesting case in Arkansas where the police tried to subpoena the information from Amazon's digital assistant. So what does that mean for surveillance by the government? What does that also mean about private surveillance of your behavior? And to what extent does that impact our democratic ideals and our well-being? So I'm very interested in hearing your thoughts. You are the future. So let me open it up for question and answer at this point.

Really interesting talk, and a knotty set of problems. So I was wondering what sort of concrete changes to antitrust policy would you recommend given this? What should we, you know, assuming you could persuade the FTC to do what you wanted to do, what would that be?

All right. First off, there isn't going to be a silver bullet, because some of the problems the current tools may just not reach, like tacit collusion. There it's going to be very hard to say that the agencies are going to, you know, penalize the firms just because they're acting interdependently. So one of the things that we're thinking about is, rather than doing ex-post enforcement, are there necessary conditions that could help promote privacy competition? So the UK's Competition and Markets Authority is looking at privacy competition as the canary in the coal mine.
And to the extent that you're not necessarily getting the privacy benefits, that might signal some sort of market failure. And then it might not be an antitrust solution, but it might be greater coordination among the privacy, consumer protection, and antitrust officials to put mechanisms in the marketplace in order to increase competition on that dimension. So that would be one aspect. Second would be, for the super platforms, to be tougher on section two enforcement. We were talking about that beforehand. The DOJ brought one case in the last 16 years under section two. Just to give you an idea, they brought more cases under the Migratory Bird Treaty Act in one year than they have, I believe the statistic was, in the last 20 years under section two. So the Europeans have basically been leading on this one. I think the US authorities need to step up on that as well. And then for behavioral discrimination, I think it's greater control over your data, and your ability to be tracked, and your ability to opt out of being tracked, and the like. And we have that in terms of COPPA for our children; there are much greater privacy protections for children than there are for adults. We don't necessarily need to extend all of that, but there might be some measures that we can take so we can avoid being tracked. Yes.

You said earlier that digital assistants would influence people in ways that we may not want. But how does that differ from more traditional ways? Like a local newspaper: if there's only one newspaper and you have to read that one. Or the radio, for example, if you always tune into the same station. How is the digital assistant different?

When I was at DOJ, that was one of the things that we looked at: press monopolies. And that would be a concern, right? That in a less robust marketplace of ideas, you could have censorship.
And the problem is that you may not necessarily identify it, because you don't know about the stories that the newspaper could have reported but chose not to. But that's only going to be one parameter of it, because the newspaper has to have some sense of its readership as a whole. Here, you're going to have a highly personalized environment where you're going to be given articles that the super platform thinks you might enjoy. And so here your worldview can really change significantly. With, say, the New York Times, if you have a broad readership base, the newspaper has to find stories that somehow won't alienate one set of readers versus another. But under this scenario, they can be highly targeted and they can create a sort of worldview. So if they know that you're susceptible, let's say, to fake news of a particular stripe, right, then they can provide you that stream of news. So the effects can be much more pernicious.

Just a quick question for you. Just to be clear, I'm a student here coming from a tech company. Our worldview right now is shaped by the construct of this being an anomalous situation, what's going on with, like, an Amazon Alexa kind of scenario. I mean, in big data and competition, you talk about data, and the CAGR as we look out over five, ten years, with data doubling every year. This fourth industrial revolution brings us to a very different kind of place. And we're looking at antitrust policy in a way that fits the construct of our worldview today, versus what it will be in 10 or 15 years. I guess the heart of my question is, how do we influence policy around this? I mean, this is a major change that's coming. Look at Deming's research out of the education program. It's an overwhelming change to how we work, how we interact with society. How do we change our worldview?
How do we change DOJ viewpoints, given the major change that is coming: the role of data, the role of algorithms that, like the Carnegie Mellon experiment, can handle imperfect information? I mean, that's gonna permeate into negotiation. It's gonna permeate into all aspects of society. How do we change our worldview based on that?

That's what we would hear, and invariably these were researchers that were supported by Google. And they were very superficial arguments: that because the service is free, consumers are necessarily better off; that data is like sunshine, it can't form an entry barrier. So the first level is to really analyze the myths, to deconstruct the myths and ask, are the myths real? And there might be some element of truth in them for some markets, but they're not universal truths. So that's one way to explain to policymakers that the issue is far more complex than these ten simple myths. The second then would be to say, okay, going forward, what are some of the potential risks? And to start identifying the risks and the like, and then to start looking at the economic literature that helps support those risks. And already you're starting to see some economic studies. There are two studies of these gasoline price apps. The belief was that by having the gasoline apps, it would be easier for you to find where gas is cheaper, and that should mean lower prices for you. But consistent with our algorithmic collusion scenario, it actually had the opposite effect. It actually raised prices, because now the algorithms can see exactly what each rival is charging and can quickly respond to any undercutting, and the like. So a second component, then, is to do more economic testing.
And I think for you, this is a really exciting area, where you can combine competition policy, you can combine game theory, and you can start creating these pricing algorithms and start testing them yourselves, and ask under what conditions, for example, coordination arises. If we want to create, let's say, an algorithmic collusion incubator, what conditions can help create that collusion? And what conditions can help destabilize that collusion? So you can start creating these programs yourselves and start testing them, or take some of the pricing algorithms in the market and start testing them, to see when they're likely to be pro- or anti-competitive. So that's the second component. And then the third is to keep an eye on the big picture. The big picture right now is: what are the macroeconomic trends? Is our competition policy working for us? And to start using your resources to question, how are we benefiting from this competition policy? We've got growing wealth inequality. We've got higher concentration. We have greater profits accruing to fewer firms. And I think what you'll find is that you're gonna have greater interest on both the right and the left in those issues.

Hi. So I guess my question is, you keep on referring to a thing. What exactly, or who exactly, are you referring to? Are you referring to algorithms? Are you referring to the data companies? And then specifically, what kind of legal mechanisms, after we do, say, this incubator kind of experiment, would we be able to implement to adequately control competition?

All right, on the second question: it would be helpful to see what type of information flow helps facilitate this tacit collusion. And if it's the sharing of information that consumers may not particularly value, but that could be very helpful in sustaining the algorithmic collusion, then competition officials can target that. We can limit the sharing of that information.
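As a sketch of what such an incubator experiment might look like, with entirely made-up pricing rules: match-the-lowest bots sustain a high price on their own, while adding a single maverick that keeps undercutting drags the market price down to its floor. This is the kind of condition you could vary and test.

```python
# Toy "algorithmic collusion incubator": invented prices and rules.
HIGH, FLOOR, STEP = 10.0, 6.0, 0.5

def final_price(n_matchers, with_maverick, rounds=50):
    """Matchers copy last period's lowest price and never undercut;
    an optional maverick undercuts the low price by STEP, down to FLOOR."""
    prices = [HIGH] * n_matchers + ([HIGH] if with_maverick else [])
    for _ in range(rounds):
        low = min(prices)
        prices = [low] * n_matchers              # matchers follow, never lead down
        if with_maverick:
            prices.append(max(FLOOR, low - STEP))  # maverick keeps discounting
    return min(prices)

print(final_price(3, with_maverick=False))  # 10.0: matching alone sustains the high price
print(final_price(3, with_maverick=True))   # 6.0: one maverick destabilizes it
```

Swapping in different response rules, detection lags, or numbers of firms is exactly the kind of condition-by-condition testing the incubator idea calls for.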
If there are other factors that help foster this algorithmic collusion, such as the transparency of prices that customers rely on as well as competitors, then you don't have a ready solution for the competition official. So you might then think of other policies, maybe outside the competition authorities, to help destabilize tacit collusion. These might be ways to foster entry by mavericks and the like. So I think there you have to have a broader framework of what creates the conditions for this algorithmic collusion and what factors can help destabilize that collusion. Now, when I was referring to "the thing," I was probably unclear, because there were many things throughout my presentation. So one important thing is the competition officials. They're there to protect you. And there's this great article by Richard Hofstadter, "What Happened to the Antitrust Movement?" Antitrust originally was a movement without any cases. Then, by the 1960s, you had cases but no antitrust movement. Now we don't have antitrust cases, except for cartels; we have few merger challenges; and we don't really have any antitrust movement. But this is remarkable, because this is the first time in a long while that you have populists like Elizabeth Warren speaking out about increased antitrust enforcement. You had increased antitrust enforcement as part of the Democratic platform. And you have a populist, Trump, who has mentioned antitrust. So now the question is to start asking: has what we've been doing worked? Where has it been working? And to start putting pressure on the competition officials. And then the second route would be on the super platforms. To understand: why is it taking so long for me to get an ad-blocking measure on my phone, when that technology has existed for so long? And what are my alternatives? It's gonna be hard to circumvent them, but maybe, first of all, to recognize the risks.
If you take one of these digital personal assistants, what are the likely risks? And then, are there other ways that you can do it without undermining the promise of the system for consumers?

This question is about timeframe. Specifically, the things you're discussing seem like they're in the present and the immediate future. And I'm wondering how far out your work is projecting or considering, and whether it's relevant to think, in the longer-term future, about the social or economic or cultural effects of these systems.

Yeah, so let me just go through them. Right now, for the collusion scenarios, we are already in the messenger scenario. The Topkins case is one; the European Commission is investigating another. The hub and spoke, they're starting to look at that. And one of the things is that companies are starting to outsource their pricing to companies like Boomerang, so you have multiple competitors that are using the same pricing algorithm. This is likely going to become an issue. Uber is already in federal court over an alleged hub-and-spoke conspiracy, and the claims survived a motion to dismiss. This is likely to become a greater issue as companies migrate online and start outsourcing their pricing to algorithms. We're starting to see this as well in the gasoline industry as companies migrate to pricing algorithms; I'd say in the next three to four years, this is going to become a greater issue. The next scenario is longer term. That would be once we get the internet of things and the data is really being collected, so you can have a better picture of the market, and as the artificial intelligence improves. I would say in five to ten years, this is likely going to be on the horizon. For behavioral discrimination, you already have behavioral discrimination, you already have dynamic pricing. Now the question is, why isn't there more of it? It's because there's an unfairness concern. Customers are upset.
But to the extent that we start accepting dynamic pricing, where we're told that the price is different because supply and demand change minute by minute, then it's going to be easier to also price discriminate, because there's no longer a fixed price, and you don't know: is it a dynamic price or is it a discriminatory price? So this is already happening, and I expect it to grow in the next five years. The digital personal assistant: already right now, with the super-platforms, the European Commission is investigating Google for various offenses. I think there's been one statement of objections. There hasn't been any final adjudication, so the jury is still out. But this is likely to become an issue. The digital butler, I think, is just taking off. We just recently updated the paper that we're doing for the OECD, and it's just amazing how many apps are being added now for these digital assistants. Just look at the ads you now see on TV, or go to your home improvement centers. So I would say in the next five or six years, it will be interesting to see whether one of the four super-platforms is going to become our primary butler, and what impact that will likely have. You're already starting to have that with the fake news: is that an antitrust problem?

I guess what I'm wondering, maybe suggesting, is whether there's any utility in thinking out even farther into the future, like 50 years, with speculative design fictions about what may emerge. Because things are moving so quickly, we could start putting in place laws or policies now in anticipation of that future. Is that something your work covers? I'm just thinking about path dependencies, and I wonder, because there can be so many different routes.
I can't project what's going to happen in 50 years, but I think one thing is, you want to keep the competitive portals open so you can allow innovations that can promote your interests. Doing that is the best insurance policy for the next 10 or 15 years.

All right, I'll go. This is maybe not at the core of the presentation, and thanks so much, it's really interesting. But in your work on these two books, have you worked at all together, or talked a lot, with the industry, the major platforms? What have they been saying about your work? And second, just kind of anecdotal: after doing all of this, have you changed your own personal behavior in any way, in terms of what you purchase on Amazon and whatnot?

We're the worst with Amazon, because Ariel and I are both Prime users. I was with my mother; she needed to get a battery for her garage. And I could have looked in any of the stores in the neighborhood, but I said, let me just go on Amazon. I found exactly the battery, had it sent to her, and the like. And we're Amazon Prime members as well. But yes, I've become much more sensitive to my privacy. One of the things that really irked me is when Uber changed its privacy notifications so that it can continue to track you even when you're not using the app. And I thought, why do they need to track me when I'm not using it? So I turned that off. That's always problematic, because now when I do want to use it I have to go turn on my location. The response from the industry is interesting; it's mixed. Particularly when we get to the frenemy scenario, people like Uber are starting to get concerned. When I spoke with the folks at Uber, they were concerned about the taxi commissions, right? They said, those are the problem; they're keeping us out of the market. And I said, well, what about Google?
And they said, no, Google is an investor in our company. There's a Google member who sits on our board. But now you can see it: Google controlling the super-platform, controlling the phone. It has the mapping technology on which Uber relies. And it's also now going into driverless cars. So you can then see how a frenemy relationship might arise. And what you've seen from Uber over the last year or two is that they've hired a lot of professors from Carnegie Mellon to step up their driverless cars, and they're also investing in mapping technology. So one of the things that we're hearing, the concerns, are from the ones that are reliant on these platforms. And they're coming to us and saying, we are dependent on them; they control our oxygen supply. With Google, it's an evolving debate. We go to conferences and we present our book. I think now they're starting to move the goalposts. They're no longer necessarily wedded to the ten myths. Now they're raising questions about, well, one of the things as well: privacy can be a non-price component of competition, but that's going to be rare, and the like. But the other thing is that we're not anti-Google either. Google does a lot of wonderful things in terms of internet connection with its fiber, the technologies that they provide. But the issue is, I remember way back when, I asked Hal Varian, their chief economist, about this. Antitrust used to be a mechanism that would protect both the strong and the weak. And the strong realized that if things ever changed, they would have the protection of the antitrust laws as well. And so there was this belief that all were bound by these rules, and the like. And I asked, wouldn't that matter if I were Google? Because now, if you're going into other markets, there might be anti-competitive factors weighing against you. And I don't know.
I mean, I'm hoping that, like Microsoft, which sort of changed its tune after the DOJ case, that possibility might happen with them as well. Because they're not really the villain here; they are producing great technologies. But it's not necessarily always our interests that are their primary concern.

Thank you very much for this wonderful talk. I wanted to ask a question about the normative premise informing your reasoning, which is that we should protect competition. My question is, why do you think we should do that? Because, setting the democratic question aside, I think there are two reasons we would go for it. It's either efficiency and well-being, that is, a more efficient economy and more benefit for big ones and small ones, for consumers and companies. Or it's the deontological reasoning, in which things like freedom, choice, and autonomy are intrinsically valuable. And the second half of the 20th century, the Soviet-American competition, has shown that good competition, a properly functioning market, somehow reaches both goals. But think more into the future, as has already begun over the last ten years: you mentioned the cyber butler, which is about automating our consumer choices. But we can also think one, two, three decades ahead, where what we do in our professional life will be automated as well. With all this data, when we know who is good at doing what, who is where, what the societal needs are, we can imagine that we are getting to a world in which central planning will be possible. And so it is possible that at some point we'll face a choice between a more efficient economy where we have no freedom, or a world in which we still have choice and autonomy, but a less efficient outcome. I'm wondering what you think about that.

No, it's a great question. And so we went back to Hayek on this one, right? And we actually have a chapter on this, because we were thinking about this.
Uber doesn't own the cars, and doesn't, it claims, employ the drivers, right? And yet it can determine the market-clearing price for its services. And one of the points Hayek makes is about the dissemination of information: the market serves the function of aggregating that information in the price, and that leads to efficiency, right? There's no central planner, because the central planner will never be able to identify all the relevant information in the marketplace and effectively determine the right price. And one of the things that we're thinking about is, well, if Uber can set the price for cars, why can't the government do that through smart regulation? Can we develop algorithms that can take all the market data and determine the market-clearing price? Now, there are problems with that, because you have capture issues, regulatory capture and the like. But one of the things that you're starting to see is that municipalities are now doing this as well, with parking metering in San Francisco. They have sensors; they can see where cars are being parked and where they're not being parked, and they can price differently: when there's excess supply, they can lower the price as a result. So I think one component is that there may be an opening for smart regulation. The other component is: what does competition mean in this future market? And I don't think it's necessarily going to mean efficiency, because I think in any democracy you have a certain amount of inefficiency. So there are these trade-offs, and the economists may not necessarily know how we're going to make these trade-offs. One trade-off here is that by engaging in this behavior, they can lower the search costs for advertisers to contact you. That could lower the costs for advertisers, and that could yield efficiencies. But on the other hand, you have the privacy concerns of consumers being tracked.
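The demand-responsive parking pricing just described can be sketched in a few lines. This is a simplified illustration, not the actual San Francisco rule: the occupancy target, step size, and rate band are all assumed numbers.

```python
# Demand-responsive parking pricing, loosely in the spirit of the San
# Francisco program mentioned above (a hypothetical sketch, not the real
# rule): raise the hourly rate when occupancy is high, lower it when
# spaces sit empty, and keep the rate within a fixed band.

def adjust_rate(rate, occupancy, target=0.8, step=0.25,
                floor=0.25, ceiling=6.0):
    """Nudge the hourly rate toward the assumed occupancy target."""
    if occupancy > target:
        rate += step              # scarce spaces: price goes up
    elif occupancy < target - 0.2:
        rate -= step              # excess supply: price comes down
    return max(floor, min(rate, ceiling))

# Example: sensors report falling occupancy, and the rate follows.
rate = 2.0
for occ in [0.95, 0.9, 0.85, 0.7, 0.5]:
    rate = adjust_rate(rate, occ)
    print(f"occupancy {occ:.0%} -> rate ${rate:.2f}/hr")
```

The same feedback loop is what a "smart regulation" algorithm would run at larger scale: observed supply and demand in, administered market-clearing price out, with the capture risks the speaker notes.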
And how do you then weigh those? I think those are invariably going to be policy issues, not ones where we can necessarily measure the costs and the benefits. But ultimately, what do we want, and how do we decentralize economic power to prevent some of these issues?

We have time for one more question.

All right. Thanks so much for your talk. I have a question about price discrimination. You mentioned that it can be either good or bad, and I was wondering if you could say a bit more about that assessment. Because, as I imagine it, on the whole it increases total output, but it does lead to the seller capturing more of the consumer surplus. But on the other hand, there's also perhaps a kind of redistributive element to it, to the extent that most likely the people with the most money are going to be paying the most, right? And then how does that tie in, as you mentioned, to the public perception of these practices? Because you say that people think it's unfair. Should we? And when does that change? How do we assess that?

Right. So I'm a parent; I have a daughter who's a freshman in college, and I'm a victim of perfect price discrimination, because they know exactly how much we're willing to pay, right? And I'm okay with it. And I think the reason most of us are okay with it is because it yields a greater goal. First, in that setting, price discrimination creates opportunities for people who otherwise could not purchase the product. And secondly, even the people who pay more benefit, because the educational environment is enriched, right? It would be very different here at Harvard if you only had people who could afford to pay the full price, or even above the full price. So you're benefiting as a result of this educational enrichment, and society is benefiting as well.
Now, there are these greater social goals that come from it, and the quality of the product itself is enhanced by the price discrimination. But with the behavioral discrimination that we identify, first, the quality of the product may not be improved. Second, it's inducing you to buy things that you ordinarily might not have purchased, and that can be in some ways wasteful, like finding ways to get you to smoke, right? And the third thing is that it's not necessarily the case that the rich are going to be soaked and the poor are going to pay less; it could be the opposite. What we found from some of the online behavioral discrimination is that the poor, who don't have an outside option, are actually charged a higher price, as with the Office Depot and Staples examples and the like: when you live in a poorer neighborhood, you pay a higher price than if you live in a wealthier neighborhood. And the other thing to take into account is, what are the costs of implementing the price discrimination? Here, they have to collect data about you, and they have to basically prevent you from exercising your privacy options to have greater control of your data. Once you factor in those costs, the assessment can change. So we're not necessarily condemning price discrimination overall, but we are showing that behavioral discrimination can be distinguishable from ordinary price discrimination.

Thank you so much, Professor Stucke. It was great work. Thank you.
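The surplus arithmetic raised in this final exchange (output rises, but the seller captures the consumer surplus) can be made concrete with a toy example. The buyers and their willingness-to-pay figures below are hypothetical, and production cost is assumed to be zero for simplicity.

```python
# Toy surplus comparison: uniform pricing vs. perfect price discrimination.
# Five buyers with known willingness to pay (hypothetical numbers), zero cost.

WTP = [10, 8, 6, 4, 2]   # each buyer's maximum willingness to pay

def uniform_best(wtps):
    """Seller picks the single price that maximizes revenue."""
    best = max(wtps, key=lambda p: p * sum(w >= p for w in wtps))
    sold = [w for w in wtps if w >= best]
    revenue = best * len(sold)
    consumer_surplus = sum(w - best for w in sold)
    return best, len(sold), revenue, consumer_surplus

def perfect_discrimination(wtps):
    """Each buyer is charged exactly their willingness to pay."""
    # Everyone buys, the seller takes the entire surplus, buyers keep none.
    return len(wtps), sum(wtps), 0   # units sold, revenue, consumer surplus

print(uniform_best(WTP))            # price 6: 3 units sold, consumer surplus 6
print(perfect_discrimination(WTP))  # 5 units sold, zero consumer surplus
```

This matches the questioner's intuition: discrimination serves two buyers who were priced out under the uniform price (the "opportunity" point in the college example), while transferring all consumer surplus to the seller, which is why the distributional and behavioral effects, not output alone, drive the assessment.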