Welcome to Episode 7 of law.mit.edu's Idea Flow, an opportunity to go deeper into the various facets of the emerging field of computational law. Today we're going to learn about, and maybe even in our own particular way help to brainstorm about, future directions for this new field known as computational antitrust. And who better to lead us in this discussion than Thibault Schrepel, who I think it's fair to say is the recognized global heavyweight thought leader in the area, and who has organized the very successful, recently launched Stanford Law School Computational Antitrust initiative, which I hope you'll tell us a little more about, especially the impressive list of governmental regulatory agencies and other experts and stakeholders who have jumped right in, in a very public way, for this initiative. Thibault is an associate professor of law at VU Amsterdam and also a faculty affiliate at Stanford, among many other affiliations. He and I know each other, I would say, through legal hacking circles and adventures, and I couldn't be more thrilled to have you join the MIT computational community today, and I see you've brought some of your posse with you as well, to help us understand this idea and co-think it with you. I also want to signal that our own Camila from Brazil is going to have a more formal role today as what we call a discussant, because in her legal practice she has quite a good mix of experience with antitrust and competition law, as well as with computational law and technology law in general. And of course, she is the founder of São Paulo Legal Hackers. With that, I'd like to hand the baton to you, Thibault, and with a warm welcome, I invite you to start by making a few remarks to further introduce yourself, if you like, and talk about the project, but especially to introduce us all to this idea.
What is this idea, and what are the problems and prospects for the very idea of computational antitrust? Of course. Well, first of all, I'm delighted to be among friends today. I like very much the word adventures; I think it characterizes exactly the context of our meetings. Every time we have a discussion, it's always an adventure and we learn. I mean, I learn a lot from you, so I'm very much looking forward to our conversation today. The thing I wanted to say also is that I've been listening to the previous episodes, and so it feels a bit like being, you know, trapped in the box today, because I'm on the other side of the screen, although I'm not too sure that means anything nowadays in the digital era, but that's a different topic for a different day. I'm here to talk about computational antitrust and, most of all, to hear from you, to gather your thoughts, and to see if we can make some progress together in just a few minutes, but I'm sure we can. So what we thought we would do is that first I'll give you a ten-minute-ish presentation about what I think computational antitrust is, but feel free to disagree with me. You'll see, I'll be asking you a question right after the start, and then we can keep the conversation going. For that, I do need to share my screen, and hopefully you can see it in full screen; I will use a presentation that I hope you can see right now. As you see, what I've decided to do is to open two different tracks, and depending on the answer you give to the first question I ask, I will take one track over the other. So this is, I hope, a very interactive presentation. But before we get started, I need to ask you the question, which is the following: should we fight fire with fire?
And here, in the context of computational antitrust, what I mean is: should we use computational tools, such as AI, and I'm about to be a bit more specific than that, to tackle, for instance, the tech giants? To answer the question, you can take a picture of the QR code that appears on the screen, or you can go to slido.com, where the front page will ask you for the event code, and I've chosen the only event code I could choose today. If you just enter that, you will get to answer the question. I'm not going to wait for you all to answer before I speak; I'm just leaving that on the screen for a minute or two, hopefully enough time to take a picture of the QR code or to enter the event code on slido.com. In the meantime, what I want to mention is that the idea of computational law, the idea of being able to compute the law, is not entirely new, of course. What is new is the pace of technology and all the technologies we are about to discuss. Going a bit back in time, you may be interested in the work of a Japanese scholar who argued, and I have to confess I pretty much share the same view, that it will be possible, with the right tools, to calculate legal outcomes. Now, I don't think we have all the right tools to calculate all legal outcomes, but potentially one day we will reach that point. This is pretty much a philosophical belief, so I'm not going to insist on it today. Something closer to the field of antitrust is Richard Posner, who, in an article published already 20 years ago, noticed that the mismatch between the pace of the law and the pace of markets is troubling, to say the least, because indeed one goes much faster than the other. Of course, there is a question of whether we really want lawyers to go faster than markets. Not necessarily, but potentially at the same speed. That might be something we want to work on.
So that is the context of my talk today. To be a bit more aligned with what's happening on today's markets: if you look at some empirical work, you see that we detect roughly between 13 and 17% of antitrust infringements, which you could say is not too bad, but it's also not too good. It means that more than 80% of the time, if you infringe antitrust law, you'll be fine: your practice will go undetected. So of course this is not optimal. And if you look at what's happening and what's coming, unfortunately there are reasons to be pessimistic, though hopefully today is a space for an optimistic, or at least realistic, conversation. What we see, according to the OECD, is that competition agencies are reactive, meaning they wait for companies to come to them and complain, when in fact they should potentially be more proactive, which is to go out there on the markets and detect practices on their own. There is the question of the means we give to those agencies: the two agencies in the US may get a budget increase, but I don't believe this is on the map for the European Commission or other agencies around the world, so we do have a problem here, the problem of detection. As I mentioned, it's unfortunately not about to get better. Please don't take the exact numbers that appear on the screen too literally; those are predictions, so they are certainly wrong, but the idea is very important. In 2020, and this one you can trust, we produced all over the world 44 zettabytes of data. I had no idea what a zettabyte was, so I checked: a zettabyte equals 1 trillion gigabytes, which is a lot; that is already 40 times more bytes than there are stars in the universe, just to give you a sense of what we are talking about here. But in five years we'll be producing 175 zettabytes of data, then 600, and over 2,000 just 15 years from now.
And if you don't have the right tools as an agency to be proactive, then your job will become increasingly complex, and that is not great news, because we want agencies to be able to detect more practices. So that's issue number one. Issue number two comes after you have detected a potential practice: what do you do? You have the issue of the quantity of data you need to analyze. To give you some numbers: in the Google Shopping case, almost 2 billion, I mean 1.7 billion, search queries were analyzed by the European Commission. The CMA accessed Google and Bing search queries for just one week and ended up with between three and four billion data entries to analyze. You can have lots of interns, but it won't do the trick. Beyond the quantity, the nature of the data is also something we may want to question. What do we analyze? Do we focus on static competition and just prices, something we more or less know how to analyze, or do we want to take into account business models, open the firm's black box with bits of computer science, behavioral insights, and so on? Those are a few of the questions we have to discuss today, and a few of the challenges, put a bit proactively. I would say we have two options. We can give up; if we give up, we could say, well, you know what, the market will take care of it, which is true most of the time, but sometimes the market needs 40 or 50 years to take care of something, so what do we do in the meantime? That's a good question, I think. Another option is to say, well, we regulate everything ex ante, which is what's coming in Europe and in the US. But even if you do that, you still end up with potential infringements of the ex ante regulation, and therefore the need to detect and analyze the infringement comes back, so you haven't solved anything just by working on such regulation. So if you don't give up, you can work on computational antitrust, and that's the subject of my talk today.
So now, looking at the answers you gave me, and I'm putting them on the screen to show you I'm not lying and making up the results myself: 75% of you answered yes, it's a good idea to fight fire with fire, and therefore computational antitrust is a good idea, or at least an idea we should explore. So, going back to my presentation, I'm going to explore the yes track. First of all, thank you very much. As you can guess, I was a bit biased, and I hoped you would answer that it is indeed a good idea to explore computational antitrust. Two things I do want to mention, though. It might be that you answered that to please me; that's something we see a lot in behavioral science, so I'm not going to take it for granted that you think computational antitrust is a good idea, but I'm going to try my best to convince you even more. And we do have 25% of you thinking it is potentially not a good idea. So the question becomes: if indeed we should fight fire with fire, which type of fire do we need? Or, to put it differently, if antitrust without computer science is like Sherlock without Watson, the question is: which Watsons do we need? Here what I briefly want to do is mention three fields of competition law and policy, or antitrust. The first is anti-competitive practices.
The second is merger control, and then antitrust policy. To be very brief, I mentioned already that we have a detection issue here. Among the computational tools, which I will define as computer-based problem-solving methods, that we could be using, one is to develop APIs, and I believe this is something we may be discussing, including with Dazza; we have juggled with the idea for quite some time that agencies and companies would create flows of data to be able to better understand the markets. Something else that could work, and is actually already used by some agencies, and I'm not sure if I can disclose the names, is natural language processing, or understanding. You could apply it to the documentation every company has to publish, and there is quite a lot of this documentation available as we speak: companies have to send information to financial agencies, to data protection agencies, and so on. And we know, including from the work of Nicolas Petit, that using this financial information you can actually detect potential antitrust infringements. So if you were able to analyze lots of those documents using natural language processing, you could potentially detect some infringements. Something else I mentioned is the ability to compare millions of documents, data entries, and so on, which ties in with what I just discussed. A space where every antitrust agency could start is with its own case law, because indeed every antitrust agency has access to at least its own case law. On that subject, Bill Kovacic, the former chair of the FTC, wrote a paper in which he argued that it would be a good idea to sometimes look at what happened in the past, try to better understand the practices of competition agencies, and analyze whether the decisions were the right call or whether we potentially made a mistake.
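To make the document-screening idea concrete, here is a toy sketch of the kind of natural language processing screen described above. Everything in it is invented for illustration: the risk lexicon, the weights, the threshold, and the filings are all hypothetical, and a real screen would use trained language models over actual regulatory filings rather than a hand-picked phrase list.

```python
import re

# Hypothetical risk lexicon: phrases sometimes associated in the literature
# with potentially problematic conduct. Terms and weights are illustrative,
# not an empirically validated screen.
RISK_TERMS = {
    "exclusive dealing": 3.0,
    "most favored nation": 3.0,
    "below cost": 2.0,
    "price match": 1.5,
    "market share": 1.0,
}

def risk_score(document: str) -> float:
    """Score a filing by weighted occurrences of risk-lexicon phrases."""
    text = document.lower()
    return sum(weight * len(re.findall(re.escape(term), text))
               for term, weight in RISK_TERMS.items())

def flag_filings(filings: dict, threshold: float = 3.0) -> list:
    """Return the names of filings whose score meets the threshold."""
    return [name for name, doc in filings.items()
            if risk_score(doc) >= threshold]

filings = {
    "acme_10k": "We grew market share via exclusive dealing agreements "
                "and most favored nation clauses with suppliers.",
    "beta_10k": "Revenue grew 4% on strong demand for our new product line.",
}
print(flag_filings(filings))  # ['acme_10k']
```

The point is only the shape of the pipeline: many public filings in, a short ranked list of documents worth a human analyst's attention out.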
And so, on that idea, we published within the Computational Antitrust project a paper by two researchers from Europe, in which they trained machine learning algorithms, both supervised and unsupervised, on top of the FTC case law, especially in the pharma sector. What they were able to do is detect patterns of anti-competitive practices, and to see that, generally speaking, when companies implement one type of anti-competitive practice, they also implement another type, which was invisible to the naked eye. So that's idea number one. Idea number two: merger control. Here we don't have the detection issue; most companies will notify the merger. But we do have the time factor, and also the fact that agencies have to deal with lots of data and have no choice but to take a decision, most of the time, within 90 days. Agencies complain. Those are two screenshots I took on the European Commission's website: they complain because they say they don't have all the data, and companies send misleading information or incomplete databases. A solution could be to force companies to register some information onto a private blockchain and to ask for access to that private blockchain, knowing that if the company wants to get rid of some information before notifying a merger, this will appear on the blockchain, and at least the agency could ask: why did you get rid of half of the database? So that's another solution we may want to discuss. And here I do have to mention, and I'm putting on the screen, several MIT researchers, including some of the leading researchers at MIT, who published a paper for the Stanford project in which they argue that we may want to analyze the technology itself when it comes to merger control, instead of simply relying on thresholds, the size of the competitors, and so on. So that's another idea I wanted to put on the table.
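The private-blockchain idea above can be illustrated with a minimal hash-chain sketch. This is a single-process toy, not a real blockchain (no consensus, no multiple validators), and the dataset names and contents are hypothetical; it only shows why half a database vanishing between registrations would leave a visible trace.

```python
import hashlib
import json

class EvidenceLedger:
    """Toy append-only ledger: each entry commits to a dataset snapshot by
    hash, and each block chains to the previous one, so deleting or
    rewriting history is detectable on verification."""

    def __init__(self):
        self.blocks = []  # list of (payload, block_hash)

    def register(self, dataset_name: str, dataset_bytes: bytes) -> str:
        """Record a snapshot of a dataset; returns the new block hash."""
        prev = self.blocks[-1][1] if self.blocks else "genesis"
        payload = {
            "name": dataset_name,
            "data_hash": hashlib.sha256(dataset_bytes).hexdigest(),
            "prev": prev,
        }
        block_hash = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.blocks.append((payload, block_hash))
        return block_hash

    def verify_chain(self) -> bool:
        """Recompute every link; any tampering breaks the chain."""
        prev = "genesis"
        for payload, block_hash in self.blocks:
            if payload["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()).hexdigest()
            if recomputed != block_hash:
                return False
            prev = block_hash
        return True

ledger = EvidenceLedger()
ledger.register("customer_db_q1", b"10000 rows ...")
ledger.register("customer_db_q2", b"4800 rows ...")  # rows vanished
print(ledger.verify_chain())  # True
```

Note that the shrinking database does not break the chain; rather, the sequence of registered hashes is exactly what lets the agency ask the "why did half the database disappear?" question with evidence in hand.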
The final one is policy. Here we could be discussing how to do retrospective analysis, or how to predict the future. But something I just want to show you is the use of agent-based modeling. Not that I have the time to explain agent-based modeling, but let me just show you what it looks like. This is cellular automata, the most basic type of agent-based modeling. Here you see that when agents form a group of 15 agents, they go to the right. Similarly, you could design a simulation in which you say that if the price of a product goes over the value of 10, then companies move to the right, that is, they leave the market. And of course you could be a bit more sophisticated, use AI, and try to see, if we implement certain policies, if we allow a merger, what the reaction of companies on the market could be. This won't be perfect, but it seems to me that it is better than relying on the idea of the average consumer, whatever that means, because I never met anyone average, and I'm sure you haven't either. So those are the things I wanted to discuss. But of course, where I need your help, and where agencies and scholars need your help, is in addressing some of the limits of computational antitrust, which is why we have created this project at the CodeX Center. What we do, in just 20 seconds: we have gathered, as we speak, 55 competition agencies. They have agreed to receive our publications and to exchange with us, and we receive lots of emails from them, including from very small competition agencies asking very specific questions. And then we organize an annual conference, which is coming in December, where all the agencies gather to discuss the advancements in the field of computational antitrust.
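As a minimal sketch of the agent-based simulation just described, here is the toy rule from the talk, firms exiting the market once the price exceeds a threshold, written out in a few lines of Python. The firm names, thresholds, and price path are invented; a real policy simulation would use calibrated behavior and far richer agents.

```python
from dataclasses import dataclass

@dataclass
class Firm:
    """One agent: a firm that exits ('moves to the right') once the
    observed market price exceeds its tolerance threshold."""
    name: str
    exit_price: float
    active: bool = True

def step(firms: list, price: float) -> None:
    """One simulation tick: apply the exit rule to every agent."""
    for f in firms:
        if f.active and price > f.exit_price:
            f.active = False

def run(firms: list, prices: list) -> list:
    """Return how many firms remain active after each period."""
    history = []
    for p in prices:
        step(firms, p)
        history.append(sum(f.active for f in firms))
    return history

firms = [Firm("A", 10), Firm("B", 10), Firm("C", 14)]
print(run(firms, [8, 11, 15]))  # [3, 1, 0]
```

Even this crude version makes the contrast with the "average consumer" clear: each agent has its own threshold, so the simulation answers "who leaves, and when" rather than "what the average firm does".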
So we publish one article every month, there is a podcast episode, and the annual workshop I just told you about, knowing that after the workshop, agencies will send us a two-page report explaining the implementation of computational tools they've been doing within the year. But as I mentioned, there are some limits. Limit number one: which fields of antitrust law can we compute? It might be easy when it relates to prices; it could be much harder when it relates to privacy or the quality of a product. This is a question, Dazza, that I know I want to ask you. Challenge number two: which tools are we talking about? I've mentioned a few, agent-based modeling, natural language processing, but potentially we may want to explore all the tools. Of course, designing the tool is not the only thing you need to do; you also need the data to feed the tool. And so the question is, who should actually design those APIs to improve data flows, and so on. Challenge number three: what's the role of computational antitrust? Let me just draw a parallel with economic science, in which George Akerlof, Nobel Prize winner in economics, argued in a paper published last year that there are important topics that cannot be approached in a way that relies on mathematical formulas and classical economic theory. Well, the same is true for antitrust. Some things cannot be computed as we speak; it might be different in 500 years, but as we speak, some practices cannot be computed, so what do we do with those practices? What are the limits? The very last one is to know the limits. Of course, there is no theory of everything, although I do personally believe that one day we will reach that, but we'll see about that, and I'll be dead for sure. But knowing that there are black swans and events that we cannot predict, then what do we do? And if we have a computable result, what's the limit, and what weight do we give to that result?
Those are a few of the limitations. Something I do want to mention, though, is that the same is true for weather forecasting. It's not perfect, there are limits, and yet, if you rely on it, you might be able to predict when a storm is coming, move people away from the region, and save lives. In fact, if you look at the empirical evidence, weather forecasting is much better today than it was 40 years ago. So I imagine 40 years ago it probably was a nightmare, knowing that it's not perfect as we speak, but still, it is improving. And it seems to me that the same could be said for computational antitrust: it would improve, and although it's not perfect, that is not reason enough not to use it. Because what we see today is that some of the big tech companies are using advanced tools while some agencies are not, and that creates, as Richard Posner was saying, a mismatch which is troubling to say the least. So potentially we may want to equip agencies with some sort of fire, but again, I need you to discuss which type of fire we want. If you are interested in the issue, after the talk you can go to computationalantitrust.com, where you have all of our papers and podcasts; everything is open access, as it should be. So thank you very much, and I'm very much looking forward to our discussion. Here, thank you so much, Thibault, for that tour de force on computational antitrust, and especially for taking time to crystallize some of, you know, the really pressing bigger questions, the conceptual breakthroughs that need to happen to get to the next level. And I wonder if you'd be willing to screen share again and just put up those questions so people can refresh their memory, because we did take some time in advance of today to help craft questions for you, really.
So usually what happens after a talk in a traditional academic or, you know, computer industry setting is that the sage on the stage says things, and then the people in the audience ask questions. We want to flip that around today, and to encourage idea flow, we have questions for you. And those questions are, one more time, just to frame the dialogue... So you mean the four questions? Alright. Exactly. And while you're doing that, I'll just riff a little bit. There's a theme here, which is: with the technologies we already have today, if we don't assume new breakthroughs in blockchain or quantum computing or whatever, where are the best fits? The ones where there might be really good value, or where things are really ready to adopt: we already have the data, we already maybe know the APIs or the this or the that, and with just one little change we could transform a field of antitrust. The cartels and collusion stuff, or the merger review, has a certain kind of shape to it and existing data, and all the monopoly power and the price discrimination and the tying of products and so on has a certain shape to it. So, what's ripe for a breakthrough? Oh, and now we actually have the real questions on screen. So do you want to just state the questions one more time, and then, with the help of Mila as our primary discussant, perhaps we can dive right in? Of course. I mean, I think you framed the question exactly how it should be framed, so it's hard for me, but indeed the question is: if you've been involved in the field of antitrust, you know that prices are easier to compute, and we've been obsessed with prices, potentially for good reasons, but maybe now is the time to leave prices aside a bit.
And we also care about other metrics, such as the quality of a product, or whether, you know, there are dark patterns, and all of those new issues. And so the question is: if you can more easily compute practices which are price-related, should you go that way because it's the easiest, or should you develop more tools so that you are able to tackle other issues, which are more qualitative by nature? That's the core of the question. And I know we have some judges in the room, so I'll be delighted to hear, if not just your feeling when hearing the presentation, what you think would make sense. Is there something you want to try within the courts, or something you've tried already? Because some judges have been experimenting a bit, so that's the question. Hi, well, since you've mentioned the judge in the room, I'll just jump right in. I would actually love to see some artificial intelligence working on some unclear terms, or maybe on the main areas of antitrust questions and cases, so that we could actually harvest this data in a more concentrated frame, and therefore we could work with these data a lot better. Because sometimes we can think about price, and we can think about collusion, but today we must think as well about data concentration and other aspects that aren't really focused on a specific area but can actually be very relevant to the market. So I think we should start maybe with some machine learning and actually analyze the decisions and the criteria used in the decisions, to evaluate what was actually decided as an antitrust or anti-competitive practice.
And after that, we could work with blockchain to have a very specific way of monitoring this information, not only the criteria but whether the criteria have actually been applied from the decision on. And I would actually start from that: just first extract the data. I wouldn't choose one area; I would just assess every area, because when we evaluate the market, if we only look at the price, sometimes you get a not very complete analysis of what's actually going on. So I think we should have just as much information as possible, and after that interpret it, and then go on from there. If I may ask you a question, then I'll leave the floor to Mila, because I'm very curious to hear your thoughts. First of all, you mentioned the analysis of terms and concepts; that's a paper we are about to publish on Monday, in which the authors analyzed whether, when undertakings use the terms gatekeeper or dominance, they mean the same thing, and they found out that this is not the case: depending on the size of the company, they see the concepts in a different way, which is troubling, and very interesting. But let me ask you this question. In the paper we published, they tried to train a machine learning model using FTC and DOJ decisions, but because the case law is so different, the machine got confused. So they couldn't actually train the model, and they had to say, well, we're going to choose just the FTC, because for now it doesn't work, let alone, you know, using the European Commission case law.
And that was curious to me, because I always thought, well, first we have to make the substance coherent, and then we can design the right tools. But it might be the opposite, because if you want to make the tools function properly, then there would be more of an incentive to make the case law more consistent. So my question for you is: do you think this is the type of argument you could make within the courts? Could you say, we need the case law to be more coherent because I need to be able to train the machine learning model in a way which is more consistent? Or do you think this argument will never work because it is too geeky for judges to accept? Well, I think if it's a very clear argument, proposed in a very clear way, it can work out; it would work out for me. But I can't actually speak for my fellow colleagues, and sometimes judges can be very different from each other. For instance, the whole data concentration issue may not be something they will care for as much as, say, a horizontal merger; the case has to be very well presented to the court, so that the substance can actually show what the case is about. Fascinating. Dazza, the floor is yours. So, there was just a bit of back-channeling with me. What I'd like to suggest is that we get another idea on the table, so that we've got some things to play with, and then bring Mila in as our discussant to help us start to process this together. And the person who I think ought to be next is Sumit. So if you'd be kind enough to come off mute and introduce yourself. And maybe you can help us raise some questions which haven't really been in focus yet: how does this all look from the perspective of a consumer, and how does this play out? What are the issues and the options and the opportunities, and maybe, again, the problems and the prospects, from a consumer perspective?
It seems like some of these tools and approaches could be really transformative in achieving some of the aims I've heard you talk about with respect to antitrust from a consumer perspective. But again, when we start automating things, there are perils, you know, potentially, for consumers as well, and I just wonder if you could say a few words from a consumer perspective to help us bring that into the conversation. Yeah, thanks. So I'm Sumit. I'm an economist, and I work for Consumer Reports in the advocacy division in Washington, DC. This has been a very interesting presentation, Thibault; thanks for the presentation. I have a couple of thoughts from the consumer perspective. One of the biggest challenges for us is to explain these issues to consumers and to the average user, so I have some hesitancy about fighting a black box with another black box. I think the question of explaining why we are intervening in markets is important for us as a policy organization, to get support to adopt some of these tools. I did like that you mentioned the focus on prices and these other aspects of services, like privacy and quality; I think they're hugely important in some of these markets. We all know that, you know, on Google and Facebook, these services are free. So if these tools could be used to analyze and produce concrete evidence on how privacy has changed with competition, and how, you know, having just one monopoly provider, or two or three providers, affects privacy, through an analysis of, I'm not an expert in computer science, but, you know, an analysis of various privacy policies and how they've changed over time, etc., I think that would be great. Another thing is, is there a possibility to use some of these tools to just study what's happening in some of these markets today? You know, like prices on Amazon and price discrimination on Amazon.
These are, again, I think, the kinds of questions these tools could be applied to before we go into enforcement; it would be very useful to have that. My final comment, from the viewpoint of adopting some of these tools: it would be very useful to hear how computational antitrust would require fewer resources from the agencies. It's not easy to get resources for agencies. So if we're saying, let's adopt these new tools, and this requires another $50 million, you know, that's not a good sell. But if we can say, from a policy perspective, this will help you save money, then I think we have a much better chance of increasing adoption. Thanks. If you could help us process some of those three points; we could go all day on any one of them, but maybe get us started, and then let's bring Mila in to help us process that. I would love to react to that, because I've been a bit obsessed with those points. Not that I have a perfect answer, far from it, but I've been thinking about it. One thing I wanted to mention regarding the first point is the Google Shopping decision, which I think is very interesting, and it's the same story for all of the Google cases before the European Commission, because it would be very hard to argue that all the practices are pro-consumer. You know, for some of them you get the feeling that this can't be good for consumers, and yet, if you read those decisions, the European Commission does not explain how consumers are being hurt by those practices, which is very troubling. Well, let's say that we could actually come up with network analysis and show that the layer that Google, or any other company, owns is actually necessary for businesses, and try to show the growth and the dynamism of the markets. In fact, in theory, we could quantify not all of the consumer damage, but part of it. And it seems to me that the European Commission wanted to go this way.
They didn't have the right tools or the time, and therefore tried to say, less choice equals bad for consumers, which is a bit of a shortcut, because sometimes less choice is actually good for consumers: it avoids choice overload. So it is not as easy as the European Commission, in my view, argues in the decision, and yet the practices seem to be anti-competitive. I just wanted to mention one concrete example. But as you say, can you measure privacy issues, for instance? That reminds me of a study that was conducted two years ago by Princeton University researchers. They analyzed, if I'm not mistaken, 9,000 different websites and identified dark patterns on those websites, and of course what you would need to do is conduct the same analysis all the time. I'm involved in a project right now where we try to do that, but websites change their structure, and they change where the information is located, so you run into all of those very practical, day-to-day issues. But I do agree with you that it would be very interesting to see the changes on the market and the policy considerations; again, though, it requires conducting the study for quite a long time. That's the type of study we are conducting regarding, and I won't give names, some of the shopping websites, to see if we can come up with some good results. And the very last point, regarding the costs. That's the very first question I asked in the very first episode of our podcast to the researchers. I said, how much? And they said, well, it took us a year, on top of the teaching and all that, so it was quite lengthy for them to train the machine learning model, but the costs, they said, were zero: they were able to do it using their MacBook.
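The dark-pattern monitoring described here might be sketched, in a deliberately simplified way, as a set of regular-expression detectors run against snapshots of a page over time. The two pattern families and the example snapshots below are purely illustrative; studies in this area crawl fully rendered pages with much richer detectors, and, as noted above, keeping detectors working as sites restructure is the hard part.

```python
import re

# Two illustrative dark-pattern families from the literature:
# false scarcity ("Only 2 left!") and pressure countdowns.
PATTERNS = {
    "false_scarcity": re.compile(r"only \d+ left", re.I),
    "countdown": re.compile(r"offer ends in \d+:\d{2}", re.I),
}

def detect(page_text: str) -> set:
    """Return the names of pattern families found in a page snapshot."""
    return {name for name, rx in PATTERNS.items() if rx.search(page_text)}

# Hypothetical snapshots of the same page taken a year apart.
snapshot_2023 = "Hurry! Only 3 left in stock. Offer ends in 04:59."
snapshot_2024 = "In stock. Ships tomorrow."

print(sorted(detect(snapshot_2023)))  # ['countdown', 'false_scarcity']
print(sorted(detect(snapshot_2023) - detect(snapshot_2024)))  # removed since last crawl
```

Comparing the detector output across crawls is what turns a one-off study into the longitudinal market monitoring the speaker is describing.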
And so it seems that some agencies could actually start there. Again, it seems to me that coding their own case law — and by coding I mean just putting labels on practices and industries — is where they could start, potentially just allocating two employees to the task and seeing whether the results are satisfying or not. It will be way more expensive — and Dazza knows a lot about that — when we talk about some other technologies, but there are places where we could start and see if it works, and we are working with some agencies to try to implement a few of those ideas. So those are, again, just some initial thoughts, far from complete, unfortunately. I'm going to put in one thought and then hand it right to Mila, in hopes that it's helpful: to think in terms of time spans when we talk about the adoption and applicability of new technologies to transform a field of law. When you look at the adoption curve of a new technology, the questions differ depending on when in that cycle we're talking. Maybe initially the question is: what is the suite of close-to-zero or incredibly low-cost existing tools, just to pluck the low-hanging fruit of data we have, and analytical models and other sorts of things we could just do on a laptop? And then that raises the question of what it might look like to change the back-end systems, and for more industry kinds of things — maybe registries and other stuff — where now we're talking about a lot of new technology and new systems and integrations. And then you can almost start to imagine, once we get past that hump — and some of that hump will occur as industry and society and the economy transform anyway; that's happening with or without computational antitrust, as things become digital first —
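The "just label your own case law" starting point can be sketched without any infrastructure at all. Below, a toy hand-labelled set of decision excerpts (all hypothetical text and labels, invented for illustration) trains a bare-bones word-overlap classifier — far cruder than real machine learning, but it shows the shape of what two employees with a laptop could bootstrap.

```python
import re
from collections import Counter, defaultdict

# Hypothetical toy corpus: excerpts from decisions, hand-tagged by practice.
labelled = [
    ("the parties agreed to fix resale prices across member states", "price_fixing"),
    ("competitors coordinated their bids before the public tender", "bid_rigging"),
    ("the dominant firm tied the two products together", "tying"),
    ("distributors were instructed to maintain minimum resale prices", "price_fixing"),
]

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

# Build one word-frequency profile per label from the tagged excerpts.
profiles = defaultdict(Counter)
for text, label in labelled:
    profiles[label].update(tokens(text))

def classify(text):
    """Return the label whose profile shares the most words with `text`."""
    words = Counter(tokens(text))
    def overlap(label):
        return sum(min(words[w], profiles[label][w]) for w in words)
    return max(profiles, key=overlap)

print(classify("the suppliers fixed resale prices"))  # → price_fixing
```

A real effort would use proper text features and a trained model, but the labelling step — the part the speaker says agencies could staff today — is the same.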
— and then, what are the opportunities to do a little tweak, or add another call to an API, or tweak a model, where you can maybe get huge value for achieving the aims of antitrust toward the end of the adoption curve? Anyway, I just wanted to throw out the idea of thinking in terms of time when we address these questions. And with that, it's time for Mila. Well, what a pleasure to be here. Thank you so much for the invite, Thibault. This is long due, right — so many emails back and forth, different initiatives — so it's a pleasure to be joining you on the floor. And also Hannah, Summit, and everybody who is in here. I feel lucky to be the one speaking at this point, because I was observing, and something really caught my attention. First, from a research-slash-administrative side of things, we have Thibault saying: should we wonder about how competition takes place? Then we have another voice, from the judicial side of things, saying: okay, we may be able, within the case law or the institution, to rethink the way competition takes place. And then we have Summit, and the first thing he says is that also, from a consumer perspective, what needs to be done is to understand how competition takes place. We are in the fourth industrial revolution, we are facing Web 3.0, its effects, and how things reorganize. So there's a question, and a comment, for everybody in this conversation. I can put my lawyer hat on for a second: whenever a new merger control opportunity shows up, or an antitrust analysis case comes to a law firm, we always wonder how flexible the judiciary or the administrative bodies would be in taking a look at it. So it drives us to a very common point of action. But then, extrapolating that and trying to think about this common field, let's place it in time — in a time of action.
I would say that Thibault was very detailed when he pointed out that there are three areas of competition law where we could think about doing these innovations: merger filings, infringements as a whole, and policies. If we look at the problem that was stated as the most relevant, I would say the place to look is data-driven policy, because if we have data and data-driven policy, we break that riddle of being proactive or reactive that has accompanied competition and antitrust forever. We break that paradigm for the first time in history, and we are also able to learn, to understand why we are doing this, and to have transparency — transparency to also showcase this to consumers. So I would say: okay, we found the golden grail. But from a technology implementation standpoint — and based on what I've learned, not only today but from being lucky enough to pursue this for the past three or four years — this kind of thing, because we're talking about so many different databases and so many different competition areas and industries, is a ginormous problem to handle. So what if we look at the other end of the spectrum, at the low-hanging fruit, and think about merger analysis? Consider that today, the whole world, to talk to an antitrust agency, needs to type in a form. This form is typed in most countries, and it's just a simple Word document in which questions that are very similar are sometimes posed in very different ways, shapes, or forms — in some countries, in the United States, in Europe.
Have you seen a Form CO? It's just like the questions you ask in a filing form in Brazil. So isn't it such a low-hanging fruit to consider that the same structure should, or could, be put in place by the authorities when they're requesting information from the players? I'm not talking about gigantic AI analyzing whether markets are transparent or in public domains; I'm saying the companies are subject to this, they need to do it, and they pay to do it — they pay huge fees. So, going to the question of who is going to pay for this: well, the companies pay already. What if there's some simple math that says, okay, the cost of building a computational filing form is X; the filing fees are X, Y, and Z in the different countries; let's all pay a small fraction of it — and boom, you have your computational global system, with privacy in place and respecting country sovereignty. That's one very easy way to address the problem. I would like to stay longer on this, but since I've talked a lot, I just want to make time for a second, very small thing. The place where you have the same type of transparent database, where you can see the dynamics of price fluctuation, privacy, quality, and ease of access in the competition, is called blockchain. The APIs are there; you just need to scrape the information and start to understand how it organizes itself. And with these DeFi initiatives, it may be even easier for us to understand how consumers relate to two fundamental questions of antitrust, which are price and quality. The other point you both brought to this conversation was privacy, which is a much more modern approach; but when we talk about classical antitrust analysis — please correct me if I'm wrong — we're talking about price and quality, and then we have this add-on of privacy, perhaps, or not. If we're talking about that and we want to do something very quick,
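The shared-filing-form idea is, at bottom, a data-structure question: express the common questions once, machine-readably, instead of in each authority's Word document. A minimal sketch, with entirely hypothetical field names (no resemblance to any authority's actual form is intended):

```python
from dataclasses import dataclass, field

@dataclass
class MergerFiling:
    """Illustrative machine-readable merger notification skeleton."""
    acquirer: str
    target: str
    jurisdictions: list            # authorities where the deal is notified
    combined_turnover_eur: float   # a typical threshold input
    overlapping_markets: list = field(default_factory=list)

    def validate(self):
        """Checks every authority could run identically on submission."""
        problems = []
        if self.combined_turnover_eur < 0:
            problems.append("turnover cannot be negative")
        if not self.jurisdictions:
            problems.append("at least one jurisdiction is required")
        return problems

filing = MergerFiling(
    acquirer="AcmeCo", target="BetaCo",
    jurisdictions=["EU", "BR"],
    combined_turnover_eur=5.2e9,
    overlapping_markets=["online retail"],
)
print(filing.validate())  # → []
```

Once filings share one schema, each authority can layer its own questions on top while the common core is filed, validated, and compared computationally.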
we also have a way, using public data and APIs. So this is kind of what I was looking for and thinking about while hearing you speak: okay, now that we see what the golden grail is, what are the two quick things that perhaps we could do together to get a little more visibility into computational law in the future? There's a lot there. So, Thibault, any reactions? Sure. I'm not sure if we've discussed this together, Mila, but I very much agree with you that merger control might be the first place to start. I didn't want to say it in a way that was too obvious in my presentation, but I'm glad we agree on this one. Again, it depends on which level you're talking about, but it seems that the form — and indeed the basic analysis that we do — is something I have to teach every year to my students, doing the same analysis, showing them that it's not complex; there is a bit of mathematics, but come on, this is quite easy. And a computer could compute all of that far more efficiently. Of course, if you then try to discuss whether we should keep relying on neoclassical theory and the idea that there are equilibria in the market — when there are not, or very often no equilibrium on any long-lasting basis — and try to see how to use complexity theory and agent-based modeling, you're talking about something far more advanced. But we could go step by step and indeed start there. So, in my personal view, if I may share it, I would say: first, merger control, defining the forms; second, training machine learning algorithms on the case law of competition agencies; and maybe a third would be to use publicly available documentation to come up with an NLP analysis and detect more practices. This could be done, again, with just a laptop. So that was my first reaction, but we could talk about this for ages, and I'm more than happy to leave the floor so I can hear your thoughts. Sorry, go ahead.
Oh, pardon me — I was just saying that back to you, but we were talking at the same time, so that's the least useful comment of the day so far. Um, I wish we had more time; I would actually have liked to discuss this in advance. I know I probably went all in; my idea was just to spark some opportunities and possibilities — not that I see a clear way forward and know how to push this — but I think one of the main roles here is to provoke this type of intervention. So I would also be glad to hear from the broad audience how this resonates, whether they also feel there was this link among the different players around "I need to understand how competition takes place," and then perhaps leave the how-to-do-it for the next session together, I don't know. And if not, I could talk about blockchain for ages, so please don't get me started. Okay, I will throw something in there just to confound us a little bit, and this is in the spirit of — speaking of hats — let's get a little hacky and think about some technologies that exist but haven't yet been configured to plumb their value. Blockchain we must return to, and part of the reason it seemed like such a great match to have Thibault and Mila in guest and discussant roles was your thought leadership on blockchain and antitrust. But, you know, there needs to be a fuel for computational methods — whether those methods and mechanisms are expressed through the technology of blockchain, or machine learning, or automation-type stuff, or other stuff — and that fuel is the data. Where is the data that is the input for these tools? Where is it created, where does it reside, how can it be accessed?
How can it be used? And that raises a whole legal layer of ownership and control and a whole bunch of other stuff, plus a lot of technical questions about format and accessibility. But one thing that I feel is lurking just below the surface of all of this: we start with the securities filings and publicly reported data of companies, and market data. But — just tipping the creative hat back towards Summit for a moment — a lot of it really does have to do with the consumers and the people who operate within the markets, and there we have data. And so one thing I wanted to throw in the mix is: what happens when — not even if, at this point — people start availing themselves of their rights under GDPR and the California Consumer Privacy Act? China just enacted a GDPR/CCPA-like act. The thing these all have in common is that people have a right to get a copy of their data from wherever it resides in the market — from all of these same, not coincidentally, companies that we're trying to understand and get data about. There are places like, well, Consumer Reports — one example of an organization seeking to play more of an intermediary role on behalf of, and at the behest of, consumers to help them exercise their rights. That's a project Summit and I have been talking about in the past, and there are many other organizations around the world helping people, let's just say, "get my data" and put it in a data store of some kind. So I have the health data, the financial data, my commercial data, my location data — we have rights to this data. As we start to put it into protected data stores, are there opportunities, in a permission-based, legitimate way, to look across that data — almost like epidemiology, but for markets — to see things like...
Well, this is where I don't know enough about antitrust to say exactly what — price is the obvious thing that I think about, but also other trends, or opportunities, or patterns in markets, or who knows what insights we might get from that vantage point of data. So I just wanted to mix things up a little bit by throwing that into the mix as well. The floor is open. All right, I'll take a chance. First of all, my problem is that I only have one hat today, because you have so many, so that's something I need to think about for the future. We're going to get law.mit.edu hats for our inner team and our collaborators soon, so stand by — we're going to get you that hat, my friend. But go on. That's great, can't wait. Thank you very much, that's wonderful. I mean, of course the question of data is central — sometimes I think it's not so central, by the way, but here it is central. So I would just like to make a distinction between three different subtopics. First of all, when it comes to merger control and the idea I mentioned: it's probably not for today, because you can't really say to a startup, "you should store all of your market data in a private blockchain" — it might be a bit too complex for them — but potentially, in the future, this might be a possibility. And here I'm not talking about studying ledgers across an entire market, but just one at a time, when a company wants to merge.
You can imagine that this company would have to send, or give access to, the private blockchain — otherwise not visible to investigators — in a way that allows the agency to see which data is available and to make sure it is not being taken away before the data set is sent to the antitrust agencies. Because, again, that is a common issue: every three or four months the European Commission complains about it and sanctions a company after realizing that the data set was not complete when the company sent it. So that's number one — where is the data coming from? Here, it's coming from the company. Something else, which is a bit different, but I've written a paper on it, is how companies could use smart contracts to implement anticompetitive practices. Here the issue is much harder, because — without wanting to be too technical — the bytecode of a smart contract is available, but if you try to translate that bytecode back into the original source code, the result is not actually perfect, and potentially not good enough for an agency to understand what the smart contract is all about. So this might be something we may want to investigate. But if you start investigating that, you quickly end up with one solution, which I'm not too sure is exactly right, which is to create templates for companies to use every time they design a smart contract. And I fear this might be a way for agencies, even with the best intentions in the world, to interfere with the conduct of business activities. So there is potentially a middle ground, but again, I don't have perfect solutions as we speak. And the very last point you mentioned: studying data across the markets. There are some patterns that you could indeed try to detect — prices, for one.
You could see the sharing of markets: a company is selling a product in Germany, France, and the US, and suddenly that company is not selling the same product in Germany, while a competitor is selling all of its product in Germany. So there are a few patterns you may want to detect. The beauty of unsupervised machine learning is that you only give the inputs, and the outputs are for the machine to find, so potentially you will detect outputs that were unthinkable for the human brain. That's the beauty of it, and that's why I think it's very exciting. But just one thing in this regard: if you have ever experimented with unsupervised machine learning, you may say, well, I want you to cluster the data into five clusters, or ten clusters. If you run the same analysis with different numbers of clusters, you will indeed find different groupings, and it might be that clustering the data into three clusters gives you certain clusters that are convenient for you as an agency or as a company, while running the same analysis with seven clusters instead of three gives results that are not at all convenient for the sake of your analysis. So that's where human beings are behind the machine, and that's where, potentially, we should be able to come up with some sort of procedural fairness — because thinking that with the machine you can solve it all is, I think, a big mistake. But pretty much no one thinks that nowadays, so that's the good news I wanted to leave on, a positive note. That is good news. And I understand people need to drop off now because it's the end of the hour; however, for those of you in the land of YouTube, we did get started a couple of minutes late, so we'll end a couple of minutes late — mostly to make sure we have an opportunity to thank Thibault and Mila so very much for helping us sculpt the conversation and make it accessible for everyone to contribute.
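The cluster-count sensitivity described above is easy to demonstrate. Below is a plain, library-free 1-D k-means on toy "price" data (both the data and the implementation are illustrative, not from any real case): the same points clustered with k=2 versus k=3 tell two different stories.

```python
import random

def kmeans_1d(points, k, iters=50, seed=0):
    """Toy 1-D k-means: returns the sorted cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # random initial centers
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: abs(p - centers[i]))
            groups[i].append(p)
        # Move each center to the mean of its group.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(round(c, 1) for c in centers)

prices = [10, 11, 12, 30, 31, 32, 90, 91, 92]
print(kmeans_1d(prices, 2))
print(kmeans_1d(prices, 3))
```

With k=3 the three price bands usually separate cleanly; with k=2 two of them are forced together — which pair merges depends on initialization. The analyst's choice of k, made before the machine runs, shapes the story the output tells, which is the procedural-fairness point.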
People didn't get a chance to contribute enough because we just didn't have enough time. So I'd like to ask Thibault, without imposing too much on your time, if you'd be willing to come back later in the year, or possibly next year, to do a kind of part two and take a look at some of these ideas — maybe after the summit you're doing in December — and further sculpt them in some way. We would very much like that opportunity if you're up for it. So, I only have one condition: I need to have a beautiful law.mit.edu hat. And if so, I will be delighted, of course, to come back and continue the conversation. I'm going to count on it. And speaking of wearing different hats, we also have to say a heartfelt — not goodbye, but farewell — to our own TMA, Rogier, who's been with us since the start of the MIT Computational Law Report and, before that, in our MIT computational law course. TMA, come on out. And why is she leaving? Well, she's beginning her ascension to the law itself, starting next week at the University of Chicago Law School. We're about to have a newly hatched attorney of computational law. Hey TMA, we're going to miss you. I'm going to miss you guys. I'll pop in here and there, I'm sure. Yeah, thank you, Thibault, by the way, for such an awesome discussion. I feel like we're going to be talking about antitrust a lot during my 1L year. Oh yes. Oh yes. And how perfect that you'll be at the University of Chicago, so you can really get the foundations of the economics of this whole thing, because obviously it is essential to understand it in order to successfully transform it. So with that, I want to thank everybody for engaging today in another idea flow, and we look forward to continuing the collaboration with you next month, on the last Friday, from 12pm to 1pm Eastern.