All right. OK, everybody, we're now going to circle up for this breakout session. It's breakout number two: computational law, a cool vision of the future. How could computational law work? How would algorithmic rules work? How could we actually do this? I'm very happy to say we're joined by friends and colleagues from our cherished sponsor at the Media Lab, General Electric, and from the General Counsel's office: Chris Pereira. Thank you, Dazza. Please introduce yourself and rock the session. I will. Hello, everybody. My name is Chris Pereira. I work for GE. I'm the Chief Corporate Counsel and also the General Counsel for Business Innovation. I'm here with my colleague Jay over here; we work on a bunch of innovation projects. And this session was really done on short notice. I had dinner last night with Dazza and Sandy Pentland, and we talked about what the future for MIT would look like if you wanted to set up an initiative around technology and the law, and how MIT could be relevant in that space. And the vision is kind of simple, right? How can we use technology to inform a better legal system? If you look back at how other universities have distinguished themselves, take the University of Chicago, for example, which, as you probably all know, has distinguished itself very much in the field of law and economics, going back to the 30s really, with Ronald Coase and the Coase theorem, transaction costs, all of that. It's widely used by the federal courts today, and it's kind of the prevailing way to apply law nowadays. So you could think maybe there's something behind technology, where the next wave is not law and economics but law and technology. How do we make the laws better by using technology to solve societal issues, right?
Another precedent, I think — and this is just context for our solution — is the uniform laws put out by the National Conference of Commissioners on Uniform State Laws, where they publish a model code and each jurisdiction can decide whether to adopt a specific provision of the model code or the entire code wholesale. Now, our vision would be that a department of the MIT Media Lab could be formed, and it could build a platform that replicates a sovereign entity. This platform would be operated by students at MIT and would have a steering committee — I'll get to that on the next page. But the point of the platform is that it would be a way to model the impact of laws and regulations in a virtual environment. You could play around with it, rather than what we have now, where it's really a one-shot transaction. It's your best guess. You make a rule, the proposal gets changed 50 times, and it's rolled out. It was so tough to get it done that it never gets revised. The effectiveness of some areas of the law gets assessed and reassessed, but very rarely do you have a subsequent change to legislation. It just hangs around, and we all hope it works. So this platform would actually be more like a gaming platform, where we would incentivize people to participate and generate user data that we could then evaluate for rulemaking. So this is a little more about the specifics of how this could potentially work. And again, this is a moonshot idea, and it may seem ridiculous to many of you. It's a little ridiculous to me too, but that's fine. So we'll talk about it, and hopefully it's more of a discussion. I'm just laying this out here, and then I'd love to hear from all of you what you think about it and how you would structure it differently.
Essentially, you form a steering group — a cross-disciplinary group of lawyers, economists, statisticians, ethicists — and obviously there are all the issues that came up in many of the panels earlier today and yesterday. It would be a small group, and it would advise the students who actually run the platform. We would also form a Magna Carta, and this Magna Carta would essentially set forth the basic rules of play. What's the kernel of the platform? Because it's not just a technology issue; in the end, you want to make society better. So there are tough questions, right? And some of them, again, came up today and yesterday. Do we think sovereign identity is important, and how would we do it? Would we do it through a blockchain, for example? Would you take a rule-utilitarian approach to how you roll out rules and how you test them? What is your value framework? Do you define success in terms of economics, or is it happiness? How do you define all of that? So I think you've got to have some early view of how you would run this kind of experiment, or this virtual community that would be generated. Now, this platform would only be as valuable as the data you have in it, and hopefully you're going to have as much data as possible so you can statistically model the impact of various regulations. There are two ways this platform would get data. The first is just users. Some of them are students, and hopefully, as I'll explain later, there would be incentives to participate. They would participate, like in SimCity, through their playing habits, and that would generate real-time data. The other is that you would have sponsors — people who want to roll out new rules. Let's say it's a city. Say it's the city of Atlanta, just to pick on one city. They want to change the speed limit and see whether it has a statistically significant impact on traffic fatalities.
So the city of Atlanta would load all of its traffic data into the system, and you would power the platform with a data set that's actually real. So it's a combination of virtual and real data sets in this case. And that data, hopefully, you can anonymize and aggregate in a way that complies with all the regs — obviously it would be a little tougher to do in Europe. But that's the idea behind generating data for the platform. And then ultimately — and I think it's obvious why the platform might be valuable — it goes back to what I said in the beginning. This is really a way to model out the impact of laws and regulations in a way you really can't today, or only with great difficulty. Since I work for GE: if you design a new aircraft engine, you build a ton of models before you change the fan blades, to see what the airflow looks like and all of that. So we do it for stuff that matters, like aircraft engines. But maybe the way we regulate society matters even more, and we don't do it in the same way. So this would be a way to do it. And the rulemaking or legislative process today, as I mentioned before, is not dynamic. In this case, it's kind of fast failure. It's iterative. You would re-evaluate. You would have a statistical assessment made on the back end after a year of how this actually worked out — do we need to redo this whole thing? So that's another way you can fine-tune regulation. And you obviously de-risk new legislation, because before you try it in the real world, you can point to some user data from the platform. It's a little bit like in biology: test in a mouse first before going to the human. And then there are some of these solutions that were discussed today and yesterday. If you come to me saying, look, we don't really need an NDA, we can do the NDA through the blockchain — trust me, I'm this awesome company from, call it, whatever, California.
And trust me, it's really going to work. I'm going to say, do you have any data that it worked before? I'm going to risk my position if I roll out a huge tool across GE and tie up a bunch of people's time and in the end it doesn't work. That's the potential downside. So maybe some of these solutions you can test in this virtual platform first, and then say, well, we have some user data. Again, it's a model, but it worked in the platform. So that would be the last piece. I just wanted to go through an example. Let's say a new tech city — let's call it Atlanta again — wants to model the impact of a new tax regime to incentivize economic activity in low-income neighborhoods. This is an example Sandy came up with: can you change the sales tax in areas where you want to boost economic activity, and does it have an impact on the behavior of your citizens? So in this case, you would partner with the city of Atlanta. The city of Atlanta would put up an X-Prize — you're probably all familiar with X-Prizes. They'd say, OK, I'm going to sponsor this with a million dollars, and you can decide how you distribute the money within the platform and how you incentivize behavior — one winner, a group, however you want to do it. But there would be some real money coming in that for the city of Atlanta is probably insignificant, but for this type of platform for modeling behavior is probably enough to generate significant user data. Then you'd obviously have to anonymize the data, but you would try to get demographic data, socioeconomic data. I talked about the participation. And you set a time frame, right? And at the end, you would evaluate it.
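A toy version of that evaluation — say, the earlier speed-limit question — might look like the sketch below. To be clear, every number in it is a made-up assumption for illustration: the baseline risk, the quadratic link between speed limit and fatalities, and the one-year window are all hypothetical, not real Atlanta data.

```python
import random
import statistics

def simulate_fatalities(base_rate, speed_limit, n_days, seed):
    """Toy generator for daily fatality counts. The quadratic link
    between speed limit and risk is an assumption for illustration."""
    rng = random.Random(seed)
    daily_rate = base_rate * (speed_limit / 55) ** 2
    # Approximate a Poisson draw by summing 100 Bernoulli trials.
    return [sum(rng.random() < daily_rate / 100 for _ in range(100))
            for _ in range(n_days)]

# One simulated year under the current limit vs. the proposed one.
baseline = simulate_fatalities(base_rate=2.0, speed_limit=55, n_days=365, seed=1)
proposed = simulate_fatalities(base_rate=2.0, speed_limit=45, n_days=365, seed=2)

reduction = statistics.mean(baseline) - statistics.mean(proposed)
print(f"mean daily fatalities: {statistics.mean(baseline):.2f} -> "
      f"{statistics.mean(proposed):.2f} (reduction {reduction:.2f})")
```

On a real platform you would replace the toy generator with the city's actual data and run a proper significance test rather than just comparing means.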
And I think the way you want to roll this out: in the beginning, you want to pick projects that would generate a lot of publicity. If this really is a department of the MIT Media Lab, you want to launch it with a splash, and you want to pick the right targets, I think. So maybe tax reform is one. Or corporate law — it doesn't work for a lot of companies anymore; you see that in Silicon Valley, where you have all these private markets for securities. Is there a way to make corporate governance a bit more long-term? That's another model you could run. So there are a bunch of ideas, and you maybe want to focus on the areas where there's actually also some money, to get the platform started. And then if you wanted to roll this out — this is just an illustrative timeline — you put together this group, you have an immediate kickoff session to plan it out, you set up an administrative office within the Media Lab. You do a feasibility study on the platform, because it's easier said than done; it's obviously quite a complex platform. You determine a budget, and you put together this kind of philosophy kernel: how do we want to operate it? I think that's why the cross-functional group is important. Then you develop the platform and your approach to cities. I was on the search committee for GE when we picked our new headquarters — we were in Connecticut before, and we did a whirlwind tour across the country to meet with a lot of mayors and governors and see where our new home would be, right? And you can definitely tell there are certain cities that have a chip on their shoulder, right? They're smaller and they want to distinguish themselves. So this is a way cities could distinguish themselves with tech, right?
And honestly, Boston is one of them, right? They want to be a tech city. Maybe Cambridge is a great fit, right? So I think we could probably get some takers. And then, once you sign up a city, you'll probably want to announce this new department of the Media Lab to jump on that publicity. And you want to use the credibility from your first pilot — which I think you would offer to the city for free in exchange for user data — to get some data into the platform. And then the next client, maybe a corporate client rather than a city, you would charge. So that's the moonshot idea. I'd love to hear all of your thoughts. Tell me it's all dumb, but that's what I've got. Okay. Come up here, please, and sit here. So in a sense, this could have gone like either of the other champion sessions, where we're all fixing to do some groping in the dark toward something — but here we have a very bright, articulate, and particularly achievable goal that you just put before us. So let's all rally around what Chris said, shall we? Okay. You're welcome. Now let's beat the stuffing out of it. He even gave it a name, and I'm just starting this off — so credit where credit is due. Let's give a round of applause. All right, thanks. I appreciate it. I do like it. Computational law — a fantastic name whose previous domain name owner must have let it slip, I guess. ComputationalLaw.org, yay. It's meant to be. Meant to be. All right, so, but honestly, let's talk about it. I'd like to just start with these folks and then bring everybody in — can I make one request? You don't have to do it, but come up here and join us in this dialogue. If you're into it, come on up, and let's see how much progress we can make in the time that we have. All right, either of y'all react. Okay, so I'm buzzing. This is so awesome and exciting. I've been dreaming about this. I think its time has come.
It exists to a certain extent when you think about the Congressional Budget Office models and the OMB and certain private models, but the idea of being able to run models where the output and the unit of measurement isn't just how much money is going to be spent is also very interesting. So I think it's so exciting. Awesome. James? Yeah, so — unlike her, I hate it. It's horrible. But there's some redeeming merit to it, so we'll talk about that part. What I immediately gravitated to was simulations, and things like that are fairly well developed in areas like econometrics and economic analysis. People develop these, they run simulations — I'm not an economist, but I see them on television. The idea is that that kind of knowledge has been reduced to digital form in different ways. So maybe they're working off of established models, where there's been a lot of econometric analysis in the space. There are good data sets. There's some science behind how you structure these. And as I've started to think about how to do this kind of stuff, maybe over the last 15 years or so, one thing I get hung up on is a conversation I had with a friend of mine who's a game developer. If you don't know about game design, a lot of times what happens is you have a fundamental game engine that gets developed. That kind of programming is really different from the next stage, where people come in and fill in the world, if you will. It's actually very different in a lot of ways. They assume a platform, and then they come in and fill in this world with all of these actors. Most of those are physics-based sorts of constraints.
So if you imagine people in a video game shooting at each other, blowing stuff up — the action of those objects interacting with each other is also typically scripted, and they conform to API-style rules in that environment. Just thinking through some other examples you could see leveraging: spectrum environments. There are experiments going on right now with DARPA, trying to see how a bunch of people building radios might interact, simulating interference problems. That's more of a scientific computing problem, and I think you benefit from having Maxwell's equations and mathematical models that are easy to pick up and program. The challenge here, I think, is: do we take the view that legal reasoning is probabilistic, that it can be modeled effectively with multivariate methods? Do we take it that it's maybe a mix, with rule-based systems and traditional case-based AI? Is it an expert-systems problem, or do we have a big corpus of information we're drawing from? So for example, you brought up the speed-limit problem. Is that a video game problem, when I run it through my head, or is it a cost-benefit thing, an economic analysis — what's the impact on trade or flow or something? So I wonder what the metric is that we're trying to evaluate when we're reducing computational law problems to simulation. I think that's a critical question, because if you think about economics, it's the rational agent — that's the main assumption, with obviously some spread to it. Not everything is always known, but agents maximize profit. That, I think, is what you need to map into the Magna Carta, and it's not set in stone — there are a number of values you could take. Let's say maybe somebody wants to do just the rational agent.
Maybe another is to maximize happiness, whatever that means for you. But I think you're right: you have to be very specific about what you're trying to model, because otherwise the models get too complex, right? And once the models are too complex — I don't think AI can solve trade-offs very well. Trade-offs between economics and happiness, right? So as a next stage you could even — I mean, if you run a model like this, it's not going to conclusively answer whatever problem you have, but you're better informed, or you have a data set to better inform your opinion. So you could start with the rational agent, you could run it on happiness, and then there's the human element that you need to bring in, which weighs the trade-off — and the right trade-off may be different for Atlanta than for Hong Kong or New York, right? I'm not sure I'm answering your question, but I think the modeling is key: what are you trying to model, without getting exponential complexity by trying to do too much at one time? Well, and I think what's really cool about it is that it gets better. The first question is where you start with the model, but then, as these policies potentially get implemented, there's a chance to take real data and see how the model plays out in the real world — kind of like how the weather maps calibrate — and it gets better over time. I'm still trying to understand the scope of what is sought to be accomplished. So as I see it — please correct me — the idea is to have a platform where you can model different laws, or variations to laws, and then see the outcome, and based on the outcome and the Magna Carta that you laid out — greatest good of the greatest number — apply that model. Is that the idea? Yeah, it's a little bit of a Buddhist answer I'm going to give you. It's not so much summiting Mount Kilimanjaro; it's more the journey up, right?
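The "rational agent versus happiness" point can be made concrete with a tiny agent-based sketch and a pluggable objective function. The scenario, prices, and weights below are all invented assumptions, loosely echoing the sales-tax example; swapping the objective is the knob the Magna Carta discussion is about.

```python
import random

def run_sim(objective, n_agents=200, seed=0):
    """Each agent chooses between two districts; district B has a
    (hypothetical) sales-tax break, so its price is lower. The
    objective function scores each option per agent."""
    rng = random.Random(seed)
    chose_b = 0
    for _ in range(n_agents):
        options = {
            "A": {"price": 1.00, "distance": rng.random()},
            "B": {"price": 0.95, "distance": rng.random()},
        }
        if max(options, key=lambda k: objective(options[k])) == "B":
            chose_b += 1
    return chose_b / n_agents

# Purely rational agent: minimize cost, nothing else.
rational = run_sim(lambda o: -o["price"])
# "Happiness" agent: also dislikes travel (the 0.2 weight is arbitrary).
happiness = run_sim(lambda o: -o["price"] - 0.2 * o["distance"])
print(f"share shopping in the tax-break district: "
      f"rational={rational:.2f}, happiness={happiness:.2f}")
```

With the rational objective every agent takes the cheaper district; once the happiness objective adds a travel cost, only some do — the value framework, not the data, changes the predicted outcome.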
So the platform is the journey, right? You're enabling something, and people can use it however they want to use it, but I'm not defining a societal outcome — that's going to be very dependent on culture and who you are, right? I think the key value is creating a platform that enables analysis. That's the vision, I would say, and doing it in a way that's credible through MIT. But isn't that what the — May I offer — hold that second question, and then ask it — just to extrapolate that a little bit. I think the spirit of the conversation that gave rise to this masterpiece started from — tell me your name again, I'm sorry? Marcy Harris. Marcy's question, which was much broader. It wasn't really, how do we model existing statutes? The essence of it was: how do we refactor what a statute is, how a statute exists, such that it's now a creature of computation — and what does that mean for rulemaking? It would be quite different from the legislative sessions I used to work on as a staffer in markup committees. It's one thing when you're writing 16th-century prose; it's a whole different thing when you're arguing over parameters and vectors and thresholds algorithmically, and identifying exactly, by design, as Chris said, what the expected outputs are, what our success metrics are. So some of this isn't merely modeling existing laws — which took me a couple of years at the Media Lab to figure out. I kept trying to work out what Sandy meant, coming up with ideas that were all just modeling existing laws, linear extrapolation, and people kept saying, no, no, Dazza, not that — including Joi Ito, in a big room — but rather: transform what it is. Look forward. So I think that's part of the spirit. When Chris says let's find a jurisdiction, refactor like that, and create a new method, a new systemic type — that's part of the spirit of it, I think. Am I anywhere close? Okay, all right. Can I offer a meta on that too?
Just quickly: I think there's the possibility, in the future, of moving the controversial, political, back-and-forth layer of policymaking up to the question of goals, and letting the implementation take place more at the level of looking at data and outcomes. So here's the problem. As they say, don't ask how laws and sausages are made. Laws don't get made on purely econometric or good-bad analysis; they get made as a result of compromises. So at a micro level, I guess a platform can help once a law is being made and you have three alternatives to choose from — I think that's where a platform might be effective. But which laws to make is a subject matter outside the scope of the platform. Does somebody have a perspective on that? Yeah — Marcy? You want to try to be bold? This is one of the guys I learned to do computer stuff in law from when I first was practicing. Thanks again. Marcy mentioned the Congressional Budget Office, and I'm thinking: could this actually be the CBO for the rest of us, so that we could actually test laws and then enter that sausage factory and help people understand the consequences? Rather than being the end-all, it could be a tool that helps. I think it's a tool. Yeah. Any more discussion? Okay, I'm gonna go first. Oh, just a simple thought on the starting point, and this is just to see where we are on laws. We've often thought about how all these states have different codes laying out allegedly the same thing, but you can't ever compare them. If we could put all the different state statutes into one code system and see where they match and where the odd variations are — it doesn't really tell you where to go from there or fix the policymaking problem, but it'd be interesting to at least know what our baseline is. It's something we've actually thought about as a way of just compiling this stuff, and it might help us know where to go from there, I don't know.
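That cross-state comparison could start with something as crude as pairwise text similarity over statute fragments. The provisions below are invented stand-ins, and `SequenceMatcher` is just the simplest off-the-shelf measure; a real effort would want section alignment and legally aware normalization.

```python
from difflib import SequenceMatcher

# Hypothetical fragments of "the same" provision in three jurisdictions.
statutes = {
    "State A": "A person shall not operate a vehicle while impaired by alcohol.",
    "State B": "No person shall operate a motor vehicle while impaired by alcohol.",
    "State C": "Operating an unregistered vehicle is a civil infraction.",
}

def similarity(a, b):
    """Crude character-level similarity ratio between two provisions."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

names = sorted(statutes)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        print(f"{x} vs {y}: similarity {similarity(statutes[x], statutes[y]):.2f}")
```

Even this naive measure separates near-uniform provisions from genuinely divergent ones, which is the baseline the speaker is asking for.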
In the past, I've called that mapping the policy genome — trying to figure out how it all connects, especially across multiple jurisdictions. Yeah, exactly. Thanks. Have you thought about using the way people interact with their environments as the starting point? In the past, I've thought that maybe regulation would be a useful entry point, because it deals with people's direct relationship with physical objects. You could use the Internet of Things to map out how people actually interact with certain objects, and then use that as the template to formulate regulation that reflects the rules people actually abide by. That might overcome one of the real issues with poorly created regulation, or whatever you want to call it — that it's not usable, or that people fall afoul of it. So I'm just interested in your thoughts on that. Yeah, maybe one thing I would say — I like the analogy of the Congressional Budget Office. I think the bar is pretty low in terms of legislation. I have a friend who's a world-famous economist, and he said people always criticize him that economists can never project anything, right? And he said, but here's the thing: would you rather drive your car in complete darkness or with a dirty windshield? What economists give you is driving a car with a dirty windshield. And this is kind of a dirty windshield. It's not perfect, I'm sure, but it's better than nothing, right? You model it — and nobody really models ex ante, maybe a little bit ex post, but this would do both. Sorry. Thank you. Hi, my name is Jeremy Fancher. I'm an attorney at Bryan Cave.
Speaking of the dirty windshield being definitively better than absolute darkness, I was thinking about instances in law enforcement. For example, I think it was San Jose, or some city in the Bay Area, that started using Palantir data to identify possible areas where illegal immigrants lived. And it created this feedback loop where law enforcement started targeting particular neighborhoods, and then those particular neighborhoods started reporting higher crime rates, which meant that the model sent more police there, which meant that more people were arrested there. So I wonder if there are dangers in this model, or at least in presuming there's no possible step backwards, right? I think there's a pretty serious propensity for dangerous feedback loops. I'm just wondering if that's something you've considered, or how you would deal with it. I think one of the experts on this group should be somebody who knows something about feedback loops in data and what the hazards are, right? It's a real issue; I think you're right. But it's not an unknown that people haven't dealt with before — economists talk about it all the time. So it's a good point. I was going to interject to follow up on the policy-genome problem. If we're looking at the kind of people who will piece that together, those are attorneys. I think we've talked a lot about the fact that we're all here because we believe there's a transformation in legal practice, and that at least a form of law-and-computer-science has evolved. And following on this idea that there are pitfalls we should be aware of: we will be the people who document that genome. We are the people who are prepared and trained and knowledgeable in that space.
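That runaway dynamic is easy to reproduce. In the deterministic sketch below, two neighborhoods have identical true crime rates, reports scale with patrol presence (you only find what you look for), and the dispatcher keeps shifting patrols toward wherever reports are higher — every number is an assumption made up to illustrate the loop, not a model of any real system.

```python
def patrol_feedback(n_rounds=20):
    """Two neighborhoods with IDENTICAL underlying crime. Reports are
    proportional to patrol presence, and each round the dispatcher
    shifts 10% of the quieter side's patrols toward the louder side."""
    true_rate = [0.5, 0.5]          # same ground truth in both places
    patrols = [0.51, 0.49]          # a tiny, arbitrary initial asymmetry
    for _ in range(n_rounds):
        reports = [true_rate[i] * patrols[i] for i in range(2)]
        hi = 0 if reports[0] >= reports[1] else 1
        lo = 1 - hi
        shift = 0.1 * patrols[lo]   # reallocate toward more reports
        patrols[hi] += shift
        patrols[lo] -= shift
    return patrols

final = patrol_feedback()
print(f"final patrol split: {final[0]:.2f} / {final[1]:.2f}")
```

After twenty rounds the tiny 51/49 asymmetry has grown into a lopsided split, even though the underlying crime rates never differed — exactly the failure mode the questioner is describing.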
And so, as we're reducing these, I think it's useful to think about how we represent knowledge well and reduce some of these domain knowledges to digital form, because ultimately what we're trying to produce is essentially code — agents that are knowledgeable about this domain. I think we should probably focus on traditional kinds of law, like a corporate-law agent, rather than, say, the reasonable person — pick a subject matter and reduce that. That's maybe also a fruitful place to start. Oh, and that's why I thought it was so exciting to see this model here so well developed and thought out, and to have the piece about the Magna Carta. Because I do think we have to start somewhere, and there will be so many parts of it that are flawed when it starts, but even to have people in the room who are thinking about happiness as a measure, or justice issues, and other things — for that conversation to take place in a place like MIT, as opposed to Congress right now... These technologies are coming. If we wait for the political system to adopt them without thinking about them really well first, it will necessarily be reactive, as it usually is. And with fast-moving technologies, especially AI, where you're baking things into algorithms, thinking about this proactively, in a really holistic way, from the get-go is going to be super important. You're gonna do the report-out of this, by the way. I think happiness goes up to 11 on this scale. On the Magna Carta — are there other people talking about that as well? Mark Esposito, at Harvard, for one. Is this the same conversation? I just made this up last night. Well, there are other people talking about a Magna Carta concept for AI as well. Yeah, I think so. So I'm just curious about your thoughts on something.
I say this speaking as someone who's usually the one building the gritty models inside these things. There's a way in which you build them and you hand them off, and — like we've talked about, they're wrong, the darkness compared to the dirty windshield — there's also a way in which, once you build it, the person you hand it upstream to really believes it, even if you caveated it in all the ways. And especially since — I have no legal background — I hear about laws from, like, the 30s, and it's like, yeah, is that going to predict and be useful when the internet shows up? Is there a way that our bias to believe these things, once we start using them, is going to cause us more harm than the tools do good? What are your thoughts about that? Yeah, I'm a huge one on confirmation bias, selection bias, and all of these. And so I think the idea behind this is iterative, right? Let's say we come out with something, but we continuously reassess whether it actually makes sense and whether real data confirms the model output or not. So there's backtesting built into it, with all the caveats about feedback loops and all of that. I don't know — there are other perspectives — but that would be my answer. Well, I just think of it a lot like NOAA with the weather patterns. You still have, you know, 20 models of where the storm's gonna hit, and eventually you know what path it took. I think over the next couple of decades, we're going to have the ability to use real data in ways we never have before. When you look at healthcare companies working with the Apple Watch to actually have real data coming in about people's health indicators — that's so much information from which to derive policy, and measures of how things are impacting it. But the question is: how do we measure that? How do we maintain trust, in order to have enough data to make these models?
Can I do a quick follow-up to that? Yeah, I think NOAA and the weather predictions are a great example, because those are trash more than seven or ten days out; beyond that it's just long-term averages. And it makes me really think about — you're saying we're going to do this iteratively, we're going to do this feedback loop, but you also talk about how things change rapidly. And again, as someone who's built these kinds of models: do we believe our historical data is even reasonably representative of the future when things change? And then how do we do backtesting, or any kind of — you know what I mean? Yeah, to me the question is, how long is your trailing window? In statistical modeling, or in finance, if you do option pricing, you say, OK, I'm going to price my option based on a 30-, 60-, or 90-day trailing period, and in each period you'll have a different option price, right? So I think what you want to do — and this depends on the volume of the data — I'm a big fan of dynamic regulation. Regulation, the way it happens now, is through a rearview mirror. I work a lot in financial services, and I saw that you load this huge cost into the system because we had a financial breakdown, which ultimately maybe doesn't make all that much sense; it doesn't really go to the root cause, right? But with this platform, I think you create a kind of real-time data set that you continue to evaluate as society changes. And there are obviously political views on all of this, so you need to have a Magna Carta. I think a shorter data set is probably more relevant than one that's 20 or 30 years old in today's society. So you can have a prediction about how this regulation is going to do, and after six months or a year, you do a test and take another look at it.
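The trailing-window trade-off Chris describes shows up in even a toy backtest. The synthetic series below — a drifting trend with one abrupt regime change — is pure invention; the point is only that when the world shifts, a short lookback adapts faster than a long one.

```python
import statistics

# Synthetic "societal indicator" with a regime change halfway through.
series = [10 + 0.1 * t for t in range(100)] + [30 - 0.2 * t for t in range(100)]

def forecast_error(window):
    """Predict each point as the mean of the trailing `window` points
    and report mean absolute error -- a minimal rolling backtest."""
    errors = []
    for t in range(window, len(series)):
        prediction = statistics.mean(series[t - window:t])
        errors.append(abs(series[t] - prediction))
    return statistics.mean(errors)

for window in (5, 30, 90):
    print(f"trailing window {window:>2}: mean abs error = "
          f"{forecast_error(window):.2f}")
```

Here the 5-point window tracks the regime change far better than the 90-point one, which echoes the intuition that a short, recent data set can beat 20 or 30 years of history when society is changing — at the cost of being noisier.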
And with the expectation that laws are gonna change, because now the expectation is that once a law is in place, it's never gonna change, no matter what, right? And this actually reverses the bias: the rule will change based on subsequent behavior, and if it doesn't change, that's actually the exception, because we think that's unusual in today's society. And Chris, just to follow up on that, I think the bigger hurdle here, in the legislature, whether it's a municipality or larger, is to create an environment for rulemaking where it's okay not to have a full absorption of liability because we had a rule that we had to change because new data came in, and now we'll change it again. I mean, certainly that's not supported now. The liability is immediate, and you say, oh, this was your rule and everyone bears that, which I think would be a hurdle to the actual implementation from model to actual rulemaking. Chris, if I could jump in here, I love teaching comparative law, that's my favorite topic. And Hart and Fuller had a very famous debate coming out of essentially the Nazi war crimes material, which got into positivist perspectives versus natural law. But the thing that's interesting here, I think, is: what is the function of law? By studying these legal systems, we learn how law functions. What's the purpose? What are the actual social utilities that the laws are getting at? And the other thing is we get a finer view of what we mean by law. How do we separate the legal from the other stuff? Because I think the problem with the simulation approach is that you miss very important variables when you create functions like this, and those can sometimes be the sorts of things that create the unintended consequences, or create outcomes that are not consistent with the data you would observe empirically. So it's probably good to follow this weather example.
Seven to ten days out, yeah, the model is not as good at predicting, but seven or ten days after, the same models, applied retrospectively, are really good at explaining why. So there's an explanatory power for law, and there's also something that's predictive and provides certainty and all the other features. So Hart and Fuller debated about what are the things that constitute law, sort of the definition of law, and I think that's a place to start also. How much of what we're representing is economic systems, or human behavior around relationships with family versus communities? Those are the things: if you can find a place where the variables are easy to identify and quantify and the scope is easy to sort of trust, we might have less concern. I don't know if there are some examples to start with, but that's the criteria. So if you're gonna create a simulation, just those two things: the variables are well-defined, and there's a strong relationship between the presence or absence of those features and a particular predictable outcome. Ultimately, law is valuable because of what it does, and that's what we grade. That's the metric we use: how valuable is our simulation of law, based on what we believe law functions as? How many features of law do we see in that system? Yeah, I agree. Thanks. We're representing a city that wants to do it now. I just had a follow-on. I think this is a really fascinating thing, to focus on discrete variables that you can sort of identify and fix on. But I think the inverse of that is thinking about what variables you absolutely don't have, you know, the unknown unknowns, right? And, you know, if you just think about the different branches of government, there's a big difference between legislating and executing.
And so, you know, for instance, I have a DJI drone, and I was like two miles away from an airport, and I was like, fuck it, I'm just going to fly this drone and it's fine. This happened outside the statute of limitations, it was like six years ago, so it's fine. Don't make admissions against interest, please. We'll keep you out of jail. And so I turned it on and thought, this is my little plastic thing with spinning blades, so I'll definitely be able to tell this thing to take off. And it said, no, you can't take off. You're too close to this airport. I know where you are. The design specs of the device itself prevent you from violating the law. So I think there's an interesting distinction between a device that's essentially coded so it cannot deviate from a particular law, which is an amazing thing to simulate, because if you change the law, then everything just behaves in that particular way, versus, okay, we changed the speed limit from 75 to 80, and of course not everyone who was going 75 miles an hour is now going to go 80. Until there are autonomous vehicles. Until there are autonomous vehicles. And so I think this idea of, you know, enforcement is a variable, right? I said police earlier, but there are many other instances of, you know, how much someone can get away with, right? If they're using cash, they're not going to be reporting the tips they receive at the restaurant, versus something that comes in a paycheck and is withheld. So I think this whole gray area of human behavior, and what incentives there are, and how much surveillance there is, is really interesting, and it dictates the perspective that you have on the law. So the idea of a self-executing legal structure versus one that requires more direct enforcement. Exactly. It's a really important point.
I think one of the things a lot of people talk about, especially in regulatory spaces, is what level of ubiquity this is. How many people, sort of one-to-many, are we talking about? Is the N really small or is the N really big? It absolutely is something people think about, and you certainly do. You make these regulations or laws with an eye to the enforcement problems. And maybe if I can add: I brought up the Coase theorem for a reason, because the Coase theorem is from Ronald Coase. He won the Nobel Prize, and in a paper from 1937, essentially it's about transaction costs, and how private contracting, where transaction costs are really low, leads to a better outcome than taxation and regulation. And the idea behind that, essentially, is to reduce monitoring costs, which are a real cost. So that goes to your example, right? If you have a technology solution that's embedded in the law, you can lower the monitoring cost, and we should have an overall societal benefit, because monitoring cost is a negative externality. So you can actually connect this to a lot of what's already there on transaction costs if you design it right. We might also suggest that there's a notion of nudges versus enforcements or direct prohibitions. So to take your drone example: the drone scenario you're describing requires shapefiles on that device that are then updated, or not. What happens when a new airport gets built and you're flying your drone there, versus, oh, this one is no longer operating, and so now you're prohibited from flying in the place that's actually the local park? Those sorts of problems are one set of things: how immutable is the legal structure? But then, if you have it on nudge, it's like, hey, you could really get in trouble. It talks to you and says, I'm afraid I can't do that, Dave. You know. That never goes well. No, no, don't take the drone outside.
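The drone's refusal to take off is, at bottom, a geometric check against a table of restricted zones, which is the "code as law" primitive being discussed. A sketch of that check; the zone list, radii, coordinates, and function names are invented for illustration and are not DJI's actual implementation:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical no-fly zones: (lat, lon, radius_km) around airports.
NO_FLY_ZONES = [(42.3656, -71.0096, 8.0)]  # e.g. a zone around Boston Logan

def takeoff_permitted(lat, lon, zones=NO_FLY_ZONES):
    """The self-executing check: refuse takeoff inside any restricted radius."""
    return all(haversine_km(lat, lon, zlat, zlon) > r for zlat, zlon, r in zones)
```

Updating the law then means shipping a new `NO_FLY_ZONES` table to every device, which is exactly where the stale-shapefile problem (the closed airport, the new one) comes in.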
Just stay inside. But that's maybe the nudge approach. And then you can see it giving you the right incentive and saying, you really should not do this. And then you can ignore it, because you're in a park. Yeah, exactly. Yeah. The Coasean point's really important. You know, sorry. No, the nudge, I love the nudge idea. And, you know, think of cars and speed limits. The simplest self-enforcing one, the easiest one, would be: you never go above 60, except when there's an emergency and you wanna get to the hospital. There are times we're just violating the law and we don't care. Do you automatically get a ticket, or does it stop the car from doing it? It's gonna be such a huge, sorry. In jurisprudence, of course, what we talk about is rules versus standards. So the level of specificity that the pronouncement has is also a feature. You could have a pronouncement that's really, really loose, like just don't drive recklessly, or it could be a number. And then there's the prohibition stuff: you just can't drive your car over 55 or something, but you can always remove the governors. That was cool. More of the car examples. Actually, I don't really wanna say anything, keep going. Well, so to that point about the car example, I was joking a tiny bit about autonomous vehicles, but I do think, as we're thinking about these big questions, we do need to think in terms of the next decade or two, and then who knows what happens. But there will be more automatic enforcement: you know, a speed limit changes and all the cars get an update, and people are not going beyond the speed limit, and drones are, you know, talking to GPS. And so if there's a no-fly zone over DC because the Pope's there this weekend, then all the drones are notified. And I think the same is gonna be true in lots of different places we can't even imagine right now.
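The nudge-versus-governor distinction being drawn here can be expressed as an enforcement mode attached to the same limit: identical rule, different consequence for exceeding it. A toy sketch, with all names invented for illustration:

```python
from enum import Enum

class Mode(Enum):
    NUDGE = "nudge"  # warn, but let the driver proceed (and bear the ticket)
    HARD = "hard"    # governor: the vehicle refuses to exceed the limit

def apply_limit(requested_speed, limit, mode):
    """Return (actual_speed, warning) under a given enforcement mode."""
    if requested_speed <= limit:
        return requested_speed, None
    if mode is Mode.NUDGE:
        return requested_speed, f"{requested_speed} exceeds the {limit} limit"
    return limit, f"capped at {limit}"  # Mode.HARD
```

Under `NUDGE` the emergency drive to the hospital is still possible; under `HARD` it isn't, which is why the choice of mode is itself a policy decision, not an implementation detail.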
But it will never be perfect, I'm sure, though modeling will become easier because certain things will be more predictable. Well, I guess as long as I have the mic. Yeah, keep going. I was just thinking, you know, the transparency of the model, those things are so important. Because in the one area where we really have tried to have models in policy-making, you look at climate change: it's the most complex model possible, and so you just get dueling models. But I think, as you were saying, you need to be able to really transparently see the results and understand what goes into them, otherwise it's just going to be a little too crazy. All right, maybe to follow up just on that point. I mean, the ideas of evidence-based rulemaking and, you know, data-driven policy and all these great buzzwords are wonderful. But when we're talking about the subject matter of the regulation or the legislation or the enforcement action, if it's sort of adjudicatory, you have to have an actor that understands that subject matter. And I think one of the critical points many of us are trying to make is that lawyers may no longer say, I can't do math. It's simply not acceptable, and it's not good or competent practice to pass on these issues without investing the time in understanding whatever subject matter you're dealing with. I think law and economics dealt with this very effectively over the last 20 or 30 years. Think of the law of the horse example from Easterbrook: I think in that paper he actually does make the point about the law of the horse and then references law and economics. And, you know, how many people here would take on a client representation for a competition policy issue if they didn't understand basic microeconomics? How many of these problems exist in the data space right now, where we're just not prepared yet? So I think that's an interesting idea: pedagogy is responding.
There are law schools today that are teaching Python to lawyers. I think that's a terrible language, so that's the wrong thing to do. But yeah, no: Perl, Perl. Hey, my fellow legal hacker. Look, another legal hacker has emerged, from Toronto. Hey, Joanna, welcome. So glad you could join us. What's on your mind? I was just thinking, you know, about how law works now. We've made different points about enforcement, differential enforcement, and how feedback loops can be incorporated into the calculations so that we don't have more problematic effects than we anticipate. And I wonder about that, because I have yet to see that work extremely effectively in mechanisms for current law and current reality. And so I think it might be false confidence to think we have the capacity, or perhaps the inclination, to thoroughly integrate what would be necessary for us to adequately reflect the implications of law throughout society. So if we're gonna pursue this approach or a similar one, perhaps one requirement would be to have not only data but also reflection mechanisms that are incentivized and very usable by everyone, and maybe scaled in some way to make sure that there isn't a disproportionate emphasis on privileged communities, or certain communities, whatever, probably privileged, but it could be any number of things. And so that we avoid the effect of black holes in our communities, which in this process could perhaps be even more easily ignored. I'm not sure if it would be more easily ignored, but anyway, I mean, this provides an opportunity, I think, for transparency. And I think it would just be useful to have some thoughts about what mechanisms would encourage that kind of transparency. Cool. Thank you, Joanna. Okay, Chris, shall we wrap up? Thanks a lot for the high level of participation. I enjoyed it very much.
If you wanna continue the dialogue, and hopefully some of you are interested, I think Daza is gonna put a link to an email group on the website. I don't know exactly what you wanna do, Daza. I already did it, just did it while we were talking. So come to the conference, and register at mitlegalform.org too, to participate in future activities of the MIT Legal Forum. Thanks to Sandy for giving us a little more room to continue the dialogue at least. And if you wanna participate, for this group, some people have said they'd like to have some email or some communications about it, so we can set one up that's computational law, if you just wanna continue the dialogue. Who knows what, if anything, will happen with this great idea Chris had, whether MIT wants to take it up or not. There are conversations worth having. But if nothing else, if you wanna continue the dialogue in a forum, here you go: mitlegalform.org. Clicky, clicky, and you're in. Perfect, thanks Daza. Okay. Thanks everybody. Thanks everybody. Great.