Welcome, everybody, to this special session, a fireside chat with Joshua Browder of DoNotPay. We're so glad that Joshua is able to join us today. This is actually his second appearance at law.mit.edu's Computational Law Report; if you look on our media page, you can see a very interesting, kind of stage-setting podcast that Brian and I did with him, oh gosh, about a year ago now, or maybe a little more. So much has changed since then, thanks to the ready availability of generative AI, and I can't think of anybody who has done more creative and provocative work with this technology in a legal context than Joshua Browder. I want to thank you again for joining us. One standard disclaimer first: of course, MIT does not endorse DoNotPay as a company, or any of their products or services. This is educational, and I do think it is very informative to see what is possible, especially in a consumer context. So with that, maybe we could unshare the screen so we could see Joshua. Joshua, I'd like to invite you to introduce yourself and your company, and to let us know: what have you been doing with generative AI and GPT, through your company, for consumers? Well, thank you so much for having me. It's a shame to hear that MIT doesn't endorse DoNotPay, but I understand. At a high level, DoNotPay is automated consumer rights. We like to call ourselves the world's first robot lawyer. We've been operating since 2015, and we've had a huge amount of success with templates: rules-based systems where, if this happens, we send an angry letter to the government or a corporation to get a refund, or to get someone out of a parking ticket. That has taken us very far; we've won over two million cases just with letters. But what's really exciting is the past year.
The AI models available from companies like OpenAI, and GPT-J, which is an open-source counterpart to GPT-3, have really, in my opinion, improved by 10x. Because of that, it's allowed us to actually go back and forth with these companies and governments over disputes. We've done things like automatically negotiating live with Comcast's live chat, where our bot talks to Comcast; perhaps they're using an AI that talks back, and our AI legal system negotiates a bill down. We've had a bot phone up a bank and, using a synthetic voice, negotiate to get a wire fee refunded. And next month we're taking it to the next level: we're introducing the robot lawyer into a physical courtroom, where the bot will be whispering in someone's ear what to say in a speeding ticket case. We're really trying to push the boundaries of bringing this technology to ordinary people, because typically, when there's a powerful technology like AI, it gets into the hands of the big corporations and the government first. We want to give power to the people, and actually give consumers access to this so that they can fight for their rights. And the work that you are doing... frequently, when it comes to powerful new technologies, the little guy, individuals and consumers, are the ones who are almost subject to it, and sometimes we don't come out as well in the overall deal. So it's great to see you applying this creatively and effectively on behalf of consumers. I want to ask you to go back one half step. You made a quick reference to wire fees that I believe you had refunded from a bank, and you said it very quickly, but I was hoping you could go into a little more depth about what I thought was a very intriguing integration of voice-to-text, generative AI, and text-to-voice, and how you went from a phone tree to talking to a live person. Just tell the story of what you did there and what's possible.
Yeah. So all of these AI language tools are useless if you can't actually communicate their outputs to where they need to go. And that's what we specialize in at DoNotPay: we have bots that go onto these websites, do all the clicking, and submit the text to get a response. So we decided to take it a step further. There's an amazing API called the Resemble API, and it allows you to clone your voice. You record five minutes of yourself talking to the AI, and then the AI will replicate you and your voice. We then used a Twilio bot to phone up Wells Fargo, and it was my voice, in robotic form, talking to them. The conversation was powered by GPT-3, the voice was powered by a different AI called Resemble, and then we actually had other AIs as guardrails, because there are huge limitations with this technology, which I can go into. The biggest limitation is that the AI talks too much. So if a representative is saying something like, "Hang on, let me look at it," the AI is inclined to have a three-sentence response. So we actually have another AI which decides whether to say something at all, because it was talking too much. This is going to be a problem for our courtroom case coming up, so this was a good test for that. The final thing I would say is that the AI exaggerates and lies a lot. With our Comcast dispute, when we sent the AI to get a discount, it said I'd had five outages in the past 24 hours, or something like that. That might be a good strategy, but from a liability perspective it's not very good for DoNotPay. So it's all about what you're prompting these models with: we've prompted it to stick to the facts and not exaggerate, and we've managed to clean it up that way as well.
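As an editorial aside, the speak-or-stay-silent guardrail described here can be sketched in a few lines of Python. Everything below is illustrative: the hold phrases, the function names, and the rule-based gate are stand-ins for whatever second model DoNotPay actually uses, which has not been published.

```python
# Hypothetical sketch of a "should the bot speak?" guardrail.
# A second, simpler decision layer (here a rule-based stand-in for
# another model) gates the main language model so it does not talk
# over the human representative mid-task.

HOLD_PHRASES = ("hang on", "let me look", "one moment", "please hold")

def should_respond(agent_utterance: str) -> bool:
    """Decide whether the bot should reply this turn, or stay silent."""
    text = agent_utterance.lower().strip()
    # Stay silent while the representative is still working.
    if any(phrase in text for phrase in HOLD_PHRASES):
        return False
    # Respond to direct questions or requests.
    return text.endswith("?") or "can you" in text or "do you" in text

def constrained_reply(agent_utterance: str) -> str:
    """Placeholder for the language-model call, carrying the
    anti-exaggeration instruction the speakers describe."""
    if not should_respond(agent_utterance):
        return ""  # say nothing this turn
    prompt = (
        "Stick to the facts. Do not exaggerate or invent outages.\n"
        f"Representative said: {agent_utterance}\n"
        "Reply in one short sentence asking for the fee to be waived."
    )
    return prompt  # in a real system, this prompt would go to the model
```

In practice the gate itself would likely be another model call rather than keyword rules, but the architecture, one model generating and a second deciding whether to emit anything at all, is the point of the sketch.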
And so I saw the video and was amazed at your demo: the result of this rather experimental, I would almost call it, use of this chain of technologies was that you got your wire fees refunded from the bank. It was a triumph. One little question: I noticed, when I tried to find the video, that it looks like Twitter flagged it for a violation. What's that all about? There's a rule saying you can't have deepfake voices or videos, so they flagged it and took it down. But is a voice representation "fake" when you are the person choosing to use it as a proxy for yourself? Is that fake, or is that an extension of your real identity? I have a compliance team at DoNotPay of real lawyers, and they are always very upset with me, because I'm always pushing the boundaries. And that was a good point: we decided that, to reduce our exposure in this experiment, it would be better if I was the one who did it. That's why we did it that way, because at least there would be a core argument that I was just calling them myself and using an assistive technology. At the Media Lab, when we think about artificial intelligence, we focus on what we basically call cognitive extension: looking at ways that it can, rather than replace people, actually expand and extend our capabilities. And I would say that what you did there is right in the center of one of the things I, at least, have in mind about how this type of technology can help people manage this myriad of relationships we have as consumers with all of these companies and government agencies and other organizations, all of whom are using AI. What about us? Can we use it too? Yeah, so the courtroom stuff is an experiment; it's borderline illegal, so we're not making a product out of that. But we have several really exciting AI products coming out. One we released yesterday summarizes terms and conditions.
The next one coming out lets you upload a medical bill, and the AI will use the No Surprises Act to dispute the bill. I think it's really good for people for two reasons. The first is that a lot of people can't afford to get access to their rights; these big companies have a business model of concentrated benefit but spread-out harm. I think we discussed this in the last call: Comcast can charge a million people $10, and they make $10 million. It's great for Comcast, but the people being charged $10 or $12, like in my WiFi case, don't have time to call up Comcast and waste their time over $12. That's a great job for software. And then finally, I think it can really make access to justice affordable, especially with these more expensive cases like medical bills. There's a question here of whether the AI is sort of its own entity, you know, let's say an OpenAI or an Anthropic entity, versus the AI being used as a proxy or an agent of a person, and therefore having the affordances of the rights and obligations and roles of that person, a consumer. But instead of a consumer, let's talk about lawyers. You know, it's law.mit.edu; we love technology and law, and part of that is the practice of law. How do you think this is going to play out in a litigation context, when lawyers are using the technology the way you have in mind for this next activity, basically having it provide information for them? Would this be deemed consistent with the attorney operating under their license, or would it potentially, as I'm seeing in the chat, fall under that chilling phrase, the unauthorized practice of law? So, just to give some more context: in December, over Christmas, I tweeted out an offer on Twitter, and it said, does anyone want to be the first-ever AI court case? We'll pay even if you lose, and we'll actually even throw in some additional money for the risk of contempt of court and other things.
The tweet was seen by millions of people, and I had 300 different offers from people wanting to participate. Our team looked through all of these cases, and we were really looking at three things. The first is wiretapping laws: for the AI to even process what the judge or someone else is saying, you're going to have to record it and broadcast it. Some states are one-party-consent states, where just one person can record, but other states require everyone who's being recorded to give their permission. That was the first thing, and it ruled out a lot of cases. The second thing was, as you said, the unauthorized practice of law. Some state statutes, like California's, are very broad: entities, corporations, anyone can commit the unauthorized practice of law, so it's a gray area in places like that. But the way these statutes are written, no one could ever have imagined that there would be robot lawyers. In some states, the statutes are very specific to a human being pretending to be a lawyer; they were written for the olden days, when mechanics were pretending to be lawyers. They didn't really have the concept of DoNotPay in mind. So there are some places where it's completely legal, and we're not too worried about those. And then finally, there are local courtroom rules: some courtrooms, like the Supreme Court, ban electronics; in other courtrooms, you're allowed to have electronics, and so on. It's funny you mention mechanics, of all things, because we have been really fascinated by the potential of what we call legal engineering, or basically the mechanics of law, and we think that in the information age, those mechanics are going to be a really good skill to have in the digital economy. So it's a particularly poignant example. So, how do you imagine...
I mean, we're obviously in a time of early experimentation, and I think it's safe to say you're a leader when it comes to creative new use cases. Can you help me look over the horizon a little bit? I'm sure you've been thinking about this, but what happens after the first wave or two of evolution and adaptation of this technology for, let's say, legal practice in the courtroom, which is such an interesting, dramatic scenario? How do you think this technology would be integrated as a matter of course, taught in law schools, with rules of procedure and courts that recognize its place? Will it be sort of like a laptop on people's desks? Will it be speaking on our behalf in real time? How might it play out in practice? I think people should have a right to have AI advise them in courtroom hearings if they're a pro se litigant. As of right now, no state allows that, but our goal with this case, especially if we win, is to make the point that this is an access-to-justice issue, and that it can open things up. I think it's also an accessibility issue: a lot of people struggle to read all of the laws and understand all of the text, and AI can help them overcome that on an accessibility front. So maybe there could be some ADA litigation around allowing AI in courtrooms, which I would be excited about. But the problem is that the people creating the rules, the bar associations, unfortunately have an incentive to keep prices high. That's the pessimistic argument. The optimistic argument is that there's not a single lawyer who's going to get out of bed over a $500 small-claims-court case. So this is really an underserved need, and perhaps it's not even about replacing lawyers; it's all about expanding access. There will be some lawyers who should be very worried, like the ones you see on billboards.
The Saul Goodman types from the show Better Call Saul should be worried, but others don't really have to be. The people who create regulations should be forward thinking. In particular, it raises the question: when other sides are using the power of these tools, and you are being artificially restricted from using them, are you in effect being handicapped? Maybe there are some new interpretations of the ADA, and new expectations reflected and supported in regulation and procedure, that we're going to have to look at adopting. Speaking of that, I want to come back now to another big-picture, over-the-horizon concept that I think your early work has raised. With the wire fee refund and the Comcast bill: for the wire refund you had to go through a kind of phone tree, and with Comcast I think it was entirely the chatbot on the Comcast side. Do you imagine an ecology where consumers have AI-based technology that is sort of the inverse, or the reciprocal, of the services companies and agencies run at large scale, so that we basically have more general, more standard types of APIs and interactions, and maybe guardrails or boundaries for the context of certain interactions? How do you see it playing out when we have bot versus bot between consumers and organizations? So, the AI arms race has just begun. We've seen this for the past few years at DoNotPay, where every action we take has an equal and opposite reaction from the companies. We're going to see things like voice verification: they're going to use AI, and a lot of banks already do this on the back end. They don't tell you that they're doing it, but if it's not your voice, they mark the call as suspicious, so you're already on a losing front. The good news is that DoNotPay is much more motivated than the average Comcast engineer, and in the past we've succeeded in these arms races.
Another example: when we started sending in parking ticket letters, the government started ignoring letters that came from DoNotPay. So we randomized the letters, and then they stopped ignoring them, because they couldn't be sure they were coming from us. There are all these steps that are going to be taken on both sides. Regarding the Comcast chat specifically, you can't even tell whether it's a bot or not. I think it was a bot for part of the conversation, but then a human being for the rest. And even though it might not have been a bot towards the end, the customer service agents are unfortunately just acting within a script; they have a very set range of parameters within which they can authorize a refund or not. I think one of the biggest insults in life going forward will be, "You sound just like ChatGPT." And unfortunately, the customer service agents already sound like ChatGPT, whether they are or not. So this will free up work for them, and also free up work for consumers, and the bots will just do the hard part of getting the $12 back. Outstanding. So in effect, maybe you could imagine a kind of funnel where the consumer can just look at the ultimate result, the question to be asked or the selection to be made, and not have to go through all of the rigmarole to get there, maybe in a dashboard or something like that. Is that what you're getting at? And the good news is that there are a lot of rights people have that are enshrined in federal law. For example, if you have an agent dispute something on your credit report, then just because the dispute comes from an AI, they still can't ignore it under the law. The laws are written in terms of format, if it comes in as a letter or in a given format, they have to respond, so they can't gatekeep a lot of these use cases, and that's also helpful for us.
There's an interesting set of applications of this technology for consumers. We've talked about the help desk context, and we've talked about litigation, especially pro se; I was asking about lawyers, but you very appropriately went to people who aren't represented by lawyers, where the access-to-justice case is very compelling. What other contexts do you think this technology could be useful for? I think it's all about going from reactive to proactive. What I mean by that is: right now, people come to DoNotPay with a problem, like, "I want to get a refund for the in-flight WiFi." But in the future, the AI will be so good it will save you money in the background, like a true general counsel. Walmart has a general counsel working for its best interests; I think AI lawyers will do that for you. They'll be looking at your bills automatically and figuring out ways to fight back, and you can just relax; you don't even have to think about it. In terms of specific things we're working on, for example on the medical bill side: there's this amazing law called the No Surprises Act, and it means that hospitals have to publish all of their prices. The problem, in typical compliance fashion, is that they just publish these obscure PDFs, complying with the letter of the law, not the spirit of the law. So what we're doing right now is having AI go in and crawl all of these hospital websites, take all of their information, and put it into a standardized format. We're actually building an AI hospital price comparison website. So I'm excited by those sorts of use cases: understanding information, presenting arguments, and also just figuring out the things that you don't have time to look at yourself. So we've got a question now from one of our longtime collaborators, and also an advisor of MIT's Computational Law Report, Brian, if you're listening; we know him as cool Brian.
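As an editorial aside, the hospital-price standardization step described a moment ago, taking inconsistently formatted disclosure files and mapping them onto one schema, can be sketched as follows. This is not DoNotPay's code; the field names, aliases, and schema are illustrative assumptions.

```python
# Illustrative sketch of normalizing hospital price disclosures.
# Column names vary from hospital to hospital, so several known
# variants are mapped onto one standard {procedure, code, price} record.

FIELD_ALIASES = {
    "procedure": ["procedure", "description", "service_name"],
    "code": ["code", "cpt", "billing_code"],
    "price": ["price", "gross_charge", "standard_charge"],
}

def normalize_record(raw: dict) -> dict:
    """Map one hospital's row, whatever its column names, onto the
    standard schema; fields absent from the file become None."""
    lowered = {k.lower().replace(" ", "_"): v for k, v in raw.items()}
    record = {}
    for field, aliases in FIELD_ALIASES.items():
        for alias in aliases:
            if alias in lowered:
                record[field] = lowered[alias]
                break
        else:
            record[field] = None  # missing in this hospital's file
    # Prices arrive as "$1,234.00", "1234", etc.; coerce to float.
    if isinstance(record["price"], str):
        record["price"] = float(record["price"].replace("$", "").replace(",", ""))
    return record
```

For example, `normalize_record({"Description": "MRI brain", "CPT": "70551", "Gross Charge": "$1,200.00"})` yields a record comparable across hospitals; the real pipeline would also need per-file crawling and parsing, which is where the AI does the heavy lifting.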
He asked: as he recalls, in the amazing Wells Fargo demo, his words, and I agree, you simply asked for a refund; you didn't provide any arguments on your own behalf. Would that have been possible, or might it be possible in the future? Yeah, it definitely is. In the Comcast example, the bot provided arguments about FTC statutes around quality of service. There's a negotiation angle, and there's also a legal angle, and if you combine them, you can have success. We should have provided some arguments for the wire fees, but we were worried that they could tell it wasn't a real voice, so we limited what the AI could say as well. There were lots of constraints, but in the Comcast example we were certainly citing FTC statutes and such. So I'm thinking, as you've been speaking, about this idea of real versus not real, which I think is incredibly superficial and backward-looking in some ways, and in need of a full and fresh rethink going forward. On that, I just want to pose a question and invite you to go anywhere with it. I could see you were about to say something, so don't lose that thought, but the question I have is this: as chat centers and courts and other official processes start to adapt this technology, would it be useful to have a kind of recognition that sometimes people are going to use this technology to exercise our rights and engage with these systems, and to have a kind of disclosure, like a field we could set, saying this is coming from my authorized electronic agent, which is this bot technology? Then we could dispense with this whole question: it is real; it also happens to be a bot that I've authorized. I think there will be rules around that. OpenAI, for their GPT-3 Davinci model and others, have guidelines that all businesses using their technology have to follow.
And one of them is just what you said: if you have a bot, you have to disclose that it's a bot. That's not very helpful for us, because if we say to Comcast that this is a bot, they'll just end the conversation. The way we get around that is, as I mentioned earlier in the call, we use GPT-J, which is an open-source model. So we use OpenAI for the heavy lifting on the back end, but we use open-source models to actually communicate, to get past these kinds of gatekeeping rules. I think there's an argument to be made that if someone says it's a bot, it loses 90% of its effectiveness. I used ChatGPT to write me a thank-you note for a Christmas present; if it says "this was generated by AI," then the meaning of the thank-you note kind of goes away, and the same could be true for these legal cases. Indeed. Yeah, so there's a lot more to do in the future as we learn how to adopt, and appropriately adapt to, the infusion of this technology, for consumers and governments alike. So can I just give you this opportunity, a sort of free swim, to close with any thoughts or challenges or ideas that you'd like to leave with people, including questions you may have for us? I think this technology is overhyped and underhyped at the same time. It's overhyped because ChatGPT is really good at holding a conversation; it's really good at writing thank-you cards and this generic stuff. What we've found at DoNotPay is that it actually hallucinates regarding the law: it makes up laws and things like that. The reason we've been able to use the technology successfully is that we have all this training data from the past seven years. Basically, instead of saying "write a dispute to Comcast," we say "based on these 1,000 documents, write a dispute," and the quality is much better if you give it that context, if you almost retrain it.
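As an editorial aside, the grounding approach just described, selecting relevant past documents and putting them into the prompt rather than asking the model cold, can be sketched in miniature. The corpus, scoring function, and prompt wording below are illustrative stand-ins, not DoNotPay's actual retrieval system.

```python
# Minimal sketch of retrieval-grounded prompting: rank past dispute
# letters by crude relevance to the task, then prepend the best ones
# to the prompt so the model works from real documents, not memory.

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_grounded_prompt(query: str, corpus: list, k: int = 2) -> str:
    top = sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n---\n".join(top)
    return (
        "Based only on the following past dispute letters:\n"
        f"{context}\n\n"
        f"Task: {query}\nStick to the facts; cite only laws that appear above."
    )

corpus = [
    "Dispute letter citing FTC quality of service rules for a Comcast outage refund.",
    "Parking ticket appeal letter citing unclear signage.",
    "Wire fee refund request letter to a bank.",
]
prompt = build_grounded_prompt(
    "write a dispute to Comcast about an outage refund", corpus, k=1
)
```

A production system would use embeddings rather than word overlap, and thousands of documents rather than three, but the shape, retrieve then generate, is the same idea as "based on these 1,000 documents, write a dispute."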
So I like to say ChatGPT is a good high school student, but you have to send it to law school. In the context of this discussion and the law, I think it all depends on the training data, and on making sure you have really good data. And wish us luck for our court case next month! That's it for this MIT workshop. On behalf of everybody at law.mit.edu and all of our participants, we truly do wish you luck, and we hope that you'll come back and join us once you've gone through some of these early experiments, to let us know how it went and what's next after that. Sounds good; if I'm not in county jail, I'll come back to present. Correct, and you should definitely have a bot to defend your right to stay out of jail. Thanks again, Joshua.