Hi, everyone. Welcome to April's Idea Flow. As I mentioned earlier, we're going to be talking about independent algorithmic auditing, and we're joined by Ryan Carrier. Ryan is the founder, executive director, and chairman of ForHumanity. He has over 25 years of experience in business, finance, and risk management, and this has given him a really unique perspective in thinking about how to manage risks regarding algorithms and all the things that Dazza talked about earlier. Today, Ryan will be talking to us about what algorithmic auditing even is and what his vision is for a future in which we need these sorts of audits, and later on Dazza, David, and I will have a brief conversation about some ideas we have about auditing as well. So, I'll give the floor to Ryan right now to get us started.

Thank you, it's a pleasure to be here, and I'm looking forward to the discussion. The background on ForHumanity is that I started it as a nonprofit about five years ago and wrote a lot of words about the future of work, technological unemployment, rights and freedoms in the fourth industrial revolution, data ownership; it really covered the spectrum of risks that AI and autonomous systems pose to humanity. And then I settled on what I think was most relevant, most impactful, and had a robust track record of action: this concept of independent audit of AI systems, bringing forward this idea of a trustworthy system, of built-in trust. What some people may not realize is that in finance we have a 50-year track record of third-party independent audits, and the trust that brings to the equation is really robust. People build entire businesses, entire business models and industries, on financial audits and accounting: without even extra thought they receive these financial reports and simply execute, they deploy.
And so there really is a high degree of trust built into the system, because of the way the system operates in terms of checks and balances and conflicts of interest, ensuring independence and objectivity. What we want to do is replicate those same features, that same construct of trust, in AI and autonomous systems, in five key areas: ethics, bias, privacy, trust, and cybersecurity. To do so, we need to craft auditable rules, and auditable rules have some key features, such as having to be binary. And the art of what we are working on, for example, is that we aren't kings, and we aren't legislators unto ourselves. In fact, I recently had a conversation with someone at the GSA, at the US federal level, and he said, "Oh, really, what you are is a secretariat," and I really like that nomenclature for what we're doing. We're drawing together, we're now over 325 people from 42 countries, with 32 fellows, and we are crowdsourcing audit criteria. What we do is turn this into audit rules, these binary compliance criteria, almost a checklist, but we submit them back to the regulators, the governing bodies, the authorities, and basically say: are you good, does this represent what you mean? Or maybe we didn't quite get it right; send it back to us and we will rebuild and fix and iterate. That's the crowdsourcing nature of the process, but in the end we're really just providing a service to these governments to take the law. Take GDPR as an example. Right, it's the law. It sits there. We have crafted audit criteria at the request of the Information Commissioner's Office of the UK, which is still governed by GDPR. Now the law sits there, and what happens is someone might go out and break it, on purpose or by accident, it doesn't matter, right? They break the law.
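As a rough illustration of what "binary, auditable" means in practice, here is a minimal sketch of such a checklist. The rule IDs, field names, and criteria wording are my own hypothetical examples, not ForHumanity's actual criteria format:

```python
# Hypothetical sketch of binary audit criteria; not ForHumanity's actual format.
from dataclasses import dataclass

@dataclass
class Criterion:
    """One auditable rule: a yes/no question with evidence attached."""
    rule_id: str
    text: str
    compliant: bool      # binary: True only when proven, no gray areas
    evidence: str = ""   # what the auditee supplied as proof

def audit_result(criteria: list[Criterion]) -> str:
    """An audit passes only if every single criterion is proven compliant."""
    return "compliant" if all(c.compliant for c in criteria) else "non-compliant"

# Example criteria loosely inspired by GDPR obligations (illustrative only)
checklist = [
    Criterion("GDPR-30", "A record of processing activities exists", True, "ROPA v3.pdf"),
    Criterion("GDPR-35", "A data protection impact assessment was performed", False),
]
print(audit_result(checklist))  # non-compliant: one criterion is unproven
```

The key property is the all-or-nothing aggregation: one unproven criterion makes the whole audit non-compliant, which is what makes the rules checkable rather than interpretable.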
And so what they do is they get smacked over the head with the law, right? They get punished by the law; they pay a fine or whatever. But unfortunately, when that happens, humans are already damaged in that equation: the law has been broken, privacy has been broken, somebody has been hurt. What we can change about that, or what we can improve upon, is this: if you take the GDPR and turn it into a set of auditable criteria, and you require the audits in advance, now you can generate better, more frequent, proactive compliance with the law before people are hurt. And so we would argue it's a better application of the law. We've seen this in finance, where tax code and markets-based law are very frequently complied with because they're embedded into these third-party independent audits. So really what we want to replicate is that mechanism, that proactivity, especially in AI and autonomous systems where, with the exception of the recent new EU regulations and GDPR, it kind of still remains the Wild West in terms of oversight, governance, accountability, and so on. So that's what we hope to bring. We're growing like crazy: when Dazza and I first met, almost exactly 12 months ago, ForHumanity was me. Now we're 325 people, growing at 80 to 100 a month; we'll be 500 in a few weeks and 1,000 by the fall. We're focused on people. We don't invite corporations in, we don't invite governments in; people can come and bring those perspectives, and that's fantastic, but what we are is built from people up, and the rules and criteria we set are meant to protect people through the enhancement of governance, oversight, accountability, and trust. So that's really the nutshell. Hopefully it's a decent overview of what we're trying to do.

Thank you so much for providing that very crisp overview, Ryan. And the first thing I want to do is just reflect on your success.
My audit of your remarks suggests that you have successfully modeled what we have in mind by a flash talk, and it's hard to translate so much into such a small nugget. But the idea is to use that to catalyze questions, conversation, maybe new ideas. So with that in mind, I'll go ahead and try to throw the first lightning bolt; let's see if we can get started. When we talk about audit, and especially independent audit: number one, why does it matter whether the audit is independent versus in-house? Couldn't you have your own in-house auditors apply the same rules, or maybe not even auditors, just, you know, security consultants or program managers, and have it be part of the security and compliance checklists that they do? What's so special or important about the independent audit? Setting you up here a little bit. And then number two: how can you objectively measure that you've got the criteria right, especially when you're crowdsourcing it? In a sense, how can you connect a successful audit finding to ensuring the results that we want, in other words avoiding the bias and avoiding the harm to people that results from reliance on AI systems? Those are my two starting questions.

Two massive questions. We'll come back to the criteria and the process second, but when we talk about audit, there are multiple kinds of audit. We put out a piece called a taxonomy about three or four months ago, and the taxonomy explains the difference between internal audits, independent third-party audits, assurance, and assessments. The reason we had to do that is there were people already in the AI marketplace saying "we do AI audits," and it's a misnomer, or a misstatement, or a misappropriation of the term.
An assessment, which is what's currently going on, is where firms with a specialty engage companies and create a feedback loop, a process for evaluating and improving systems. It's very valuable; there's no criticism of that in any way, shape, or form. But that feedback loop, and even the contractual relationship, is between the entity and the company. Okay, and that's called an assessment. Remember, we have a 50-year track record of these words and terms being used over in finance, so just because AI is new and cool doesn't mean they get to steal words and use them the wrong way. So assessment is what's happening over here. The audit is very unique, especially when we use it with a capital A. Audit requires three key features that do not exist anywhere else, internally or in an assessment. Number one, auditors are certified practitioners, of which there are none currently in the AI world. Number two, when you do an audit, it has to be against an independent third-party set of rules. And the third part, and this is what most people don't realize about audit: audit is a very special contract. It's actually a three-party contract. Auditors do their work on a target of evaluation. They do not work for that target of evaluation; they do their work over the target of evaluation, and instead they do their work on behalf of intended users or, more often, the public, which is why they're called certified public accountants. Most people don't realize that's where the word "public" comes from; that's who the work is actually done for. There's a beautiful tension between the target of evaluation and the auditor, and that tension is where we build this infrastructure of trust. Say I'm the auditor, and I come in to audit your firm, and you say, we've got a new algorithmic system. I'm like, great.
Did you set up the algorithmic risk committee? And you're like, yep, Megan and Renata and Walter are on the algorithmic risk committee with me. And I smile at you, and I know we're going to go get lunch later; we've been doing this for years. But I've got to lean in and say: you've got to prove it to me. Why is that? Why do you have to prove it to me? Because as your auditor, as an auditor who has a responsibility to the public, if I assert false compliance, if I assert compliance where it does not exist, do you know who's liable for that? Me. That's what's different from the assessment and from the internal conversation about how I'm improving your system: there's no liability associated with those. When I'm the auditor in a proper third-party audit, I do have liability if I assert false compliance. So I need to know; I need to have it proved that you have complied with the audit rules and criteria. It also highlights why audit rules are unique and special: we can't have gray areas, because if there's a gray area on an audit compliance question, what am I going to say? I'm going to say non-compliant, because it protects me. Right? So the nature of this audit relationship, this lovely tension, is that I can get no other remuneration from you. I cannot do pre-audit services; this is Sarbanes-Oxley, we learned this from Enron and WorldCom in 2001. I get no other remuneration, only the fees associated with providing the audit, and I wear the liability if I assert compliance that doesn't exist. That system is created when I am independent, in the legal sense, and when I am objective. Then, if the auditor asserts compliance, the public can hear that and basically say, yeah, they probably are compliant. You know why? Because there's no reason for me to assert compliance where it doesn't exist: no remuneration, no other way to achieve that sort of benefit.
Now, no system prevents fraud and malfeasance, so fraud and malfeasance can still occur, and Enron proved that, right? But the system still had sufficient checks, balances, transparency, and so on that Enron was eventually caught out. And then with Sarbanes-Oxley we enhanced the system and made it even better and even more robust, because what happened, and some may not know this, is that the same firm was both the auditor and the pre-audit service provider. So they were complying with their own rules, and this is why you have to have that independent third-party set of rules. If you're the auditor, you may not be the pre-audit service provider; if you're the pre-audit service provider, you may not be the auditor. That's the law. Okay, but here's the other part: if you're either one of those things, you know what you don't get to do? Make your own rules. That would be a conflict of interest, and therefore we need an independent third-party body creating those rules. That's the role ForHumanity aims to play in this equation: on behalf of people, create a set of criteria that are submitted to regulators, legislators, whoever the legal authority is, to basically say, does this represent your views? If it does, then these audit criteria can be used; we would license them out to any qualified auditors, any qualified service providers, who can provide these solutions. And then, Dazza, you also mentioned internal audit. Internal audit is something that grew out of the original financial audit. For 20 years, once the SEC mandated 10-Qs and 10-Ks, auditors would walk in and every company reacted the same way: ah, it's the auditors, it's going to be like a bad doctor's visit, it's going to be painful. Then the Treadway Commission came along and created the COSO system of internal risk and controls, and it changed how we respond to auditors.
So now when the external auditors walk in, we usher them up to the third-floor conference room, we lock the door on them for six weeks, we slide food under the door three times a day, and we pump compliance at them, right? It's a much easier system when it's compliance by design, and that's the role the internal auditors play. They build this system so they know the internal solutions that feed this funnel are objective and independently verified before they're handed off to the external auditors. So internal auditors play a hugely valuable role in meeting external auditors in a non-painful way. That was like two-thirds of your question; I'm wondering if I should take a pause or if we should circle back.

Probably good. And the second part, if I may, TMA, related to basically having a rational basis for establishing what the audit criteria are, especially using this additional innovation of crowdsourcing, which is, you know, not how I think of GAAP and IFRS rules coming about, the other source of audit rules. And to that, there's a crisp question that has been posed by Walter, who's in our batter's box to go next. But first I wanted to open the floor to TMA, who was about to say something, and also possibly David, if he has anything to interject at this point before we get into further questions.

Yeah, I was just thinking, when you were mentioning the role that ForHumanity is going to have, it sounds like ForHumanity's role will be similar to that of FASB. And I was wondering, which government agency do you see playing the role of the SEC, helping create the laws and enforcing the audits?

Yeah, it's a fantastic question. I would characterize FASB...
Can I just ask, could you please, for the uninitiated, break down FASB, who they are, and how they relate to the creation of GAAP and everything in the government context? And if we've had any other acronyms, people should just come off mute and say, "What were you talking about?" Let's just have a moment of definition from Ryan so we can all move forward together.

Okay, so there are a few we've probably already used: GAAP, which is Generally Accepted Accounting Principles, and IFRS, which is International Financial Reporting Standards. Those are the two main accounting schemes used for independent third-party financial audits. GAAP was the original, created about six months in advance of the precursor of IFRS, which was an iterated process. GAAP was created, and I'm going to tease what you said, by a crowd, but it was a crowd of white dudes who met at a country club and who worked in the accounting industry, so what I would call these days a bad crowd. Okay, not necessarily diverse points of view. But they had a mission, which was to take these disparate accounting concepts and blend them into a uniform system, and they did a decent job of that. What they established from that work is FASB, the Financial Accounting Standards Board. FASB has ownership, or governance, of Generally Accepted Accounting Principles, so when they have to be changed and updated because tax laws or markets laws change, FASB adjusts, builds those changes in, has a public hearing period where people can file opinions and so on, interprets decisions, and moves from there to making the changes in the accounting system. We aim to play that same role, but to be a more robust, transparent, inclusive crowd. And also, those people who started FASB and started GAAP accounting were from the industry, and it wasn't an all-inclusive kind of thing.
Whereas for us, literally anyone can come and join the party and have their voice heard, so we think it's actually a more robust process. Once GAAP accounting was established and the industry had agreed to it, which they had within 18 months, the SEC mandated that GAAP accounting had to be followed by all publicly traded companies. And it created this whole industry, what is now the Big Four, though obviously there are a lot more accountants doing third-party independent financial audits. It created this industry and this mandate, and that was replicated around the world very quickly: within a couple of years, most stock markets and individual countries had similar mandates to follow either GAAP or IFRS. Who is the SEC in the equation of AI and autonomous systems? Well, I'll tell you a quick funny story. I was having a conversation with the senior legislative aide of Pete Olson, who sits on the AI caucus, and we were talking about independent audit, and he stopped me in the middle and said, "Ryan, who's responsible for this?" And I said, damned if I know, because here's the problem: AI and autonomous systems exist across the whole economy, in every different place. I'll give you a quick rundown of conversations we've had. The DOD cares about AI in hiring. I had a conversation with the Federal Reserve, which cares about creditworthiness. The FTC is trying to put in place coverage for children, and beyond that they probably have the biggest footprint in the space. We're already doing work with the Joint Artificial Intelligence Center of the DOD, which has been tasked with taking AI ethics principles and turning them into auditable criteria for every AI and every autonomous system they cover. The GSA cares, the VA cares; pretty much everybody cares. NIST is kind of sitting in the center of this because they're kind of the cool kid in terms of technology, but even then, they don't create auditable rules.
And the answer is: who knows, in the US. So part of our approach at ForHumanity is to try to talk to all of them, to put it out there that we're happy to provide this service, this secretariat service, of trying to craft these criteria, trying to translate law, which is not written in a binary way, into binary, unambiguous criteria, and then resubmitting it back to the authorities. In the end, if I took a guess as to who's going to own this, it's going to be a triumvirate: I think it's going to be the FTC, it's going to be NIST, and the DOD is going to be allowed to play its part as such a dominant fixture in this, but I think NIST and the FTC are probably going to lead the way.

So, you know, one of the things about freedom of speech is we never want to compel speech; it's not just about censorship. So I'll just make this an invitation with no pressure: Mr. Horrigan, as part of your debut splashdown as our co-host going forward, do you have any reflections or thoughts or questions on what Ryan's laying out?

Once again, thanks for having me, Dazza. And Ryan, I thought it was fascinating; the idea of regulating the robots is always interesting based on the work that I do. I'm in-house counsel for a software company, but I do a lot of continuing legal education courses and speaking on data privacy, data protection, and the limits of AI, because we use it to go through people's very personal data in litigation. One of the things that is interesting is that you mentioned a lot of international perspectives here, and as you were talking, I kept thinking, you know, this is all great, and it would be great to have a standard and to have this independent auditing function.
What do you think is the chance of international bodies agreeing to this? Because, you know, take the use of artificial intelligence in litigation. It's been relatively kumbaya, surprisingly so. The United States was the first to allow litigants to do technology-assisted review, machine learning to go through documents, so that lawyers can stand up and do their certification under the federal rules and say, well, yes, we have reviewed the corpus of documents, even though it was the robots really reviewing a lot of the documents. So the United States was first to accept that; Ireland followed, the United Kingdom followed, Australia followed. But US litigation is extremely controversial with the French especially, as you probably know. So what do you think the chances are of getting people to come around and agree to this? I mean, you know, maybe the GDPR works, and maybe California agrees with the CCPA and the CPA, all right, but I don't know about the rest of the country. What do you think?

You're absolutely right, and so our focus is 100% on jurisdictionally sensitive criteria. GDPR is the law of some lands, and therefore we have criteria that suit it. We could create California-oriented criteria very quickly, which would be similar but not the same. This is one of the more important issues: protected category variables. As defined by the Civil Rights Act of 1964, protected category variables cover a whole set of things, including bias and EEOC kinds of issues. The world will agree on some of the basics of what these protected category variables are, but there is no chance, anytime in the near future or even a distant future I can imagine, that all jurisdictions will agree on what the protected category variables are. For example, in the UK, socioeconomic status is a protected category variable; it definitely is not in the United States.
So even if we agree on everything, right, race, age, color, gender, sexual orientation, whatever it is, we're still going to have these little differences that exist because of legacy systems or even just differences in culture. So we make no attempt; we're not out there trying to make a global standard. I think that's a silly approach, to be honest. Instead, all of our criteria will be jurisdictionally sensitive. Now, the net result of that is that 20, 30, 50, 60, maybe even 70% of it will be highly similar, or even the same, but we're always going to have tweaks around the edges to fit whatever the local law is. Let's take GDPR. There are elements of the US Congress who laugh at GDPR; the right to be forgotten is just hilarious to people, okay, at least in some political parties. But it's the law under GDPR. What that tells you is that GDPR, as it stands now, has zero chance of ever happening in the United States. But that doesn't mean large chunks of it can't come through, and they have started to come through, whether in California, in New York State's privacy act, which is being considered, or in other jurisdictions that are starting to take on pieces of this, which are highly valuable. So our approach will always be to serve whatever the local jurisdiction is, whatever that means. And over time, because we will do this all around the world, and we will (it's a lot of hard work, but we'll do it because it's important), you'll begin to see a normalization where the things we agree on, again, like I said, 30, 40, 50% of it, we may agree on completely. We're always going to have those little bits around the edges, for certain, that are never going to be normalized. Does that answer the question, David?

Absolutely, thanks.

Hi, I'm not sure who was speaking. Could you introduce yourself? Oh, that's perfectly fine.
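The jurisdictional overlap Ryan describes can be pictured as per-jurisdiction sets of protected category variables: a shared core plus local tweaks around the edges. The category lists below are simplified illustrations only, not a legal reference:

```python
# Illustrative sketch: protected category variables per jurisdiction.
# The category lists are simplified examples, not legal advice.
PROTECTED = {
    "US": {"race", "color", "religion", "sex", "national_origin", "age"},
    "UK": {"race", "religion", "sex", "age", "socioeconomic_status"},
}

shared = PROTECTED["US"] & PROTECTED["UK"]    # the normalized core both agree on
us_only = PROTECTED["US"] - PROTECTED["UK"]   # tweaks around the edges
uk_only = PROTECTED["UK"] - PROTECTED["US"]

print(sorted(shared))
print(sorted(uk_only))  # socioeconomic_status, per Ryan's UK example
```

Jurisdictionally sensitive criteria would then be generated from each jurisdiction's own set, with the intersection explaining why a large fraction of the criteria end up highly similar across borders.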
Oh, if my connection is poor and I suddenly drop, David or TMA, can you please take the reins? But I see that Bev and Walter had some questions as well. So, in no particular order: Bev, did you want to pose your question first, on the role of the auditor and this new term you've seen with respect to second parties?

I often audit cybersecurity policies with regard to ISO and other SOC-type audits, and I came across this use of "second-party audit." I'd never heard that before, and I wondered if you knew what it was. If you do, can you tell me? And if there is a second-party audit, is there a first-party audit, or a fourth-party audit? Where does this usage of "third-party audit" come from? Thank you.

Yeah, so first-party, I know, is internal audit or some sort of self-reflection, right? Third-party, I think, comes from the ability to analyze the interaction between two parties and have a third party independently review that transaction; that gives you third party. Second-party, I'm not sure I know what that is, and I'm not going to pretend that I do. I'm not certain what that would be, so I can't provide any further advice than that.

I could hazard a guess. In contracts I've been involved with in the last several years, especially with security audits, and I'm thinking of the insurance industry and some other places, there are standard terms that require or permit the other contracting party in a B2B relationship to audit compliance, or other assertions, of their counterparty. So, you know, perhaps it's a partner or a contractual counterparty. Maybe. I don't know.
I'm just guessing here.

That makes sense, and we actually have that. One of the provisions in our licensing arrangements is that when we license out to auditors, there are a couple of things we want to keep them from doing. Number one, we want to keep an auditor from also being a service provider; we want to be able to understand what their revenue stream is, and with whom. So we have audit rights in the license to check on people, to see if they're getting excess remuneration from an entity with which they should be pristine. That's one part. Another part is that as this industry begins to mature, what will naturally happen is a pre-audit service provider is going to finish their work and say, oh, you should go get audited by this group, because there just aren't going to be that many people and entities out there that can do it. So also built into the license is that as this grows, we want to make sure there's diversity. It can't always be: I'm this pre-audit service provider, and I'm going to send everything to this auditor because I know they're going to be good. Again, we view that as a collusive factor, or some measure of conflict of interest. So built into our criteria is that you must have a diversity of either pre-audit service providers or, if you're a pre-audit service provider, of resulting auditors, as the industry grows and matures. So again, that might be an example of a second-party audit: we have those audit rights to ensure fulfillment of the contractual obligations.

Thank you.

And Walter. Thank you. Walter, you had a question with respect to the crowdsourcing and maybe other places it could be applicable.
Yeah, so my understanding from your talk is that you're focused on retrofitting already-existing laws and forming them into binary checklists for the independent auditing of algorithms. Do you anticipate that, going forward, if this principle were adopted more generally, governments would take on that function and bake these binary rules directly into the laws as they're writing them? And if so, is that generally desirable, or does it violate certain auditing principles?

There are a couple of questions in there. First off, when we craft audit criteria, we are trying to mimic the law and make sure the law is well represented in the criteria, but we will also sometimes provide best practices as well. So it may not be law; it may be before the law has caught up. We all know the law isn't the fastest, right, in terms of keeping up with technology. So there are times when we will provide best practices, and as a function of that, we already have lawmakers asking us questions like, well, what should we do about deepfakes? What should we do about these other new, interesting challenges of law? And we will provide suggestions and opinions and some policy ideas. Inside ForHumanity, we have a team called Legal Innovation, and it does exactly this kind of work: it tries to identify where the law has not quite caught up yet, and therefore what we suggest or what we think. Now, should the law be binary? No, I don't think so. When we get down to binary, it's extremely inflexible. We do that on purpose, to achieve compliance or non-compliance, but the nature of what we do is going to have to be iterated time and time and time again. We have the ability to do that, the agility to do that, and the resources in the team to do that, but the law does not. The law wants to be, well, "generic" is not the right word.
The law wants to be a little bit more circumspect, right, and to identify issues and challenges and boundaries without getting too overly specific and thus inflexible. So I think this is the balancing act, and actually I think it's a very nice marriage: combining the legal process, which I'm not sitting here criticizing even if I say it's slow (it's slow sometimes for very good reason, for a very good process of consideration, or for representing democratic values), with me being able to pull together a team of 40 industry practitioners and say, the law just changed to this, how do we reflect that in our criteria? Which we can do much faster. So I think it's a nice combination of the two that achieves a grander function of protecting humans in two different ways, two ways of great value, to be honest. I hope that answers the question.

Ryan, do you have any lobbying groups you're working with to help move regulatory action in the United States forward? Because it just seems like a massive undertaking to be trying to create these policies, and then also working in Europe, in the US, and elsewhere to try to figure out these standards.

It is a massive, massive undertaking. The answer is no, not today. But that's not to say we don't have some of that. First off, we're only allowed a limited amount of lobbying, right? So we aren't in the business of endorsing actual proposed laws or anything like that. We will help in the discussion and the formation, and from an educational point of view, but not in the official lobbying or endorsement phase. Now, we have 325 people inside ForHumanity as volunteers, and what they do outside of that is their business. So we may have people who are in contact with senators, or state representatives, or lobbying firms, or law firms, or any sort of entity that might have some real oomph behind their opinion.
We would encourage that. We simply want to take the tools that we have and, you know, lay them in their lap and say: if you view this as valuable, please take it and talk to people about it. Can I see lobby firms getting behind this? Very possibly. Could I see lobby firms getting very against this? Absolutely. I have a picture in my head of sitting next to Google in front of Congress talking about this, and Google basically saying, we can't do that, it's going to be too hard. And ideally we are arming our representatives and senators to respond in the following way: this process is open, it takes anyone, anytime, anywhere; if you didn't like it, you should have been involved from the beginning.

Especially now that we've seen how Facebook and Google have been treating their internal ethics boards, pretty much stalling any sort of progress in terms of auditing their own algorithms, it just seems like there's no way these companies themselves are going to be open to the audit. I feel we need something like the SEC saying: as a publicly traded company, you have to have these audits.

Our goal is a legal mandate, the same as was achieved with financial accounting. There's no question about it; that's where we hope to get to, for the very reasons you just cited. But in addition to that, it's funny, we are building what we call a body-of-knowledge repository, because when auditors are staring at something like a code of ethics and wondering, is this compliant, we want them to have something to compare it against. So we were going through what is insufficient, sufficient, and mature for codes of ethics, and the person starting this had put Facebook in as sufficient. Our initial reaction was to laugh; we were like, no way, right, can't be. But here's the funny thing: their code of ethics is really good.
It's very good, very thorough, very complete; it is sufficient. They just don't apply it. So it's different elements of how governance, oversight, and accountability work. And that's the thing: right now, Facebook can have a code of ethics and abide by it or not. Who's going to tell them that they didn't? For whom are they abiding by it, and to whom are they accountable? Accountability means that someone must comply with something. That's where we want to get to. As we build this out, there's so much to do: we're going to do criteria for every AI and every autonomous system, everywhere in the economy, that impacts a human. That's a lot.

Yeah, so, sorry, I was just going to add: the way we choose what we do next is where regulators, lawmakers, or some other control body is in a position to say, you need to do this. That's how we pick and choose what we do next. Sorry.

As you were talking I was just thinking, and I don't know if you've already had this idea, but you talked about the Big Four accounting firms, and a lot of these firms now have consulting arms that work with technology and specific algorithms, et cetera. I wonder if the first step, rather than going from regulators down, could be to bring profit into it and somehow convince the auditing companies to include this AI audit as part of their work, and then use that to build momentum longer term.

It's a great question, and I tried that model once already with the Big Four. Basically, I wanted to more perfectly replicate FASB. So I went through a couple of the Big Four, and honestly, at one of them I met, no joke, every single senior person around the world, including the global CEO and chairman. I sat with the global CEO and chairman, had an hour and ten minutes, and got nods the whole time.
We need to do this, yes, we need to do this. And then we finished, and I said, you get to lead this, you can announce it, but, oh, by the way, you have to invite the other three to the table. And that's when the head-shaking started. They're just not in a position, or of a willingness, to recognize that broader scope. And by the way, the global head of assurance, which is the audit side of the house, had said to me not three months before: they're not qualified to do these audits. And he was right. The difference in training between auditing numbers, tax code, and law versus auditing ethics, bias, privacy, trust, and cybersecurity is monumental.

So what that process taught me is that it's actually better to meet the law where it is, with the regulators, in a smaller subset: prove the model in a couple of places first, grow the crowd, grow the capability and process. And here's what's going to happen. The ICO hasn't even approved our first GDPR certification scheme; they promised it in January, and it hasn't arrived yet. Once they do, honestly, it's going to be dominoes, because first off, all of the EU is basically going to say, yep, we approve that scheme. We've also submitted a certification scheme for the Children's Code, a magnificent, gold-standard law in the UK around age-appropriate design, and the demand for protection of children around the world in these spaces is really high. We think that once this approval comes down, it's going to be like wildfire for this one to catch on. So I think that's as good an approach as possible. The Big Four are starting to get involved with ForHumanity, because they know they have to move here, but you're right: it's the consulting side of the house, not the audit side. And that's because the consultants are the ones with the skill in AI and autonomous systems.
We've got a couple more questions that I hope we can get to before we time out. One, from Valeria, is about what the cadence should be; Valeria, maybe I'll let you ask it. And then, speaking of cadence, if we have time, which would be great, the next question would be from Chris, and I share it in part. It relates to this side conversation we've been having, Ryan, on the recent Wyoming law for the decentralized-autonomous-organization type of LLC: an autonomous legal entity, one version of which is algorithmically managed, which may mean the algorithm is making all the material decisions for a corporation. Could independent audit be part of the toolkit for containing the risk of that? And obviously, I know you have opinions about whether that's wise to start with. But first things first: Valeria, thank you so much, you have the floor.

Thank you, Daniel. My question is related to the fact that there are different types of AI systems. For example, the European Union has recently been talking about this problem: there is the recent proposal from the European Commission, and before that there was also a resolution from the European Parliament. It's interesting, because they divide AI systems by risk. There are high-risk systems, for example AI systems that operate on a road, or systems related to credit scoring and so on, and there are lower-risk AI systems, for example video games, which carry far less risk than others. So my question is whether the auditing uses a sort of proportionality principle in the frequency of examination: for example, whether a high-risk AI system is evaluated more often. Is there such a debate, or is it not considered? Thank you.

No, 100%.
Everything we do is risk-based. We actually want to start from a different direction: we want to identify low risk and get it off the table for compliance. We know we have a fixed set of resources, as people and as a world, and we want to identify what is low risk and get it out of the way. When it comes to what the EU proposed, they said high risk and everything else; we don't see it that way. We see systemic risk, in which case we don't even think the system should exist, and then high, medium, and low risk, where high and medium, as long as they have sufficient mitigations, can be allowed to go forward.

I think part of your question is also just understanding how criteria are built. We build them two ways. From the top down, governance: we have a set of criteria that belong to the board of directors, the CEO, the chief data officer. These criteria are governance, oversight, and accountability, and they apply to every AI and every autonomous system, regardless of what it is. After that, we go system by system, all throughout the C-suite: we assign things to the chief operations officer, the chief technology officer, the head of HR, and so on. We identify the specific systems, because we want to look at them to see what is unique about them, again in ethics, bias, privacy, trust, and cybersecurity, that we may need to consider in terms of dedicated criteria. As for how we respond to the new EU regs: on May 17 we're launching a group that will do a series of things. First, we will craft criteria from the top down that fit the EU demand for governance, oversight, and accountability. When we finish with that, we will move to Annex III, and we will go function by function and create audits for every single one of those identified high-risk uses. So I hope, Valeria, that answered your question.

Yes, thank you. Yeah, I was wondering that too.
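The proportionality idea, pairing each risk tier with a review cadence, could be sketched like this. The tiers come from the talk (systemic, high, medium, low); the specific cadences are illustrative assumptions, not stated ForHumanity policy (Ryan later mentions an annual review as the current baseline):

```python
# Review cadences in months; None marks tiers with no cadence at all.
# These numbers are hypothetical, chosen only to illustrate proportionality.
REVIEW_CADENCE_MONTHS = {
    "systemic": None,  # per the talk: should not exist at all
    "high": 6,
    "medium": 12,
    "low": None,       # identified as low risk and taken off the table
}

def next_review(tier: str) -> str:
    """Map a risk tier to an audit-frequency decision."""
    if tier == "systemic":
        return "not permitted to operate"
    cadence = REVIEW_CADENCE_MONTHS[tier]
    if cadence is None:
        return "exempt from audit"
    return f"review every {cadence} months"
```

The shape mirrors the answer above: low risk is cleared out first to conserve fixed resources, systemic risk is rejected outright, and audit effort concentrates on high and medium tiers.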
And just a quick reflection. Lately I've been working a lot in DevOps and automated testing of systems as they get released on nightly builds, and some of those criteria overlap with some of the criteria you're auditing on. It made me start to wonder how much of this we could have as continuous audit, as opposed to events that people get ready for. Some of it obviously needs a team of humans, a review, a conversation and everything, but some of the information, because it is AI, is already captured: what the rules are, how they're being applied, and the result.

Yeah, fantastic question. He froze there for a second. Oh, he's back, though. Sorry. No, no, you're good, I think I heard it all. When we build the criteria, they're built from design to decommissioning, and they include some internal real-time monitoring. For example, take a model that is built with a specific scope, nature, context, and purpose. According to GDPR, if you're using consent as your frame for lawful basis, then the processing has to be consistent with that scope, nature, context, and purpose. Therefore, if your model is a learning algorithm, and it's going to change and move, what we require in our audit criteria are key performance indicators, guardrails around that model, to identify when concept drift has occurred: when you have left the scope, nature, context, and purpose for which you had good consent. So there are criteria all along the way, from design to decommissioning. As firms learn this process, they will build it into how they design and develop. There's nothing we're asking for that isn't part of good design, governance, oversight, documentation, and so on. So on the idea of continuous audit, we're basically in agreement.
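One minimal version of the concept-drift guardrail Ryan describes is a KPI that flags when live inputs have left the baseline the model was built (and consented) for. Real deployments would track many indicators (population stability index, KS tests, label drift); this single mean-shift check is only a sketch:

```python
import statistics

def drift_guardrail(baseline: list[float],
                    live: list[float],
                    z_threshold: float = 3.0) -> bool:
    """Flag concept drift when the live window's mean departs from the
    training baseline by more than z_threshold standard errors.

    Returning True would trigger escalation: the model may have left the
    scope, nature, context, and purpose for which consent was obtained.
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    std_err = sigma / (len(live) ** 0.5)
    z = abs(statistics.mean(live) - mu) / std_err
    return z > z_threshold
```

Run continuously (say, per nightly build, as the question suggests), a battery of checks like this supplies the machine-verifiable half of a continuous audit, while the human review handles everything the metrics cannot capture.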
And on top of that, there are already groups inside of ForHumanity building compliance-by-design systems that want to capture all of this, all of the time: sort of replicating the internal audit function that feeds compliance as needed, except now the compliance evidence is automatically captured and can be delivered to the external auditor for verification. And if the systems are robust enough, we could have more frequent reviews. Right now, we stand by an annual review. Here's the problem with an annual review: if you do an annual review, you could be out of best practice the next day, and that's 364 days when you're not at best practice, or not up to scratch with law that's changed, and that's suboptimal. But at the same time, it's also in some ways practical, because some of the solutions to these issues are capital-intensive; they're investment decisions; they take time to implement. So having some lag in the process is reasonable for achieving compliance. I think that's the balance, and your question is outstanding.

Very quickly, how about the whole half-day seminar's worth of content on corporate bots and automated LLCs? And for extra credit, if you can get to it: hardly a podcast goes by where David doesn't raise the specter of the robot lawyer and the law practice of the future, right? That's largely algorithmic. How on earth are we going to start auditing algorithmic organizations, service providers, and professionals?

Well, I think we have the system to require the disclosure, transparency, and documentation of all of these systems. There's nothing unique about these particular algorithmic entities that couldn't meet the demands of what we are requiring for compliance. I think it applies in the same fashion.
However, coming from the very simple perspective of ForHumanity and ensuring that humans are best served: I get the idea of smart contracts. I get the idea of automated decisions and speed-to-market tools. All of that makes sense to me. Here's what I'm worried about: there's also an element of control. I'm going to use a very sci-fi sort of example here, but if you don't have ultimate control, what we could see is a poorly optimized machine without embedded values and risk controls, essentially operating against societal views, ethics, and so on, and becoming problematic to the community around it. When we talk about ethical systems, we talk about impacts on humans, but also on society at large: systemic risk, environmental impact, and so on. What I think is a better system, and I wrote this years ago, and it could change over time, is that there should always be a human beneficial owner whom we could jail, or whom we could separate from the entity, to slow things down and maximize that control. There's a whole world out there that believes in something called the stop-button problem: that there are certain machines, certain algorithms, that we can build but cannot stop. So I insist upon some sort of framework. We can make jokes like, well, won't you just unplug it? Yes, we unplug it, until the machine, like in Transcendence, that Johnny Depp movie, goes out and finds a different power source. And yes, it's very science fiction, but let's plan for it today, to ensure that we have robust human control over the tools we have, and that we aren't allowing a machine to have its own raison d'être, if you would. So, from a group called ForHumanity, you could see why we would have this sort of perspective. Do you think that answered the question?

I think that answered the question in a way that raises more questions, which is the way we like it.
And so, any closing reflections? The last word goes to our esteemed co-host, David Horrigan.

Thank you. And Ryan, thank you for sharing these insights today. It's been truly fascinating. People, especially in the legal work that I do, are very interested in how artificial intelligence is going to apply. And the interesting thing to me is that it really has been applied for years; people just haven't thought about it like they are now. So thank you for being here with us.

It's my pleasure. It's an honor to speak with each of you, and great questions; it was great fun.

Thank you. Can you take us out in your customary manner, please?

Thanks again, Ryan, and thank you all for coming today. We'll see you next month at the next Idea Flow.