I can see that we have our next two guests here. I believe this is Dr. Santos. Please correct me if I've mispronounced your name. And Dr. Cohn. And I don't know who would go first, but welcome. We've just had an hour with Brian going over two bills, and we'd love to hear from you. Eugene, you wanna go first? I cede my time to the gentleman from Dartmouth. Thank you, John. So I just wanted to come and say, and I actually was listening earlier. I was on YouTube listening to the discussion. Yeah, these are all the questions. I just wanna refer to one about, where are the technology experts? Do they have the answers yet? And I'm gonna be blunt. We don't have quite the questions. We don't have quite the answers, but there are a lot of focus areas that really have to be addressed. So the question is like, hey, what is this AI actually doing? Those of you who did get a chance to watch Coded Bias or any of the other shows keep coming up with, why did they make this decision? So I'm gonna give you sort of the point of view that I think is important, at least to me as an AI creator: if I'm gonna be putting out a technology which is AI, what transparency has to be behind it? What are the intentions of that AI tool? One of the biggest things that people don't talk about as much when they put out a particular AI system is, what are the goals? In the end, it's a computer algorithm. You could say that there is a mathematical function that the computer algorithm is trying to compute and trying to optimize on. So, a simple example is that I have some sort of mathematical function that, in the end, when it has to, let's just go ahead and talk about facial recognition, when it has to classify something, it's trying to optimize this function. As a creator, when I dig into those functions, let's just even talk about deep learning. Oh my gosh, what the heck is this function? I can't even understand it.
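The "mathematical function" being optimized, as described in this testimony, can be sketched in a deliberately tiny example: a one-feature classifier whose training loop does nothing but minimize a loss function. This is an illustration only; the data points, learning rate, and variable names are all invented, and a real deep-learning model optimizes a function of millions of parameters rather than two.

```python
import math

# Toy version of the function an AI classifier optimizes.
# One feature, four made-up training points: (feature, label).
data = [(0.5, 0), (1.0, 0), (2.0, 1), (3.0, 1)]

def predict(w, b, x):
    """Sigmoid of a linear score: the model's probability that the label is 1."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def loss(w, b):
    """Cross-entropy: the mathematical function training tries to minimize."""
    return -sum(y * math.log(predict(w, b, x)) +
                (1 - y) * math.log(1.0 - predict(w, b, x))
                for x, y in data) / len(data)

w, b, lr = 0.0, 0.0, 0.5
initial = loss(w, b)
for _ in range(200):                      # plain gradient descent
    gw = sum((predict(w, b, x) - y) * x for x, y in data) / len(data)
    gb = sum((predict(w, b, x) - y) for x, y in data) / len(data)
    w, b = w - lr * gw, b - lr * gb
final = loss(w, b)                        # lower than `initial` after training
```

Even at this toy scale, the trained numbers `w` and `b` say nothing by themselves about why a particular input was classified one way, which is the transparency problem the testimony describes; it is far worse when the function has millions of parameters.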
And then our job as a creator nowadays is, can I decompose it in some way? Can I present it in some way so that we humans can understand it? So, for me, having served on the task force, one of the biggest benefits, I think, of seating a commission would be that, hey, we have this diversity of viewpoints from other members too who are not necessarily creators. They would potentially be the consumers, the people who would also help evaluate impact across society, and all of that helps me as a creator. So I'll just focus back on me as a creator: focus on where I should go, what information should I provide? So, like Representative Cina had said, for the second bill that was just presented, these are the elements: you start off with those questions and then you could dig in deeper on that. So that's my point of view. One term I use a lot, and it's my own work, is intentions. If I know the computational intentions, if I know the AI intentions, that at least gives us a context for explaining to people what this AI algorithm is doing. So let me stop there and I'll pass it to John now. Thank you. Great, I really appreciate that, Gene. So just by way of introduction, I'm John Cohn. I'm an IBM Fellow at the MIT-IBM AI Lab. I've been at IBM for, unbelievably, 40 years. I see some former IBM colleagues here. But I've had a very interesting career, mostly in the creation of computers and chips, and I'm sort of a newcomer, a digital newcomer, to running this AI lab. And the other background that I have is I'm a strong STEM advocate and have worked for many years promoting science, the love of science and technology, in the state, and served for many years in the early 2000s on what was then the Vermont Department of Education's Science Advisory Council.
I prepared a few comments, mostly around Representative Cina's bills, and I just wanna say beforehand how much I appreciate everyone's deep consideration of this topic and of those bills. Our experience on the AI task force was very, very interesting. At the time, that task force was the first in the nation, and we really looked at it through the lens of how do we maximize the benefits? This is an amazing technology and is already providing a lot of value. But I came away with a much fuller understanding of how we have to minimize the negative impacts. And I was very struck, in the course of the year we were very struck, I think not just me, by how much concern there was. And if anything, I became much more sensitized to that. But I also realized that while a lot of the sensitivity and concern was well-founded, a lot of it was also based on a lack of understanding and fear. I came away really very excited about the recommendations that we had in the task force, and that's why I'm very excited about and very supportive of H.410, the idea of trying to take that piece of work forward. Like I said, I think Vermont has the advantage of the brave little state, that we're small enough to actually get things done. And we have kind of a lead in that, to sort of lead by example. And I really am happy about that. What I'd like to say is there are a couple of specific things in H.410, coming from the AI task force, that I really, really think are important. One thing, kind of building off of what Gene said, is the need for standards and guidelines in the ethical use of AI. Right now it's kind of the wild, wild West. I mean, the problem with things like ethical standards is that there are some things that are absolute in ethics and some things that are kind of relative. And we really need to have some guidelines as a yardstick. And the US has been a bit behind the EU and Canada in coming up with a set of ethical standards.
So in our guidelines we adapted and adopted, with Representative Cina's leadership, the EU guidelines. And I think we really need to have an evolving but standard set that we can kind of use as a yardstick. So I think having standards is important, but standards without some sort of oversight wouldn't really get us anywhere. So one of the main recommendations of the task force, and one of the main recommendations of H.410, is the need for an ongoing oversight committee to actually do the business of monitoring and adjusting. I think it's so key that we do this on an ongoing basis, because in my 40 years in technology I've never seen anything grow this fast. And this is not hype. AI is already in everybody's cell phone, it's everywhere, and anything that tries to statically capture what it should and shouldn't do, what it can and can't do, is gonna be out of date the moment you do it. So I think it's absolutely essential that the state of Vermont has some sort of an oversight group, and that's why I really support the commission, the ongoing commission, and I'll support that in any way I can. In terms of the other recommendations that are in H.410 and the AI task force report, the three things that really stuck out to me, beyond having ethical guidelines and a body to observe them, monitor them, make recommendations, and adapt as the technology adapts, the first is education. Since I look like a crazy mad scientist, I play that on TV. I very, very much believe that we have an opportunity and kind of an obligation to educate the public on what AI can do, the potential, because I think it's huge, what it should do, what it can't do, what it shouldn't do. And I personally believe in the best vector for that, because, as I mentioned before, I was very struck by the lack of nuance in the conversations that we had with educated people about what AI might do.
I believe that a major vector in educating the public is through students, and several of us on the committee, notably Professor Donna Rizzo, who is a professor at UVM. I'm also a professor at UVM, I should have said that. We were thinking, well, we really need to try to get students exposure to AI technology, AI ethics, AI privacy, these kinds of things. You can start having these kinds of conversations as early as middle school, maybe even elementary school. We went out to look around and found that not only is AI not being taught in any consistent way, or really in any way at all, but computer science in general, or technology, is very, very spotty in the state. So we really support that, and a small group of us have been actually trying to act on it. We've actually got an NSF grant in, on something called Computer Science for All, that we're reapplying for. We have several other programmatic ways, including things like FIRST Robotics, which I work in, to try to introduce students to the basics of computer science, and thereby introduce them to AI, and then be able to have not only technology discussions and career discussions, but ethics discussions. So I think that trying to tell people about the promise but also the danger of unintended consequences is a very, very important thing that the state could do. And I believe very much, again, this is where our brave little state, a small state, can actually make things happen. So education is my main goal in this, because I think an educated population makes the best decisions, whether they're building AI or using it. The next one would be around incentives, an incentive structure. I believe that once we've got a good handle on what ethical AI is, we wanna figure out how do we maximize the benefit to the state? And that also includes economic benefit. I'll talk about the future of work in a second, but AI is changing all sorts of types of work, and it builds great opportunities.
And if we could attract more of what we call good, clean, ethical AI jobs to the state, they're low resource, they're environmentally clean, and this is an intellectual place. And as a longtime Prius-driving, Birkenstock-wearing hippie from Vermont, you want people like that moving here. This is a great place to do that kind of work. So I believe that we can come up with incentives that help high-tech companies in general and AI companies specifically, and they don't necessarily have to be buckets of money; they can be access to accelerated computing, et cetera, through the University of Vermont. This is something that the task force came up with, and I think we should look for creative ways that the state can afford to help incent those kinds of jobs. Finally, I mentioned the future of work briefly. My group at MIT is doing a lot of work using AI to look at the impact of AI on the future workforce. And we have to really think about that, because there's a lot of worry that things like AI or robotics are gonna displace human labor, and some of that is true and some of it is not true, but we're actually trying to look at it analytically. And one thing that we've noticed is that on the low end of, I mean, the low end of kind of wages, those technologies, human labor, manual labor, et cetera, are not that susceptible to AI specifically. On the high end, there are certainly gonna be lots of high-end jobs like mine that'll be formed, and there's great growth opportunity. The big concern is that in the middle-wage jobs, which is the bulk of them, service jobs, et cetera, there's going to be job displacement, job migration. And I believe that we need to start factoring that into our career mentoring of students, to figure out how they choose careers, how they tool up for those careers. Once things like service industries, restaurant jobs, retail, et cetera, are getting highly automated, how do students prepare for that?
And for people of a certain age like me, if they have to retool in mid-career, like I became an AI person in my 60s, how do you adjust? How do we retrain people? So I think all of these things, acting on the recommendations in H.410, would be good. If I have time, I would really like to talk about H.263. Is it okay? Can I keep talking for a few minutes? Well, you know what? Our chairman is gonna be rejoining us in just a moment, and I know that it would be helpful for him to hear your testimony on the bill. I see that we have a few committee members who have questions. Would it be okay if you- Okay, if I can, I would like to talk about H.263 after. But sure. Of course, absolutely. Sure. Representative Chase. Thank you. I have the initial formation of a thought, and I would appreciate both of your insights on it. Something you said was the ethical use of AI, and I wanna highlight the difference between that and the use of ethical AI. Like the idea of developing a quote-unquote evil AI as a test bed, to create like a war-game sort of scenario where we could find vulnerabilities and so forth. I'm not sure whether that would be good or bad, and I would appreciate both of your insights into how that could fit into this discussion, whether that's something that we want to- Interesting. Gene, what do you- Okay. I got a thought, but- Okay, yeah. My reaction to that is that this is something that has to be addressed here, you know, whether it's by the commission or whether it's, you know, the second bill on, you know, where the algorithm is going. The whole point of that is that there are a lot of positives, you know, you're developing an AI to search for vulnerabilities. And if it's developed under, you know, the right guidelines, in terms of, you know, hey, are there gonna be any side effects that, you know, this AI might cause? What's our understanding? What are the dangers of accidentally releasing this AI if you don't want it to be released?
I mean, it's a classic. You know, one of the historical examples, if you remember, or read about or heard about, is the Morris worm. I think I got that right, John, it was Morris, right? So this was one of the earliest computer worms that I remember. At the time, I was a graduate student back in the 80s. It had gotten released and actually basically took down our early internet around here. And, you know, a lot was learned from that. I can't say, you know, even to this day, whether, you know, it was deliberately malicious or it was an academic exercise, but it was important. I mean, these things that I just said, if you use a war-gaming situation, we've now defined and scoped out the problem. But I think it's a very good distinction that you're putting to us, you know, ethical AI versus the ethical use of AI. And in each of these, this is where the commission needs to come in and help, you know, folks make decisions on that and think about where it goes. John? I really like the idea Representative Chase came up with. Or are you Representative Chase? Yes. Oh, good. Everybody else here is a rep. I think that this whole idea of using AI to explore AI is actually catching on. Have you heard of what are called generative adversarial networks? This is a GAN. So, Gene, you know, at IBM we have a project with MIT where we're using adversarial networks, that's a terrible name for it, but it's basically using AI to explore AI. And so the example that Gene gives on the worm is that we actually have a malware AI that tries to formulate new, you know, look for exposures, and then another AI that tries to block them. And it's an interesting technology, because that's what's happening in the malware world right now, is that, you know, the bad guys have AI and you just have to have better AI. And unfortunately, you know, it accelerates, but AI is a great tool for exploring those kinds of vulnerabilities.
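The adversarial attacker-versus-defender dynamic described in this exchange can be sketched at toy scale. This is not a real GAN or real malware analysis; the payload strings, signature set, and function names are all invented, purely to show the escalation loop in which each side adapts to the other.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def defender_blocks(payload, signatures):
    """A deliberately naive defender: exact-match signature checking."""
    return payload in signatures

def attacker_mutate(payload):
    """Attacker changes one character, guaranteeing a new variant."""
    i = random.randrange(len(payload))
    alternatives = [c for c in "abcdef" if c != payload[i]]
    return payload[:i] + random.choice(alternatives) + payload[i + 1:]

signatures = {"badcode"}   # what the defender knows at the start
payload = "badcode"
evasions = 0
for _ in range(10):        # ten rounds of the arms race
    if defender_blocks(payload, signatures):
        payload = attacker_mutate(payload)   # attacker adapts
    else:
        evasions += 1                        # attack got through...
        signatures.add(payload)              # ...and the defender learns it
```

Each side's move forces the other to adapt, which is the accelerating dynamic described above; real GAN training does the same thing with two neural networks and gradient updates instead of string mutations.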
I really like the idea of exploring, you know, kind of ethical vulnerabilities. And I think it would be interesting, you know, one thing that, as UVM faculty, and Gene, I don't know, you're probably the same at Dartmouth, we would love to be able to work on, you know, that's a real meaty problem. And I think it would be very interesting to see if we could seed some research in that. 'Cause I'm sure that somebody somewhere is doing ethical adversarial exploration, but I think that's a really interesting idea. Okay, great. Representative Sims. Yeah, thank you so much for this testimony, really interesting. I particularly appreciated the, you know, calling out of the importance of a permanent task force to provide this oversight. You know, I think we all are experiencing how technology is innovating more and more rapidly, and I appreciate you calling attention to the fact that, you know, the minute we set rules and guidelines in place, they may be obsolete the next week, and that it's really continuous monitoring and evaluation that's so important. You know, I think we heard some really helpful kind of context and framing from Representative Cina about some examples of what AI or algorithmic decision-making looks like right now. I'd be curious whether either of you could expand on that a little bit more. Are there stories that you find particularly, you know, resonant for helping those of us who are less familiar with this world understand the opportunities and challenges that this kind of technology creates? And, you know, what are the things that keep you up at night, or the things that you're most concerned about, just to kind of paint the story and the picture of what this looks like for those of us who are not deep in the weeds but are certainly experiencing the impact of this in our daily lives, though we may not even realize it? I guess I can go first, John. Sure. All right.
So, you know, when we looked at the categories for the second bill, you know, in terms of, like, supporting automated decision systems, or I actually can't remember the exact terms, that's one of the areas that I work in. And so I'll say two things. I'm going to go ahead and start off with sort of the concerns. The one thing that keeps me up at night is that, you know, we're creating AI that's supporting, right? And it's coming in and saying, hey, whether it's helping recognize that it looks like there's a bad situation here, or it's recognizing that there's an opportunity here, or this is how you should, you know, make your decision. I mean, you know, the concern there is, what's the basis of it, again? You know, like I said before, where's this answer coming from? Why is it selecting this answer when there are other alternative answers? And, you know, the concern is that, if I don't give enough context, you know, I am influencing, or I shouldn't say I am, my AI is influencing somebody to make a decision. And, you know, it could be compounding bias on top of bias. Am I reporting enough, or the AI, I'm sorry, I keep saying I and AI. So does the AI have a blind spot itself? Is the AI making assumptions in choosing that decision for you? So that's very concerning. So what does it take? You know, AI needs to work with the human being, and, you know, it's hard enough that, you know, when I have to talk to other people, I'm sorry, I'm sounding very geeky like that, when I have to talk to other people, it's, you know, am I understanding them? Am I making the right decision? And of course, you know, that's a classic challenge of humanity, but now throw in the AI. Now, on the positive side, though, with the AI there are all these opportunities.
You know, we are overwhelmed, complex systems, complex world, complex data, and AI gives us a chance to help us better organize it. And so, you know, those are the general things that, you know, keep me up at night, both positively and negatively, you know, both excitement as well as concern. Yeah, I mean, John, I'll hand it to you now. I'm not programmed to respond to this question. No, I'm fuzzy. I think that's really good, Gene. I think that's where I was gonna go too. I guess, you know, the sort of human-computer or human-AI interaction is, you know, what really keeps, I mean, there are many existential kinds of things when you think out in time, but, you know, clearly on the ethical decisions, I tend to be a technical optimist and believe we can solve those. There are two things that practically concern me the most. One is that in our rush to do things, you know, we'll miss unintended consequences. You know, if you looked at, like, The Social Dilemma, did anybody see that movie? Yeah, that while a very good intention, or, you know, an understandable intention, can lead to an unintended consequence. And I think the general problem there is that we're relegating, I worry that we might skip a step when we relegate decision-making, absolute decision-making and authority, to an AI prematurely. In my long experience with automation in other domains, there's kind of a growth curve where you build up mutual trust between you and the AI. And it is actually two-way, in sort of a weird way: the AI has to trust you, or has to be able to understand the data that you're giving it. And I think that whenever we skip a step and start to allow an AI to make decisions on our behalf, whether it's explicitly or tacitly, where we're going to, you know, based on an AI, decide that we're going to arrest someone or not give them a loan.
I think we really have to be slow and cautious, in one sense, to make sure that there's always a human in the loop, that there's always, you know, some expertise that would question the AI, and would be able to ask, through the explainability, et cetera, that Gene mentioned, you know, why is this decision being made? And ultimately, from an accountability standpoint, the human would make the decision based on the guidance and would be accountable for the results. The moment we start to take that shortcut and let the AI start deciding, now, clearly there are times in our lives where we have to do that. Think about flight for a moment. Let's just talk broadly, not so much about AI only, but automation. Air flight is so much safer, it's like two orders of magnitude safer, I believe, now than it was a few decades ago because of automation, but it had to happen slowly. You know, you had to build this confidence in the automation, you had to build the confidence between the pilot and the automation. And as I understand it now, you know, on a transatlantic flight there's only, you know, a 45-second interval where the pilot legally has to be flying, where the automation has to shut off. That's pretty amazing, but we came to that slowly. We're starting to go through the same thing in automatic driving. And sometimes you do have to relegate a decision to AI because a human can't react that fast. But I think we have to do this in gradual ways, whether it's in, you know, making loans, making arrests, making parole decisions, you know, we have to proceed in a sort of human-AI partnership until there's enough mutual confidence that we can make the decision. So I worry that we take shortcuts there.
My other worry is actually a little bit on the opposite side, as somebody who does AI for a living in a corporation: I worry that an un-nuanced understanding of what AI can and can't do, and shouldn't do, would lead to well-intentioned regulation that would actually prevent the good that could happen. So when I think about the second bill, I'm a big believer in oversight and regulation, but I think it has to be very precise. I worry that in kind of an overreaction we would say, you know, no, we can't use any of this kind of technology, and that we block ourselves out from the potential good and exploration. So I think we have to be very careful and precise about the regulations that we provide, because otherwise I think we won't really be able to see the true benefit. So that's kind of more of a business concern. The first one is kind of a life-and-limb and humanity concern. The other is, from a business standpoint, we have to strike a balance so that we don't hobble innovation so much that we don't get the benefit. And that keeps me up in a different kind of way, yeah. Framing this balance, this tension, you know, it can be the best and the worst of ourselves, and it's our job to sort of shepherd and steward that really carefully, so that my father-in-law can drive more safely at night because he's supported by self-driving, but, you know, that we're not baking in the bias that we all know that we have, especially within a field that's predominantly dominated by white males, and in our data sets, you know, so that folks have more opportunity for jobs, not less, because, you know, we can mitigate for the bias so that, depending on your name, you aren't excluded from a job application. So thanks for framing again that this is an incredible opportunity, but we have to proceed really carefully so that we're not exponentially amplifying the worst tendencies within all of us, thanks. Hey, we have our chair back.
Welcome back, Mr. Chair. I know you've been on a fast ride. I think Lucy, Representative Rogers, had a question. Thanks. Yeah, thank you for your presentations. With this being relatively new to me, I think one of the challenges is just trying to pick out, you know, everything seems important, and just trying to pick out what's one direction to go in first. So I had a thought that I've been kind of formulating since our earlier conversation this morning, and just wanted your take on it. Something had come up earlier about accountability, and it really just started me thinking, you know, who is legally responsible when things go wrong with AI, whether it's, you know, something where a human ultimately has decision-making or something where the technology is making the decision. And I guess I'm just wondering if it seemed like that could be a helpful place to start, because I guess the fear I have is that we get into a situation where nobody, no human, is taking responsibility for what's happening. And I'll say one more thing, and then I want to hear your thoughts on it. I shared earlier a story about a friend of mine who had a medical school interview that was conducted between her and a computer. And I know there are current examples of court cases with institutes of higher education where the institute is being held responsible for bias in its admissions. And so it just made me think, you know, this isn't the case here, but in a more extreme example, if the computer was actually making the decisions about who was admitted to this medical school, would the medical school be responsible if there were instances of bias, or the maker of the software? And so I guess I'm just curious if this seems like it's on the mark or off the mark as far as a direction to be heading when we think about what we can do legally, and do you have a perspective on who should be the one responsible to make sure this all works the way it should?
Yeah, I guess I'll start off with the caveat: I'm not a legal expert, so. But having said that, though, I mean, I think what you're saying is very much on the mark. One of the things that we studied on the task force, and I was fortunate enough to help lead this part, was, for example, AI and medicine. So in this case, you know, at this point there are a number of AI-driven tools, and actually AI decision-making tools, used in medicine, such as, you know, diagnosing retinopathy, which is, you know, a decay of the nerves of the eye, and a lot of systems, for example, even predicting, you know, what is the trend in your blood glucose levels? And for each of these, though, like, let's take one where, you know, it's trying to predict whether your tumor is cancerous or not. Well, AI is not gonna be perfect. There are gonna be cases where it's gonna have a false negative or a false positive, and who's responsible for that? Now, this is important to address, but the one thing that gives me, how should I say, some more comfort, maybe that's the wrong word, but, you know, some more sense that it's potentially addressable, is that, well, we already have other frameworks in place, such as the FDA, because the FDA has to approve these things. And so I think it's gonna be very important, in terms of what are the functioning frameworks that are out there, FDA, FAA, all those things, to see how much they can expand and how much they can actually cover. Now, when you get to a question like, you know, bias in admissions, like, you know, your friend having just been interrogated by an AI and the AI making a decision, can that fall anywhere? That's gonna be, you know, I think an important thing that the commission, or any sort of commission like this, helps address going forward.
I think that this is one of the first things that, you know, we go forward on, but also, like, you know, as John also said earlier, from our original task force, you know, starting with the ethics and at least getting a bounds on that first then gives us a shot at finding the right path. I think this is very good, Gene. I think a couple of things, and I think this reads on H.263, is that I think we need to develop standards, and those standards have to be evolving. And I do not think that there will ever be a sweeping, you know, a sweeping statement where you could say, you should never do X. I think that the kind of regulation and decision-making needs to be handled, if not on a case-by-case basis, within some sort of a regulatory framework. So for example, I think we can make a clean distinction for when a decision has to be relegated to a machine. And I would say that those cases are pretty clear. Those are things where a human is not able to make the decision fast enough, you know, what I would say for, like, autonomous driving, autonomous trucks, autonomous trains, and there are now autonomous airplanes, that sometimes you have to relegate a decision to the computer because a human can't be consulted and a human might not be able to respond quickly enough. But I think for the most part, for most decisions that are not, you know, life-and-limb time-critical, I think you can make it always the case, and I think you have to think about these on a case-by-case basis, that a human needs to take the final decision, whether it's a medical diagnosis, whether it's a criminal arrest warrant or a jail commutation or anything like that. I think that AI should be working in partnership with humans, and because it is transparent, it has to be transparent, it has to tell you what data it's using, what conclusion it's drawing, what data it's storing, because it's got to be explainable.
So if it makes a decision, it can defend the decision and sort of say, here's the logical basis for that. And because we would have it tested for bias, in whatever appropriate context that means, and we can talk about that technically, the human, whether it's a doctor or a lawyer or a hiring manager, would know that she or he was ultimately responsible for the decision, and that way would not lean back and just accept it, because if they knew that the ethical implications, and perhaps the damages implications, would be on them, they will lean in and pay attention and ask the right questions and ask the AI to explain more. So I think that there are guidelines, I don't think that there are sweeping guidelines, but I think that we should make sure that we understand when a computer should be able to make a decision on its own. And those should be very limited cases, where life and limb make it necessary. Everything else, I believe, at least for the foreseeable future, would have a human that would have to sign off for accountability. And then I kind of go back to the idea that having a standards body with narrow, targeted rules is the only way to do this, because trying to figure this out post hoc is very, very difficult, right? You end up with finger-pointing. But I actually see that there is kind of a bright line, where most decisions don't need to be made so quickly that a human can't be assigned responsibility up front. Excellent. Okay, I have a question about limiting, just in terms of policymakers. So for both of you, what is the number one way we could limit or manage AI? And I'm not asking what is the best way, I'm asking what is the most impactful way to limit or control AI? Even, if I may, Representative- For policymakers. Yeah, if I may, even that language is sometimes, I think we do want to limit where appropriate, but we also want to enable.
So I worry a little bit sometimes that we are already putting AI in the hot seat, and I hope it's okay for me to say that. I believe in what's in H.263 about actually doing an inventory and understanding all the places where we have software that's potentially helping us with decisions or, in some cases, making decisions for us. The first thing I would say is important is having a crisp understanding of what technology is in use and what technology might be put into use. Understanding all the points where this might happen would be the first thing I would do, and I like that in H.263. And then, starting with those test cases, going through and trying to come up with a rubric for accountability and what oversight is necessary in each of those, which I think is also in H.263, would be the second step. So I think those are two of the right first moves. Great. Excellent. Thank you. Mike, and then I see Representative Cina has his hand up as well. Okay, good morning. Thank you, Dr. Cohn and Dr. Santos and Brian for your presentations. It's very, very far-reaching. And as I listen to this, it's such a huge apple that we're trying to take a bite out of. There are, I guess, four areas that I've been thinking about that are, I guess, jurisdictional. One is state versus federal. We're a small state, and anything we do is going to have limited reach in terms of developing standards or anything else. We're not going to be able to reach far beyond our state boundaries, if that. So it seems to me that federal action is also going to be very important here. So one question would be, what's happening on the federal level along these lines? The second is private versus government.
I mean, we have oversight over the IT systems of the state, but we don't have any oversight of what goes on privately in terms of enacting standards or telling companies what they can and can't do. Well, we might have some, but it's kind of limited and has to address the public good. A lot of these questions are so deep that I wonder, and this is the third area, whether they might be better addressed by academia than by a commission. And the fourth is the distinction between operational artificial intelligence and judgmental artificial intelligence. By that, it sort of goes to what Dr. Cohn was talking about: we want AI to make a decision if we're driving and the car has to brake because of an obstacle, but we also want human involvement in decisions that affect people's lives, like the example that Lucy, Representative Rogers, brought up about a computer deciding whether somebody gets into medical school or not. So these are all very complex, and I think we're taking a little bite out of a huge apple; comment on that or not. Okay. Representative Cina. Yeah, I'd actually like to comment. I know my time is up, but I'm still kind of here as a witness. I'd like to comment on that, and then I have a question that I'm hoping our other guests could speak to, to help elaborate something. Speaking to what Representative Yantachka was asking about, the debate and the discussion being left to academia rather than a commission: that's something I've heard through our testimony, that this does happen in academia, this does happen within the field. One of the issues is that the public in general is not being engaged enough, and the difference between the academic work and the government work is that the government is directly accountable to all people.
And so by creating a commission and empowering that commission with that work, we are increasing the possibility for the public to be involved in shaping the policy. It's democratizing it more and spreading power out to the people, beyond academia and business where it's been concentrated. And so that's why I would argue that the commission is necessary. I see John Cohn's hand, so I'll let you speak to that. And then, if it's okay, Representative Sibilia, I'd like to ask a question for our other guests to speak to, because it's something I would like to talk about but I don't have the expertise. So I want to make sure that we also give both of our guests a moment to comment on the bill. I know they have been commenting kind of along the way, but if there are any additional comments. So I guess at this point, responding to Representative Cina and then commenting would work. Okay, just very quickly, to Representative Yantachka and Representative Cina, on this idea of how universities might fit in. Again, having a foot in both worlds, I feel there may be some gray areas, but I think it's a pretty bright line. I loved the idea early on, when we talked about doing research on how we could use AI to look for bias and unintended consequences. That's an academic topic. But I believe that where public safety and the public good are at stake, and I say this as a career academic, you don't want to leave that to universities. There are things, for example autonomous driving cars or other forms of automation, that are going to be affecting, let's say, Vermont, where public safety is at issue, and that's clearly a matter of the public good. That's why I like H.263's idea of having a targeted understanding of what technology is being used in the state and by the state, and having guidelines on those systems that are deemed to have a public-good aspect.
So I think that there's a pretty bright line on things that we might explore with academia, and I really encourage, I love, the idea of having a partnership between the state and the institutions of higher learning, including Dartmouth, UVM, Middlebury, all of those. But I do think that leaving this only to universities, which don't have a point of enforcement, doesn't serve the public that well, if I understood the idea in H.263. Drawing the universities in is good not only because it allows us to explore some things, but, in the same way as creating jobs, it's a great way of building the strength of our local universities on the topic. Speaking just for UVM, UVM is already distinguishing itself in some areas, like complex systems and emergent behavior, and I think AI ethics might be a great place for the state of Vermont to start to distinguish itself. Yeah, I'd like to add to that and amplify it. Academia is very good at exploring questions, exploring theories; ultimately, I'm in academia because I want to explore way-out concepts and things that may be really crazy. But it's not enough, because, for lack of a better phrase, once the rubber hits the road, it's gone out into the public. It's something that's going to be used, something that's going to affect lives. That's where the role of the government comes in; as Representative Cina said, this is affecting people directly. That's, I shouldn't say a different point of view, but a broader point of view than academia has typically considered. So it's a partnership; we need to draw all those together. And so I very much support both bills, and what the second bill addresses, what systems are out there, is a natural part of that, but going back to the commission itself.
One aspect of the commission that excites me is that it serves as a bridge between theoretical exploration and practice. Let me bring this up: today, one of the drivers of the exponential growth of AI is that it's so easy to deploy AI. I don't know how many undergraduates I've seen, from freshmen to even high school students, that are building AI systems and putting them out. Now the questions are: are they aware of what they're doing? Is the public aware of what they're doing? Are people aware of what the impacts are going to be? And so the commission serves as a mechanism to enhance that awareness further. So yeah, that's what I wanted to add. All right, excellent. Is there anything else that you would like to tell us about these bills today, or any other questions from the committee or Mr. Chair? Are there any questions? Representative Sibilia, I wanted to ask our guests, because while you have them here it's a really important opportunity: H.263 talks about independent testing for bias in automated decision systems. If our guests are willing, could they just speak a little bit about that? How do you do that? Because if we're asking whether it's done, and we're talking about giving the Agency of Digital Services the power to set some standards around that, it might be good for them to speak a little bit about how that's done. I see some hands, so it looks like they have some thoughts. I have a hard stop here in a couple of minutes. So I'm glad you asked that, Representative Cina. I think that bias testing depends on what the bias might be, because of the application, but there are great open-source and closed-source efforts that try to look at bias. Representative Sims, or no, maybe it was Representative Rogers, was talking about bias in terms of recognizing gender bias, et cetera.
My company, IBM, has put out into the public something called the AI Fairness 360 toolkit. These are tools that are open for testing for bias; they're basically curated datasets that allow you to check for bias in certain scenarios. I believe that there are technical solutions to this. They are going to be application-specific, in terms of, say, gender bias or racial bias, but I believe there are tools out there that would allow you to test specific decision-making based on demographics. And once we have the survey that is recommended in H.263, I think we could go figure out whether there are bias test sets for all of the applications, and then figure out how to apply them. I suspect that there would be areas where there is no such test case, and maybe that's a place where we would work with academia. Yeah, let me add on top of that. There are many efforts going on out there. You'll hear terms, if you want to go search them out, like explainable AI, or XAI, which is one of the acronyms they use. The one thing, though, to amplify what John was saying, is that it's very application-specific, because it depends on the question you're asking: what's the bias you should be looking for? I think something even more troublesome is that bias, I would say, is a special case of not having enough information, not having enough knowledge. And then how do we know that we don't have that information? That's on the side of academia: what can we present? If we're going to develop this, can we identify this? And that itself is an open question. But again, back to the application: when you're talking about the transparency of why a decision came out and what it is driven by, then if you can, at a minimum, do an analysis on the data that you're using to drive that decision, that analysis could be a gender analysis.
It could be a decision-bias analysis or a demographic analysis. At least that gives a clearer context: okay, I'm basing my decision on this, which is what we as human beings do, right? I'm drawing my decision based on the evidence I have, but now I have to self-assess: what is the nature of this evidence? Great, okay. Well, thank you very much. We appreciate all of the time that you spent here with us today. This is an interesting topic, and I think we'll be taking more time up on it. Mr. Chair, anything else that you'd like to add? Just my apologies to Gene and John. I had some technological challenges and actually sprinted, I'm probably a quarter mile from Gene's office right now, and I had to come to my office. So I'll be interested to watch the first 40 minutes of this discussion on YouTube tonight. My apologies, but it's been fascinating what I've heard, and I appreciate both of your direction and also Representative Cina for his leadership on this. And we are definitely going to dig deeper into this in the coming weeks. So I want to thank everybody. Thank you all, and thanks for taking this up. Thank you. Good to see you, John. For committee members, let's take a five-minute break, since we've had two hours straight here, and let's come back at what time? 11:07, and we're going to have some discussion about H.360, as I forewarned yesterday.
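[Editor's note: the dataset-level bias analysis the witnesses describe, checking whether favorable decisions are distributed evenly across demographic groups, can be sketched in a few lines. The helper functions and toy data below are illustrative only; they are not the AI Fairness 360 API, though these two metrics are among those such toolkits report.]

```python
# Illustrative sketch only: NOT the AI Fairness 360 API. Computes two common
# group-fairness metrics on made-up decision data for two demographic groups.

def selection_rate(decisions, group, value):
    """Fraction of people in the given group receiving a favorable (1) decision."""
    outcomes = [d for d, g in zip(decisions, group) if g == value]
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(decisions, group):
    """Favorable-outcome rate of group 1 minus that of group 0 (0.0 = parity)."""
    return selection_rate(decisions, group, 1) - selection_rate(decisions, group, 0)

def disparate_impact(decisions, group):
    """Ratio of favorable-outcome rates; the 'four-fifths rule' flags values < 0.8."""
    return selection_rate(decisions, group, 1) / selection_rate(decisions, group, 0)

# Hypothetical data: 1 = favorable decision; group is a binary demographic label.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group     = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

spd = statistical_parity_difference(decisions, group)  # negative: group 1 favored less often
di = disparate_impact(decisions, group)                # below 0.8: flagged for review
```

As both witnesses note, the hard part is not computing such metrics but deciding, per application, which groups and which notion of fairness to test.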