Great. Thank you, Matthew. And good afternoon, everyone. I'm Tim Briglin, the chair of the House Energy and Technology Committee. It is Friday, April 23, and this is our one o'clock hearing. We've got a few things we're covering this afternoon. In the first portion of our testimony, we have two representatives from organizations that represent different folks in state government around the country. First we have Pam Greenberg, who is here from NCSL. What we're going to be talking about initially in this hearing is some of the artificial intelligence and automated decision system legislation going on in other parts of the country, as we move forward on a couple of different AI and automated decision system pieces of legislation in our committee. So first I want to introduce Pam Greenberg from NCSL. Pam, thank you for being here and giving us some visibility on what might be going on in other state legislatures. Welcome.

Thank you for having me. I'll try and make sure I can share. Can I share my screen here? I'm getting a message here.

If you have co-hosting ability, you can share your screen. I think you can do it now, Pam.

One moment here. Yep, and we can see it now. Great.

Well, again, thank you for having me here today, Chairman Briglin and members of the committee. My name is Pam Greenberg, and I work in NCSL's Center for Legislative Strengthening in Denver, covering a wide variety of issues mostly related to technology and privacy. Very quickly, I want to mention some of the services that NCSL provides. We serve all 7,383 legislators and more than 25,000 legislative staff. We provide nonpartisan policy research, and we link legislators with each other and with experts.
We also have training and meetings for our members, and our D.C. office staff represent or speak on behalf of states before Congress. When Representative Rogers first contacted me about what other state legislatures are doing related to artificial intelligence and ethics issues, I thought about this cartoon. Doesn't it express how we sometimes think about AI? For me, especially the panel on the right: will AI cure cancer, or will it take over the world? Like all issues, it's not black and white, and it's never easy dealing with technology issues, which change so quickly.

I'll start with a couple of quick definitions. I'm going to talk about artificial intelligence as the development of computer systems to perform tasks that would normally require human intelligence. Machine learning is a branch of AI focused on building applications that learn from data in order to improve their accuracy over time.

I thought I would start with a quick view of how new AI legislation is in state legislatures. In 2010 there were no bills that used the exact phrases "artificial intelligence" or "machine learning." It's still not a huge number in 2021, but there has obviously been a steady increase over the past 10 years. It's the same in Congress. It's not a direct comparison in terms of mentions in bills, but mentions in the Congressional Record were zero in 2009 and up to 129 in 2020. So it looks like this issue is on a path of continued growth.

I'm focusing just on broad-brush, big-picture AI legislation in states, and not on the many other types of AI legislation that relate to the implementation of specific technologies, such as autonomous cars or facial recognition, where we've seen lots of legislation. I'm not covering those today, with one exception: Illinois. I'm going to talk about that last and tell you a little more about why I'll include it.
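[Editor's note: Pam's working definition of machine learning — applications that learn from data in order to improve their accuracy over time — can be illustrated with a minimal sketch. This is a hypothetical toy example in plain Python, not drawn from any system discussed in the hearing.]

```python
# A minimal illustration of "learning from data to improve accuracy":
# a tiny perceptron classifier. All data here is invented for illustration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a 2-feature binary classifier from labeled data."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = y - pred                # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1         # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(w, b, samples, labels):
    hits = sum(
        (1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0) == y
        for (x1, x2), y in zip(samples, labels)
    )
    return hits / len(labels)

# Toy, linearly separable data: label is 1 roughly when x1 + x2 > 1.
data = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1), (0.7, 0.9), (0.3, 0.3), (0.8, 0.6)]
labels = [0, 1, 0, 1, 0, 1]

w, b = train_perceptron(data, labels)
print(accuracy(w, b, data, labels))
```

Before training, the untrained weights classify only half of this toy set correctly; after a few passes over the data, accuracy reaches 100% — the "improves with data" property in Pam's definition.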
But again, big picture. I've categorized here the most common types of state-enacted AI legislation that we've seen in the past several years: legislation creating commissions or task forces; legislation focused on government use of AI; and finally funding or incentives for research on artificial intelligence.

We've seen just seven states with legislatively created commissions, and a few others created by executive order, but not very many. As I said, Washington and Vermont were the earliest to create study commissions through legislation. This also includes what some states call future-of-work task forces that address AI; those are in California, New York, Alabama and Washington as well, and then Utah's Deep Technology Talent Advisory Council, which was created just in 2020.

A couple of states have focused on how state government uses AI. As I said, a resolution in Delaware asks state government to consider the adverse effects of artificial intelligence use, and Texas passed a law requiring state agencies to consider using next-generation technologies, including artificial intelligence. Then we've got Utah and California, which have set up programs or provided funding for AI research in higher education.

Just a couple of other bills I'll mention in passing: Mississippi enacted a law this year requiring the state's K through 12 computer science curriculum to include instruction in robotics, AI and machine learning. And California in 2019 passed a law requiring reporting of any net job loss or replacement due to the use of automation, AI or other technologies, and that's required in limited circumstances.

Since we talked about your interest in a code of ethics, I thought I would go into a little more detail about California's 2018 resolution, ACR 215.
It expresses support for the Asilomar AI Principles, and the principles were included in full in the resolution. I'm not going to go into them in detail, but they were created when a group of AI researchers, economists, legal scholars, ethicists and philosophers met in Asilomar, California, to discuss principles for managing the responsible development of AI. The result of that collaboration was these 23 principles. They fall into three main areas: research, ethics and values, and longer-term issues.

Under research, they state that AI research should have a goal not of undirected intelligence but of beneficial intelligence. Funding for AI should be accompanied by funding for research on ensuring its beneficial use. And there should be a science-policy link: constructive and healthy collaboration and exchange between AI researchers and policymakers.

Under ethics and values, AI should be safe and secure throughout its operational lifetime, and verifiably so. AI should be transparent; for example, when used in judicial decision making, it should be auditable by a competent human authority. Under privacy, the principles state that people should have the right to access, manage and control the data they generate, given AI systems' power to analyze and utilize that data. The principles also, as I said, address longer-term issues; for example, risks posed by AI must be subject to planning and mitigation efforts commensurate with their expected impact.

So, as I mentioned, I'm going to talk lastly about Illinois and its Artificial Intelligence Video Interview Act. Unlike the more general kinds of laws I've mentioned so far, it addresses the use of a specific AI technology: video interview software that uses AI. Illinois is the only state with a law like this, and it's received a fair amount of attention, so I did want to include it here.
I'm not sure if you're familiar with artificial intelligence video interviews; I was not. It's a one-sided interview where a job candidate records answers to various questions on a computer. The software then analyzes characteristics including the language they use and their speech, and sometimes, at least in early iterations of this software and controversially, their facial expressions. It then provides an assessment of that applicant's suitability for a job and a measure of certain traits like dependability, emotional intelligence and cognitive ability, if those can be measured by AI. After that screening tool was used, the applicant might or might not move on to a human interview. So Illinois set up disclosure and privacy requirements around this software.

Again, this was meant to be a brief presentation, and this is the end of what I was going to cover today, but I'm happy to try to answer questions now or at any point.

Pam, this is Tim Briglin. I'm just curious about the Illinois piece of legislation. Has that just been an introduced bill, or is it something that's been enacted? I'm presuming it has been enacted.

It has been enacted.

Okay. It also looks like that was very specific to a particular use of AI. It wasn't necessarily broad in its implication, but clearly someone had focused on a concern about that particular application.

Correct. And I just wanted to mention that NCSL has quite a few resources on autonomous cars; our energy and transportation program can speak with you extensively about that, and we had a special project on connected and autonomous vehicles. Then, in terms of facial recognition, that's the other area where we're seeing lots and lots of legislative activity.
That's a form of AI, so we're looking at a number of bills that relate to law enforcement use of AI and government use of AI, those kinds of things. We wanted to focus broadly here, but again, if you have questions at some point about other areas, we're happy to respond to those.

Yeah. Pam, do you mind taking the presentation off the screen so we can see other members?

No, that's fine.

I've got a couple more questions, but we've got one hand up. Representative Sims.

This is so helpful, to get a window into what other states are thinking about, and very interesting, that data about numbers of bills and mentions. It certainly feels like this is an emerging area with lots of focus. You may not be able to answer, but do you have a sense of what's on the table right now, or for next year? You've given us a window into what's passed already in some other states, but what do you see as the leading edge of next steps? You mentioned facial recognition as a piece of that, but if you were to forecast, what's coming down the pike that we should be thinking about right now?

Yeah, that's a great question. I think we're seeing more states trying to look at how government is using AI, and at possible standards or guidelines for that. So that's one area. We are tracking legislation this year; there's only been, I think, one bill passed so far. What we're seeing out there is similar to the bills in the three categories I mentioned. There's also funding being provided for programs that may not specifically mention artificial intelligence, but they're emerging-technology research programs in higher ed, and there are appropriations going toward those as well. So, other than connected cars and facial recognition...
Facial recognition just really exploded this past year or two, so that's probably the number one area of a specific technology. Otherwise, I think we're seeing an increased look at how you can set up guidelines or principles for government use.

Pam, a follow-up question, and I'm also interested in what other state legislatures are doing. Even though this committee is called the Energy and Technology Committee, really the purview of technology is quite narrow: it's simply technology as it's used within state government. And I'm interested in where the legislative activity is in other state legislatures, because you mentioned some education aspects of how other states might be approaching this. There's clearly a commercial aspect; people are looking at this as an area for potential economic growth, as well as concerns about job issues and how labor might be affected. And clearly there are privacy concerns, which probably fall more into the civil liberties, judiciary committee world. I would anticipate that our committee is going to take the lead on this, but there are clearly areas in the education world, in the commerce committee world, in the judiciary world, and there might even be others that I'm not thinking of. Where is the action on this in other state legislatures?

We've seen, for example, Alabama: their commission on AI has focused a lot on the economy and the potential for growth with AI, along with incentives to companies or programs encouraging the use of AI. And again, I think we'll see responsible use of AI in government. I'm sure Amy can address this much better than I can, but we'll see much more use of AI within state government.
Well, I'm also thinking our vice chair of this committee was on the Ethics Committee; maybe there's a place for the Ethics Committee to be taking a look at this as well. I'm saying that tongue in cheek, but clearly there's an ethics element to this too. A couple of hands up, and then we'll bring Amy into the conversation. Representative Rogers and then Representative Yantachka.

Thank you, and thanks for coming in today. With full recognition that this may sound like fishing for compliments: I recognize that Washington and Vermont were kind of first here in having bills, and I'm curious whether you've had other states reaching out to you, interested in what Vermont is doing. Maybe I'm just looking for a compliment, but I'm also genuinely curious to see how the states' actions all interact with each other.

Thank you for the question. I have received a number of information requests from other states about the issue, and fortunately I have been able to send them Vermont's report. And I have to say, I really appreciate that that kind of great information is documented and available to share with others, because many times commissions meet and it's very difficult to find even the results of the meetings and the commission's guidance. So yes, Vermont is definitely a state that I see as a leader in this area and whose information I share with others.

Mike?

So, kind of what Chair Briglin said: I think that makes a good argument for why we should have an artificial intelligence commission, because it can look into all these diverse aspects of artificial intelligence and then recommend legislation that could go to whatever committee is appropriate for what it's trying to propose. So, just a comment.

Yep. Thanks, Mike. Amy, why don't we turn to you now.
Thank you for being here. I don't want to butcher the acronym for your organization; I believe it's the National Association of State Chief Information Officers.

That's right.

Excellent. Well, thank you for being with us and giving us a perspective on how this is being looked at amongst the technology experts in state government across the country.

Yeah, absolutely. I just want to thank Chair Briglin, Ranking Member Sherman and members of the committee. Good afternoon, and thank you for having me today. I'm Amy Glasscock, senior policy analyst for the National Association of State Chief Information Officers, or NASCIO. Founded in 1969, NASCIO represents state chief information officers and information technology executives and managers from the states, territories and the District of Columbia. NASCIO does not take positions on specific state legislation, but today I'm happy to provide the committee with some background on the use of artificial intelligence in state government.

First, a definition. I use artificial intelligence as an umbrella term for technologies like machine learning, natural language processing, robotic process automation, chatbots and digital assistants. NASCIO really started asking our members how they view AI in the last three to four years. Each year NASCIO surveys our CIO members to gain insight on their top initiatives, views and challenges on a range of topics. Starting in 2017, we asked state CIOs what they thought would be the most impactful emerging IT area in the next three to five years. In 2017 only 29% chose AI; by 2020 that percentage had more than doubled, to 61%.

In our first publication on this topic, in 2018, we acknowledged that AI was relatively new for state governments but that it held much potential. Just a couple of states were using chatbots or digital assistants. There were a few examples of states using AI for traffic management, and a handful of other small cases.
We laid out definitions of AI and how AI could be used to relieve, split up, augment or replace the work that humans do, shared some ideas for best practices, and put forth what some of the implications and challenges would be for state CIOs. The computer wants to reboot right now, so let's hope that doesn't happen.

In 2019 NASCIO partnered with strategic partners IBM and the Center for Digital Government to conduct a survey of state CIOs on their usage of AI. At that point only 14% of states reported that they were currently using artificial intelligence, with an additional 19% piloting projects. A majority of state CIOs in 2019 reported that they were still looking for the right business case for AI. Keep in mind, this was less than two years ago.

Then 2020 arrived, as did the COVID-19 pandemic. Suddenly CIOs had found the right business case for AI. With unemployment offices slammed with calls and citizens looking for information on COVID-19 restrictions or testing sites, state governments started rolling out chatbots for the first time. NASCIO published a report on this in the summer of 2020, which I've included in my materials today. By June of 2020, three quarters of states were using a chatbot on a state website. Most of those were either for unemployment insurance inquiries or general COVID-19 questions. Most of these chatbots were the first the state had ever used, and most of them were rolled out in a matter of days. This was a perfect storm of two factors: one, the chatbot technology had matured to a level where states felt comfortable using it, and two, the pandemic created a strong business case and a need for the technology.

NASCIO partnered on two more publications that were released that fall. One was a follow-up with IBM and the Center for Digital Government, interviewing CIOs about how the pandemic had changed their views on AI. The other was in partnership with EY, studying the governance of emerging technologies. I've also shared these with the committee.
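[Editor's note: the COVID-era state chatbots Amy describes were mostly FAQ-style assistants. As a hedged illustration of the basic pattern — keyword intent matching with a fallback to a human agent — here is a toy sketch; the intents, keywords and answers are invented for illustration and are not any state's actual bot.]

```python
# Toy FAQ chatbot: score each intent by keyword overlap with the message,
# answer the best-scoring intent, and escalate to a human when nothing matches.
# All intents and answer text below are hypothetical.

INTENTS = {
    "unemployment": ["unemployment", "benefits", "claim", "ui"],
    "testing": ["test", "testing", "site", "symptoms"],
    "restrictions": ["restriction", "mask", "gathering", "travel"],
}

ANSWERS = {
    "unemployment": "You can file an unemployment claim online at the labor department site.",
    "testing": "Find the nearest COVID-19 testing site using the health department locator.",
    "restrictions": "Current COVID-19 restrictions are listed on the state emergency page.",
}

def reply(message: str) -> str:
    words = message.lower().split()
    # Count how many of each intent's keywords appear in the message.
    scores = {
        intent: sum(1 for kw in kws if kw in words)
        for intent, kws in INTENTS.items()
    }
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        # No keyword matched: hand off rather than guess.
        return "I'm not sure. Let me connect you with an agent."
    return ANSWERS[best]

print(reply("How do I file a claim for unemployment"))
```

The design choice mirrors what made these bots deployable in days: no training data is required, only a curated keyword list, and anything the bot cannot match goes to the call center instead of producing a wrong answer.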
We have gathered a lot of data and statistics on this topic since 2018, so I'm going to give you a brief overview of the themes we've seen throughout these publications and our research, as well as an idea of how quickly things have changed over the last year and what may be ahead for the future.

The first point I want to make, which we've heard over and over again, is that CIOs are rightfully cautious about using new AI technologies. The common sentiment is that you don't want to go looking for a problem because you have a fancy new technology you want to try out; wait for the problem and then look for the best tool. As one CIO said, implementing technology should be about solving a business problem and meeting a need. It's very easy for us in IT to become enamored with a shiny new toy, but if it doesn't provide a better service or make somebody's life easier, it's very likely not worth doing.

Of course, there are several challenges in adopting AI in state government. The top challenge, according to state CIOs, is legacy IT infrastructure. It's challenging to apply new AI technology to a legacy system. Many state applications run on COBOL and other decades-old computer languages, and it can be really hard to find employees who still know how to write that code and can integrate it with new AI technologies at the same time.

Another challenge is cultural concerns inside the organization. Employees fear that AI will eliminate their jobs or make them irrelevant. The truth is that state governments everywhere are actually seeing a trend of more available jobs than job applicants; AI can be a tool to bridge this gap rather than a threat to employees. That said, workforce resistance to change is real. Another hurdle is the lack of necessary staff skills for AI. Many states will be looking to the private sector for AI expertise if they can't recruit the skilled workforce needed on limited state budgets.
So while the pandemic did not eliminate these challenges, the surge in successful chatbot deployment did provide an opportunity for states to look at other areas of AI adoption in the near future. NASCIO advocates a handful of best practices for states looking to embark on greater adoption of AI; I'll share four of those now.

One, develop an AI roadmap. Putting AI into your overall technology governance plan can mean the difference between an ad hoc approach full of unexpected problems and a well-designed project. In our 2020 emerging-technology governance survey with EY, only 21% of states reported that they have a formal governance structure for emerging technologies.

Two, CIOs should be prepared to talk about disruption to the workforce, address employee fears, and use their role as change managers to think ahead to how these technologies will alter, change, disrupt or improve the work that people do on a daily basis.

Three, CIOs should be involved in the procurement process for AI technologies to ensure they fit within the roadmap and conform to the security and privacy requirements necessary for safely using these tools. This also helps streamline the purchasing of solutions for multiple agencies.

And four, CIOs should consider running pilot projects to try out new technologies before launching them for broader uses. CIO offices may allocate a small percentage of the chargeback budget to an innovation fund for emerging technology, or seek a general fund appropriation to create a grant funding model where agencies can apply for assistance for AI pilots without financial risk.

The chatbot surge of 2020 was a huge leap forward for states in how they view AI, and states are now looking at other ways they may be able to leverage AI in the near future. Here are some examples from five states. Massachusetts is piloting a program using AI to help citizens complete public assistance forms accurately and more quickly.
Massachusetts is also looking to AI to assist with cybersecurity efforts, using digital intelligence to help staff members sort through network log data to separate actual threats from false positives. Georgia completed a robotic process automation pilot project for agencies to streamline new employee background checks and onboarding procedures. The governor of California last year ordered a request for information to investigate how machine learning might help officials better understand the spread of wildfires and assess fire risks. Texas IT officials plan to look at how AI and machine learning could enhance staff efficiency, optimize costs and promote innovation, as well as improve the citizen experience, expedite service delivery and ensure citizens receive accurate responses to inquiries. And finally, Utah is looking at how these technologies can help the agriculture department use an AI-embedded image recognition application to identify brands on stray livestock and return the animals to their owners.

In closing, I would just like to restate that unlike at the beginning of 2020, here in April of 2021 state technology leaders have become comfortable with low-hanging-fruit AI technology like chatbots and digital assistants. Many are developing pilot projects to investigate other uses for AI and robotic process automation. That said, AI in state government is still new, and most states have a long way to go when it comes to the governance of AI. But while challenges remain, the business case for AI crystallized in 2020, and in turn CIOs appear more committed than ever to the technology. Thank you, and I look forward to any questions.

Great. Thank you, Amy. There are a few hands up, and I've got a couple of questions as well. But let's go to Representative Chase and then Representative Yantachka.

Thank you. I was wondering if you had any information, or if you could speak for a minute, about the customer satisfaction element with all of these rollouts.
How content and well served has the public been, (a) with this technology and (b) with the rapid deployment thereof?

Yeah, it's probably hard to measure, because I think we're kind of talking about the chatbot rollouts of 2020. When it comes to the unemployment insurance, people were just so frustrated. At the same time, CIOs have seen that citizens are expecting the kind of instant service they would get from Amazon or Target or Delta Airlines. So it definitely was a huge help, and it was a boon to the call centers; it allowed them not to have to go find more employees at the last minute since, like I said, some of these were rolled out in days. I don't have specific metrics from states on satisfaction; again, there was probably a lot of frustration, but it definitely was a big help on the side of state governments.

Thank you.

Mike?

I had a question about whether any surveys were done on how people perceive the chatbots, but you answered that, so let me go on. Were there any extensions of chatbots to, say, track taxpayers who were having problems and couldn't get them resolved, and escalate them for a higher-level response?

When we did the survey of chatbots last summer, we didn't see any for that, but that doesn't mean they haven't spread out to other state websites in that way, because I know some states were rolling out more; we just haven't done another survey of them.

Okay, thank you.

Representative Sibilia and then Representative Rogers.

Oh, sorry, thanks. I have a question; I don't know if this is for you or Pam, actually. Thinking about the best means of approaching a regulatory framework here for AI, since we're kind of toward the head of the pack here, or so far...
Do we have a sense of whether it's helpful to look at, say, computerization and electrification, these kinds of industrial revolutions? Is it helpful to compare those and how things evolved? Am I making sense?

Yeah, that makes sense to me, and I don't have an answer for it, so maybe Pam can speak to regulatory frameworks.

I don't have an answer for that either; it's a great question. But when you're working with new technologies and regulating new technologies, we talk about principles: keeping in mind how quickly things change, not stifling innovation through over-regulation, and also recognizing that maybe broader language is better than language aimed at specific current technologies, because we don't know where this is going. So much is new about artificial intelligence.

That's absolutely true. We talk about that with even the procurement process. Sometimes it can take so long in state government that the technology you had been trying to procure is outdated by the time you actually get it in your hands.

I just wonder if there are any kind of breadcrumbs from adapting to those earlier industries that we can follow. That's just something I'm thinking about.

I'd be happy to see if I can find any information on that.

Yeah. Representative Rogers.

Thanks. Thank you for the whole testimony; the specific examples of where a few states are heading were helpful. I was wondering, in these states that are rolling out different models of AI, such as supporting people signing up for public assistance: where is the accountability or responsibility if something goes wrong? If someone incorrectly gets public assistance and now they have to pay it back, but the AI told them they were eligible?
If a fire is missed and somebody dies, is it specified in these states who holds responsibility when the AI makes a mistake?

That's a good question, and I don't know if it is specified in every case. But we do talk about the importance of not just letting the AI run wild. You are constantly training it, and even once it gets to a more sophisticated level where you can trust the outcomes a little better, you still need human involvement to keep checking it and making sure it's not learning things that are false. You also have to make sure that you have good data being fed into the AI; there's that old cliché, garbage in, garbage out. So there still has to be a lot of human involvement to ensure that those kinds of mistakes don't happen. And if there are huge consequences, like life or death, you want to evaluate your risk when thinking about deploying an AI and make sure that what you're using is sophisticated enough, or that you're able to take on the kind of risk for the outcomes that are possible.

That's a helpful way of thinking about it. Thank you. And do you, or does your organization, have an opinion on where the responsibility should fall? Assuming this is AI within state government, should it be whichever department or agency in state government is using the AI that holds ultimate responsibility if things go wrong? I was just curious if that's something...

That is not something that we have taken a position on.

Okay. Has it come up in your conversations?

No, but that's interesting, and I'm sure it will.

Yep. Thank you. Oh, another question.
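[Editor's note: the human-in-the-loop safeguard Amy describes — not letting the AI "run wild," and keeping a person in the decision path for consequential cases — is often implemented as a confidence threshold. The sketch below is a hypothetical illustration of that pattern; the case IDs and threshold are invented.]

```python
# Human-in-the-loop routing sketch: the system decides automatically only
# when the model's confidence is high; everything else goes to a reviewer.
# The threshold and cases below are hypothetical illustrations.

REVIEW_THRESHOLD = 0.90

def route(case_id: str, eligible: bool, confidence: float) -> str:
    """Return the disposition for one benefits-eligibility prediction."""
    if confidence >= REVIEW_THRESHOLD:
        # Confident enough to act without a person.
        return f"{case_id}: auto-{'approve' if eligible else 'deny'}"
    # Uncertain prediction: a human makes the call instead of the model.
    return f"{case_id}: sent to human reviewer"

print(route("A-100", True, 0.97))   # high confidence: automated decision
print(route("A-101", True, 0.62))   # low confidence: a person decides
```

Raising the threshold trades throughput for safety: with life-or-death consequences, as in the wildfire example, the threshold would be set so high that effectively every case gets human review.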
I was just curious if you would say a little bit more about workforce disruption. You had put that as one of your directions, and I wonder if you could give a little more direction as to what you would be looking for from states in that area.

Yeah, sure. A lot of the pushback CIOs get from their staff, the workforce, is, "But I've been doing data entry for 20 years; a robot's going to come in and take my job." What we've actually seen, in the input we've gotten from CIOs that have done some of these projects, is that the idea is to use AI to free up some of the mundane back-office work and use the humans for the work you still need human brains to do, and the result often ends up being more rewarding for the employee. A lot of what people spend time on is paperwork. Say you have a social worker, and they spend a lot of time on paperwork; if you can free up some of that time so they can actually be interacting with families instead of filling out paperwork or doing data entry, that's really useful.

In addition, as I briefly mentioned, there's a workforce problem in state government. Especially in the technology world, you are competing with the private sector, and you can't compete with those salaries. So if you have a way to get some of that work done without trying to go hire more humans, because you can't hire them anyway, it can be really useful. The National Association of State Chief Administrators did a workforce survey a couple of years ago and found that the number of jobs was increasing while the number of applicants was decreasing at the same time. That's a big problem with cybersecurity staff: getting people who are experts in cybersecurity to come work for state government is tricky.
So if you can apply AI to cybersecurity and threat detection, it can be really valuable, and that is one of the top ways state CIOs see AI being used in the future. So, Amy, something I was interested in, hearing your testimony, and I will use this to get to my question related to AI: about four or five years ago Vermont changed the management structure for technology in state government. It used to be that every silo of state government had its own technology staff. I can't remember exactly, I think it was 2017, we consolidated that into one particular agency, the Agency of Digital Services, which is basically an umbrella agency that manages technology across state government and assigns one or more folks from ADS to different parts of state government, but it essentially feeds up into what is, in effect, a chief technology officer for the state. I'm not sure how common that management structure is across the country, whether we're catching up or whether states are moving to that model right now, but it relates to my question of where some of these questions are best answered and managed in state government. Clearly, CTOs and CIOs are the chief in-government experts on technology, and yet a lot of the questions related to this type of technology, AI in particular, may really be beyond the grasp of a CIO or a CTO. They might be very specific to the department or agency in which the technology is deployed, or they may involve issues related to civil liberties or privacy or other things, which we kind of hope our technology officers are expert in, but maybe they're not at all. So generally my question is whether you have thoughts on the extent to which some of these questions we're wrestling with around AI are best managed within a CTO's organization. 
We're talking about a commission here, which would have a much broader scope of capabilities to deal with these issues, but is there anything CIOs are thinking about across the country as to where to actually wrestle with these issues and questions? Yeah, I have a lot of thoughts about that. First I'll talk about the consolidation. That's definitely a trend that's been happening in state government, for maybe 20 years or so, and more common in the last 10. So Vermont's right on track there. From NASCIO's perspective, we find it really useful to have that structure, generally. We've also seen that the CIO role has changed over the last 10, 15, 20 years. The CIO or CTO role used to be that of a provider of technology services: an agency has a need, and the CIO's office provides a solution, drawing on a lot of in-house expertise, solutions, and programs. What we've seen is the CIO changing from that into more of a broker of services, really going out to the vendor relationships they have created and finding the right private-sector solution for agencies, having a good idea of the best solution for an agency's problem or issue, or maybe purchasing something that five different agencies can use. That seems to work really well too. Again, there's that workforce shortage issue, so you have fewer people who can create things from scratch, and it doesn't always make the most sense, staff-time-wise, when there's a really good solution out there that you can purchase from a vendor. So when it does come to those tough questions that maybe the CIO doesn't know how to answer from a technology perspective, or that fall outside their area of expertise, we see that as okay. 
They have developed relationships with others who can help them answer those questions and get the agencies what they need. So that's how things are going around the country. Great. Well, thank you, Amy and Pam. I don't see any other hands up in the queue. And Amy, the information you submitted is posted to our website, and Pam's presentation is also posted on our website. I really appreciate that; it's great information for us to be able to refer to and follow up on. If there aren't any other questions, we really appreciate your time this afternoon and your being with us. We're going to continue on to another topic now, and you're welcome to listen to some energy regulatory cleanup bill work that we're doing, or, if you have better things to do, you're welcome not to listen to that as well. Thanks for being here. Thanks for the invitation; electric utility regulatory work was a past life, so I think I will enjoy the rest of my day. I bet there's life after energy regulatory work. Yes. Thank you. All right, take care. I was going to say, Tim, does that count for us too? Yeah, exactly.