 I guess we'll get started and as people trickle in, they trickle in. We're here today to speak with transportation, technology, manufacturing, and construction industry speakers to hear what they have to say about artificial intelligence. There's some new faces joining the group. So maybe we go around the room real quickly and introduce ourselves. I'm Brian Breslin. I'm a civil engineer working primarily on transportation-related engineering with a private consulting firm called Dubois & King. I'm Guy Rouelle. I'm with Dubois & King as well. I've worked with Mr. Segale for quite a few years; I was at the Agency of Transportation as the aeronautics administrator. I'm Donna Rizzo, professor of civil and environmental engineering at the University of Vermont. I've worked with AI on all sorts of applications, and I'm part of this task force. I'm Igor Ossowski. I'm the CTO of Varisemic, a spin-off that came out of GlobalFoundries. And my focus is AI. I'm Nick Grimley. I'm the director of entrepreneurship and technology commercialization for the Agency of Commerce and Community Development. My name is Milo Cress. I'm a high school senior and artificial intelligence enthusiast. Jill Charbonneau. I'm president of the Vermont State Labor Council, AFL-CIO. Joe Segale, Vermont Agency of Transportation, Policy, Planning, and Research. I'm John Dooley. I'm a member of the task force. I'm a retired justice of the Supreme Court. Matt Slyzen, Walter Media. Steve Law, CDRRC. Oh, thank you. I'm a staff member. Do we have a quorum? I don't think so; we'd need seven, and we have six. John was web-exing in, wasn't he? Listen, obviously, if a quorum is questionable, can we take an action? This is the first time this has happened.
So can we make, what are the rules associated with being able to make decisions? Like acceptance of minutes? I cannot do that anymore. OK. Well, we can go on to. We might have one here. We might have John, you said. What's that? You said John was planning to join us. He's definitely web-exing. Looks like he's there. Hey, John, are you there? Hey, John, are you there? Hmm. I heard him faintly. No? There's something else. Oh, there it is. He's there. I think I can hear him. There's a crackle, but there's also another noise. I don't know if he dialed in with both the phone and the computer. Yeah, that's what we're doing. Oh, yeah, I'm going to shut this off. I have not had the Skype for Business training yet. So is he calling in or is he doing something different? He should have. So he was calling in, using WebEx. Do you feel he's there? If he uses WebEx, he has to be there. Yeah, that's what I thought. I guess until he shows up, when we get a quorum, I think we'll leave the acceptance of minutes until someone else shows up, or we save it until the next meeting. So I am technologically inept. But if he's joining us via a computer, is that what's happening? So how, if we don't have a computer on for him to join us, how would he join us? He'll come in on the phone. Thank you. Yeah, if he's there, send an email. I guess we'll save the acceptance of minutes, again, until someone else shows up, or next meeting. Next item on the agenda would be the public comment period, if there is any. Would anybody like to comment? Just for this task force. Hearing none, I guess going on to the next item on the list would be an overview of AI by Milo. Good morning. As you know, my name's Milo Cress. My computer isn't working. I had a visual presentation planned out. But that's fine. We have a whiteboard here.
I guess the first question that our committee has considered, and the first question that really came to my mind when I was learning about artificial intelligence, is how do you define it? And the definition that I was working with as I created this presentation was: some computer system which displays traits that we would traditionally consider human, such as the ability to change its internal state to optimize itself to perform a certain task. So my understanding of AI is that it largely arose from the statistical model, where you create some kind of model where the model is basically a function, and the output of that function for a given input is some meaningful number or piece of data, something that can predict a feature of a data set without actually knowing what the exact value for that feature is. For example, if you're modeling, and we're having technical difficulties on multiple levels today, if you're trying to find a model for these data points, you could use linear regression, as I've learned about in statistics class, to just draw a line here. And then we have some kind of test point to see if our model is correct. If it's right here, say, then our model has a prediction to go with your point there. And so that's a good model. But the thing is, as the volume of data that we have increases, the amount of variation and noise in that data increases. And as the complexity of the rules that create that data increases, it becomes less and less possible to use traditional modeling systems where some programmer or statistician chooses the model itself. We have to create models which are able to build themselves, or at least create some initial model and some function for updating the model over time. Hey, Brian. How are we doing? So that's where artificial intelligence comes in as we know it today.
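A minimal sketch of the kind of model Milo is describing here, assuming nothing about the actual points drawn on the whiteboard: fit a line to a few invented data points by least squares, then check the fitted model against a held-out test input.

```python
# Fit y = a*x + b by least squares to a few invented points,
# then use the fitted line to predict at a held-out input.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]     # noisy samples of roughly y = 2x
a, b = fit_line(xs, ys)
print(a * 5.0 + b)            # prediction at the test input x = 5
```

Here the statistician has chosen the model family (a straight line) in advance; the contrast Milo draws next is with systems that choose and update the model themselves.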
Most models, or many AI models, are thought of as neural networks, where you have neurons, basically, and they're connected to each other in a network structure. So say you're trying to calculate the value of a house given a few parameters. Say you know the number of windows in the house and you know the square footage of the house. And you're given those two things and you connect them to your neural network. You connect them to each neuron. Now, a simple model of this might be to just give a certain amount of weight to each input feature. Like, if a house has one more window, then we can say it costs $10,000 more. That's a pretty simple model and it might not work out very well. You'd have to test it on some kind of data set. But when you're using a neural network, you randomly assign weights. And you connect this network as fully as you can. The exact connection features of a network vary depending on what you're trying to get. But the idea is that eventually, this will boil down into a single value. And that value is some kind of prediction for the value of a house. And once you get that value, you can see if it matches your data set. You generally start with a training data set, which is labeled. If you have a house with three windows and 1,500 square feet and you know that that house is worth $250,000, then you can add that to your training data set. And then when this model is run through completely and it produces some kind of estimate, say $150,000, you can then compare that to the actual value and then get some kind of idea for how you're going to update each neuron in the network, using pretty complex calculus and the backpropagation algorithm, which was developed fairly recently in the history of machine learning. Yeah. And a lot of times with AI systems, the two abilities are often thought of as being the same: a model which is able to predict and to learn.
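The house-price example above can be sketched as a single "neuron" with randomly initialized weights, trained by the compare-and-update loop Milo describes. All the numbers here (the starting weights, the learning rate, the $250,000 label) are illustrative, and a real network would have many neurons and nonlinearities; this shows only the forward pass and the gradient update on one labeled example.

```python
import random

random.seed(0)

# One neuron: price = w_windows * windows + w_sqft * sqft + bias.
# Weights start random, exactly as in the description above.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 1e-7  # learning rate; tiny because square footage is in the thousands

# One labeled training example: 3 windows, 1,500 sq ft -> $250,000.
x, target = [3.0, 1500.0], 250_000.0

for step in range(2000):
    pred = w[0] * x[0] + w[1] * x[1] + b   # forward pass: the prediction
    err = pred - target                    # compare estimate to the label
    # Backward pass: nudge each weight against its error gradient.
    w[0] -= lr * err * x[0]
    w[1] -= lr * err * x[1]
    b    -= lr * err

# After training, the prediction sits within a dollar of the label.
print(abs(w[0] * x[0] + w[1] * x[1] + b - target) < 1.0)  # prints True
```

With one example the neuron simply memorizes the label; the point of a large labeled training set is that the same update rule, run over many houses, is forced to find weights that generalize.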
Because those are some of the key features of neural networks. But those are two distinct abilities. When a neural network runs in the forward direction, that's just a prediction. But the power of it comes when that prediction is coupled with an algorithm that's able to update the model to be a little bit more accurate each time. And the ability to do that is what makes these algorithms able to consume billions of pieces of data and spit out a model which is highly tuned to a specific data set or to a generalized form of data. Anyway, this was a pretty fascinating thing for me to figure out, because the ability of pretty simple models to model pretty complex data was surprising to me. But it makes sense when we think about the laws of nature which we have described, which model how particles behave and how elements of a population behave. Those models are generally fairly simple. E equals mc squared. F equals ma. All of those models are easily expressed within a single line. But the thing is, those were discovered by people. What we're doing is giving algorithms the ability to discover those models for themselves. And in doing so, create valuable prediction algorithms for future behaviors of systems. So that's my understanding of how AI works at a basic level. But this is generally to solve the problem of classification. Given certain inputs, what's the output? Given certain known variables, how can we use that to calculate an unknown variable? And that doesn't translate to some of the problems that we're seeing AI applied to right now. So the question is, how do you transform some kind of problem of classification into a problem of some kind of actor-state model, where you're driving a driverless car and you're trying to optimize your reward signal and minimize your punishment signal, if you have one of those? And what you see is that you can apply the same basic algorithm in that domain.
You can have, say, some actor; maybe you have Pac-Man, he's moving around trying to eat the dots. And each time Pac-Man eats a little dot, he gets a reward signal. And the goal of this algorithm will be: given a certain action, which will be encoded here, what would the predicted reward be? And it's trained over time, as it makes those actions, to figure out what the reward over time is of a given action. And then, using that, going backwards through the algorithm and trying to find the optimal action to maximize your reward. Neural networks are really powerful because they don't just run in one direction. They don't just predict the value; they update their internal state. And also, they can give you meaningful outputs based on a given input and also meaningful inputs based on a given output. So that's basically my understanding of how neural networks work under the hood and how I see AI being applied in domains such as classification and also reward optimization. There are many other domains. Yeah. I consider this a continuously updated algorithm. Would you say that? Yeah. I would say that's true. There are ways where you can batch the updates together. It's more efficient, say, to run the house model through 10 inputs and then calculate the total error and then backpropagate with the total error, as opposed to each individual error. But yeah, it can be continuously updated, especially in actor-state models. That's the best way to go. So typically, it usually operates in two modes, training or inference. So training means that, just like Milo described it, you're constantly updating the model. And then once you're done, you deploy your solution, your weights on those edges, to the field. So a self-driving car, for example, would not be learning while driving. It would be just implementing whatever solution was deployed in the field. But that training model will continue to learn.
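The reward-signal idea above can be sketched with tabular Q-learning, a simpler relative of the neural-network version Milo describes, on a toy one-dimensional "Pac-Man" corridor. The corridor length, reward placement, and learning constants are all invented for illustration.

```python
import random

random.seed(1)

N = 5                      # corridor cells 0..4; a dot sits at cell 4
ACTIONS = (-1, +1)         # move left or move right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(200):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly take the action with the best predicted reward.
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0   # reward signal for eating the dot
        # Nudge the predicted long-run reward toward what was just observed.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, moving toward the dot looks better than moving away,
# everywhere in the corridor.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(N - 1)))  # prints True
```

The table here plays the role the network plays in Milo's description: it maps a state and action to a predicted reward, and the update rule tunes that prediction from experience.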
And then once it's determined what the next solution should be, it would deploy that solution to the field. But just to be clear, I think anything that's deployed in the field typically doesn't learn, because that would be too dangerous, because you would be learning different things and all the products would behave differently. My understanding is the data that it gathers as self-driving cars, for example, are driving around, can then be used to tweak the model in some development environment. And then those tweaks based on that data can later be deployed. And it's debatable whether that counts as continuous updating, because it's definitely not online as it's being updated. But the data that's generated through the actor-state model can then be used as feedback to train the algorithm. Are there any examples of AI that currently is learning in real time and adjusting its behavior in real time and not being tested first? None that I know of. Is that something that could potentially happen someday? Well, I can see it happening right now. It just depends on how you update your algorithm and whether you decide to update it online or in some kind of development environment. So, now that we're at the end of the description of how AI works, let's go back to the definition. We were kind of struggling with the idea of a relatively short, relatively simple definition of what AI is. Is that possible, or do we have to do this? I think so. I think it's possible. Milo? Milo? Hello, how are you? Can you hear me? Is your mic on? Do you have a PowerPoint or something that you're presenting? Because I don't see anything. Well, that's because, actually, the computer I'm working on, there are technical difficulties. I wasn't able to get it to connect to the TV. I did get the presentation that you sent me, though. And it was very informative. Oh, no, I don't have to use it.
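The training-versus-inference split described above can be sketched as two modes of one model: weights update in the lab, then are frozen for deployment so field behavior stays fixed. The class and the `frozen` flag are purely illustrative, not any real deployment API.

```python
class Model:
    """Tiny one-weight model, y = w * x, to illustrate the two modes."""

    def __init__(self):
        self.w = 0.0
        self.frozen = False   # inference-only once deployed

    def predict(self, x):
        return self.w * x     # forward pass only; never changes state

    def train_step(self, x, y, lr=0.1):
        if self.frozen:
            raise RuntimeError("deployed model does not learn in the field")
        self.w -= lr * (self.predict(x) - y) * x   # gradient step

model = Model()
for _ in range(100):              # training mode, in the lab
    model.train_step(2.0, 6.0)    # learn y = 3x from one example
model.frozen = True               # deploy: freeze the weights

print(round(model.predict(2.0), 3))  # prints 6.0; field behavior is fixed
```

Field data could still be logged and used to train a fresh copy of the model back in the development environment, matching the "tweak, then redeploy" cycle described in the discussion.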
I'm just thinking about the answer to the question about real-time learning. I would say things like Siri and Alexa are examples of things that have sort of a continuous evolution based on people's asking habits and stuff like that. I would say that that's real-time, untested learning. I mean, it's tested in some sense, but not from a safety standpoint. But I don't think those are safety-specific things. That's an example of something that is real-time learning. Do you buy that? That makes sense, yeah. I think so. Also, the auto-correction on your keyboards often uses neural networks, which are updated on the fly based on their reward signal. Yeah, does it also pick up accents, John? So accents of different speakers would be something that you would maybe deploy at the edge. On these edge devices. Exactly. So, Milo, I just have one more question. Because part of our work is to be looking at recommendations down the road. Currently, are there any limitations on deploying AI? You were just talking about how the developers are imposing a limit. That they're gathering data, they're testing it, and then they're deploying what the machine's learned. And that sounds like a self-imposed limit. Is the only limit right now self-imposed limits? Like, in other words, can people freely deploy the technology without testing it in a lab first? There are other people in this room who could better speak to the reality of various algorithm implementations. But my understanding is that it depends on what you state your responsibility is to your client, what the effect of some kind of malfunction in your algorithm would be, and how you describe to your client how your algorithm works.
If you make it clear that your algorithm is updated on the fly with data that it gathers, and that its behavior could change from time to time based on inputs that it receives, then, as I said, other people could better speak to the reality of that. I think it's limited by the companies that are deploying it, because this is just hardware and software, right? Right now, nobody really, I don't think anyone has insight into Microsoft's software or anything else. It's hardware, really. So I'll go ahead. Yeah, as long as, for example, it's something like Alexa. I don't think anybody wants to regulate what Alexa says to you. Of course, Alexa might defame you, I assume. If you got to that point, you would just be using pre-existing legal principles to say, wait, well, obviously, you can't do that. But the minute you get into applications that might invade privacy, particularly, I'd say that's a very, very sensitive area. Or give human control to something else, like an automated vehicle. I'm trying to figure out whether autonomous or automated. An autonomous vehicle, which will not be aiding human driving but will be doing the driving itself. Then you've got a particular choice that you need to talk about regulating. If that's the way you choose to go, of course. And all of those things are in the future. This is where technology outstrips the law very much at this point. Also, one thing that I would add about all of this is that traditional models are designed to be transparent. In statistical modeling, like something that a statistician would come up with themselves, it's designed so that any statistician who reads it can understand what variables are at play. The example I gave in the very beginning, of saying that a house's value in general increases $10,000 per each window that's added, or $2,000 per 1,000 square feet that are added. That is designed to be very understandable to humans.
But when models are able to update themselves, like in neural networks, it's a lot harder to understand what's happening under the hood. Even someone who knows how a neural network works doesn't necessarily know why a certain neuron has the value it does. And when neurons are trained in a neural network, they're often initialized with completely random values and then are tuned to their optimal values. But those optimal values can change depending on what the initial random value is, which makes it very hard to know what a neural network is doing and possibly debug potential biases that it has. And I think that's another important thing to explore, how transparency figures into this, because that's something that companies with really high risk would need to be able to explain to people whose money's in their hands, for example: exactly how do those algorithms work, and what are they putting in place? Did we answer that question, John's question, about a sentence definition? You did say in the very beginning you had a one-sentence definition. Do you want to repeat that? The sentence definition that I worked with is: an algorithm which is able to display traits which are traditionally considered human, like the ability to learn or update its internal state to optimize itself to a given task. So, can we make a definition without using the word algorithm? If you walk into a legislative committee, which is what we're reporting to, right, and you said here's what AI is, would everybody in the room immediately understand that? I used that chart, you know, the one that I have of four boxes, you know, artificial intelligence, and I had some, I thought, pretty jargon-free sentences there. Did you try that? As I said, I don't have my computer up right now. Is it on Slack, John? No, but it can be. Or just email. Give me a second.
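Milo's point about random initialization making networks hard to inspect can be sketched with a deliberately over-parameterized model, y = (w1 + w2) * x: two different random starts learn the same overall behavior, but the individual "neuron" values they settle on differ, so inspecting any one weight tells you little. The setup (single example x=1 with label 4, learning rate, seeds) is invented for illustration.

```python
import random

def train(seed):
    """Fit y = (w1 + w2) * x to the single example x=1 -> y=4."""
    rng = random.Random(seed)
    w1, w2 = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(200):
        err = (w1 + w2) * 1.0 - 4.0   # forward pass vs. the label
        w1 -= 0.1 * err               # both weights share the gradient,
        w2 -= 0.1 * err               # so their initial random gap persists
    return w1, w2

a1, a2 = train(seed=0)
b1, b2 = train(seed=1)
# Same learned behavior from either start...
print(round(a1 + a2, 3), round(b1 + b2, 3))   # prints 4.0 4.0
# ...but different internal weights, set by the random initialization.
print(round(a1, 3) == round(b1, 3))           # prints False
```

Both runs predict identically, yet asking "why does this weight have this value" has no stable answer; scaled up to millions of weights, that is the transparency problem raised above.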
Also, John, the information from the Montreal Conference that you put on Slack does include a definition in the beginning. Yeah, that might be... And I thought it was a pretty jargon-free, I'm using that term, you know, just something that was really understandable. But yet, like Milo's, yours was something like: a task normally assumed to require human intelligence that now is capable of being done without human involvement. Let me, I'm gonna put this on Slack right now, but I don't know if it's necessarily better; it sounds to me like you put the right thing in there. I have it right now. I think if I can plug in. Oh, okay, so yeah, I'll just take this. Okay, so I've got it. Oh, sure. And I've always found that a group that's talking about technology has the least ability to use it. So you have the Webster's dictionary definition: a branch of computer science dealing with the simulation of intelligent behavior in computers. There it is. Okay, let's go. Do you need your mic on? I think I've put a little link to it. Keep going, one more. If it's there. Is that right? That's right. Yeah, that must be it. Can you see it from my phone? Yeah, we're all figuring it out right now. Yeah. So it's showing the evolution of AI, blue boxes: artificial intelligence down to machine learning, neural networks, deep learning. So: programs with the ability to learn and reason like humans.
That's artificial intelligence: programs that learn and reason like humans, but that could be explicitly programmed. Machine learning is algorithms that learn without needing to be explicitly programmed. So that's distinct from rules-based AI. Neural networks are machine learning systems that are based, roughly, on the sort of neuron model, which I think you probably already talked about, Milo, which is sort of like brain structures. And deep learning is just more of the same, with hardware acceleration so they can work on huge data. That's the best I've ever been able to find. Did those work? Sure. I think we've replaced the word algorithm with the word program. Yes, I see. Deep learning, it's true, works on massive amounts of data, but everybody's working to learn with less and less data. I think it's the IBM definition that's been put in; I mean, the top definition makes the most sense to me. Oh, yeah, that's possible. Or you can replace programs with methods or recipes, effectively. It is an approach that really doesn't involve humans programming how the data will be processed. It's actually the machines figuring out how the data is processed based on examples. Makes sense, because I'm really interested in that. It seems like machine learning, neural networks, and deep learning are more the methodologies that result in that outcome, and maybe they all differ a little bit in the level of precision with which you're able to do that, learn and reason and... And I think the legislature should see it in the broad sense. I don't think you guys should focus on specific implementations or architectures or methodologies. It should be the general, all-encompassing approach. Is that a useful way to look at it? Yeah. John, you still there? Yeah. I'm sorry, did you say something? I think we're all good with this for now. Yeah. Are you okay with that? Good. I think it's helpful.
I mean, does it seem helpful that the top definition has passed? So this is now on Slack, and this is what you're gonna use, Milo, you're gonna use this, or point it at your own? Oh no, I was going to use this one, actually. Okay, so now it's on Slack and we can go and consult it, and maybe by next meeting we'll have a candidate working definition, I guess. A question I have about this is not so much the definition, but are there physical constraints now, in terms of our processing power and computing power and that sort of thing, standing in the way of fully realizing that definition? So maybe it can sort of reason and learn like humans, but not quite yet? I mean, we do image recognition now. Machines can do image recognition better than humans can. So we already meet these types of definitions, right? So image recognition from deep neural networks, which is really the subset of these, is already doing better at image recognition than humans: it's at about 3% error versus humans' ability to recognize specific images at about 5% error, something in that vicinity. I think one technological development which will aid the training of neural networks and AI in general is quantum computers, because they have the ability to solve optimization problems very fast. Real time, very fast. Yeah, yeah, very quickly. Faster than other algorithms, right. John, I think the answer is yes. Yeah, I think that's going to be very important in the future. I just do want to say that while there's a lot of promise, nobody's been able to demonstrate that yet, but I believe that once that happens, I think there will be other breakthroughs too in our ability to accelerate these things within a certain cost and power envelope. But it's advancing so fast. I mean, did you see the charts I put out there about how fast AI is advancing in terms of both technology and interest? I saw that, but I wasn't able to print it. No problem. Yeah, I saw it on Slack.
You had just put it on recently; it was the last thing you put on. So, this is kind of important today. But if you actually look at it, in the chart that I just posted, one thing I would call your attention to, maybe on the fourth page, is a chart that might be a little cryptic. It didn't even come out right. But what that shows is that there was kind of a big bang in 2012. There are some canonical problems that people solve at every technology generation just to gauge progress in algorithms and computing, computers, et cetera. And there's one that's called ImageNet, which is characterizing and labeling about a million images from the internet. Just: that's a cat, that's a dog, that's an explosion, whatever. And you see that the error percentage from 2010 to 2011 went down only a little. 2012 was when people started applying GPUs, graphics processors, for acceleration. And then you see how quickly the error percentage has dropped. And that green bar is sort of the error rate of a human in his or her ability to actually identify the same pictures. And so you see this precipitous drop that started in 2012, and it continues to go down, meaning that the accuracy of the matching is now superhuman, and the speed is outrageously superhuman. And the kind of advance you're talking about with quantum will be yet another advance in both the accuracy and the timing, because the two things are somewhat trade-offable. The more computing you can put into something, the more accurate it can be in the same, in a tolerable, amount of time. So that's kind of interesting. And superimposed on that, the red line is just kind of my lame way of showing how interest has scaled. The text didn't come out; if you download the PowerPoint you'll be able to see some of the other text on the screen. That's how many people enrolled in the beginning AI course at MIT.
So if you download the PowerPoint, you'll actually be able to see that from 300 people in 2013, it's now, counting remote attendance, at almost 1,000 people. With respect to the agenda and speakers, could we move on to the speakers portion of this, or are we still discussing? Oh, yeah. I just, can I ask a couple of questions? Or is there anything else left to discuss? I'm willing to discuss anything at a later date. John, are you okay with that? Yeah. Is there anything else you'd like to add? All right. Okay, Joe. Okay, so before I start talking about what I'm gonna do, I just wanna say a little bit about the subcommittee that has pulled together all the witnesses today, just to remind everybody: this is the Transportation, Technology, Manufacturing, Construction and Labor Subcommittee. So we have quite a few witnesses. The first part is we're gonna be talking about transportation. So me talking about automated vehicles, people talking about how AI is being used in transportation planning and analysis, and Guy Rouelle will be talking about AI and how it relates to aviation. And then we'll break. And the second part is really about the manufacturing and construction side of things. Igor's one of our speakers, and Tom Kennedy, is he here yet? No. And Joe Alton, he's still planning on being here. He's an engineer in construction, and Jill will speak from a labor perspective. So with that, that's about it, we'll get started. Hey John, I don't know how to show you my, I do actually have a little PowerPoint here, because it helps me, it keeps me organized, and it's a topic that's just... Okay. I have a good amount of it. Okay. No problem. So the first thing I just wanna share is, like, this is generally the type of equipment that's on a self-driving vehicle.
So it has radar that can identify how close other vehicles are. Lidar, which, like it says, puts out a point cloud, and you kind of get a 3D view of the world around it. And there's the ability to connect with other vehicles and other infrastructure through onboard dedicated short-range communication. Then there's software, which is really where the AI is, right? And the more I've learned about this, and what the task force, and I have to say what we're trying to do, actually a lot of what I'm thinking about is more on the policy level, taking it for granted that the AI will be able to drive the car. And so it's kind of jumping to that highest level of AI. And I think the remarkable challenge that we might have is that if that AI has to be computed in the cloud, so to speak, if it can't really be done on board, that can be a challenge for us just given our limited cell coverage, right? It's not as simple as that. If it is capable, that's kind of what I was asking the question about: what are the constraints to being able to run through these gigantic neural networks that say, oh, that's a cat in the middle of the road, I better slow down, you know? If that can't really be done on board, that's going to be a challenge for us here. It must be done on board. I don't think there's an option on that. It must be done on board. I actually drove a car from Williston to here, from the entrance of the Williston highway to here. I didn't really have to touch the car. It would drive all the way through, and that's, like, current cars. That was the Tesla? Yeah, the Tesla, yeah. So you don't need the cell coverage or anything to actually have that. If you're going to do more, the cat in the road, I understand. But if you're going to choose a route based on getting information in that tells you what is blocked or what might have obstacles or whatever, you're going to end up somehow having to connect up and get it.
And that's only the simplest of information you might want to know in order to decide how to drive to get there. There's sort of the tactical, dynamic driving task elements of driving self-driving cars: steering, moving forward, braking, interpreting the environment around you, acting accordingly. And then there's the more strategic type of decision, which is, I want to go from Williston to Montpelier. And those decisions are still going to be sort of left to the human, right? I want to go from here to there. And even now, you can use navigation and it's not totally reliable, right? We still have trucks that go over Smugglers' Notch in the wintertime, for example. But I think that that is something that you can mitigate. Even if they choose the incorrect route, as long as they're traveling safely on the incorrect route, they're just like humans. That's always the premise for self-driving cars: they don't need any connectivity to anything else to actually operate safely. And that's really helpful to me, because I've come to say that in order to really work in Vermont, that's going to have to be the case. I think, yeah. Yeah, isn't it sort of, I think I heard someone say the same thing, sort of nuanced. I mean, there are mission-critical things that absolutely have to be local and have to be able to work, like collision avoidance and lane changes and things like that, that you can't have a round-trip, you can't have a cloud connection for. But then there are things like navigation that are less mission-critical and can tolerate a higher latency. So you will have kind of a hierarchy there, right?
And you might have, you know, one thing that's very interesting, for example, in recent automated driving, or highly assisted driving, is that when your ABS system goes off, when your car detects slippery roads, it could network back and share with nearby cars to say, you don't have to wait until you start picking this up: there's ice up ahead. So there are different latencies of mission-critical and less mission-critical functions that involve certain types of connectivity. And there are a lot of people saying that with 5G, everything will be able to be in the cloud. I will go on record as saying that'll never happen. You never really can rely on anything non-local. That's just me being grouchy. There was a book review in the New York Times two weeks ago, I think, the Sunday New York Times, that I picked up on. I'll put the name on Slack. It's really quite good. And one of the things it emphasizes is that for full automation, cars need to talk to each other. They have to do things in relation to what other things are doing. But of course that creates all this data about where things are located, which has its own set of problems. But that is an essential part of the AI development at this point. I mean, we currently have, Ford is working with Domino's, they deliver pizza in fully self-driving cars right now. There's nobody in the car; they deliver the pizza, the car pulls into your driveway, you enter a code and you get your pizza. So we already have that, despite the fact that there's no 5G deployed anywhere at this point. So, this next slide is just showing five different levels of automation. And levels one and two are really pretty simple technology, and the human is clearly responsible and in control. Things like adaptive cruise control are an example of that.
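The "different latencies" idea just described can be sketched as a simple budget check: mission-critical functions stay local, while functions with looser budgets can tolerate a network round trip. The function names and millisecond budgets below are illustrative assumptions, not standards.

```python
# Sketch of latency tiers for vehicle functions. Budgets are assumed
# for illustration only.

LATENCY_BUDGET_MS = {
    "collision_avoidance": 10,   # mission-critical: must be local
    "lane_keeping": 20,          # mission-critical: must be local
    "ice_ahead_warning": 2000,   # the V2V "ice up ahead" case: cloud OK
    "route_navigation": 10000,   # strategic: cloud is fine
}

def can_offload(function, network_round_trip_ms):
    """Offload to the network only if the round trip fits the budget."""
    return network_round_trip_ms <= LATENCY_BUDGET_MS[function]

# With a ~150 ms cellular round trip, navigation can go to the cloud
# but collision avoidance cannot.
print(can_offload("route_navigation", 150), can_offload("collision_avoidance", 150))
```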
The Tesla Autopilot is kind of on the edge. This shows steps, but it's really more of a continuum than all of a sudden jumping up to the next level. And these are really just for guidance purposes, not super specific, but the Tesla Autopilot is more of a level two, because you really can't completely let it go. You need to stay aware and be ready to take over control. And that gets into level three: the definition of level three is that the vehicle can do all the dynamic driving tasks, but the human always has to be ready to take control. And that's the level that gives regulators one of the biggest challenges, because we know humans lose their attention easily, even with cruise control. And so there was a recent news story about a driver who was under the influence, using a Tesla, and he fell asleep and the vehicle kept moving, and the only way the police could pull him over was to get out in front of him and somehow slow him down, right? And so it's possible that the technology let him drop his guard enough that he fell asleep. And at the same time, it may have saved his life too. At least the police knew enough to just get in front of him and start slowing down, and the car, with collision avoidance, just stopped. And then they were banging on the window when the guy woke up. He really didn't even know. My wife was asking me, what are they charging him with? I guess it's DWI, right? Yeah, but that's the challenge, I think: there's still that high level of human interaction required. Level four is the same, the vehicle has the ability to do all the dynamic driving tasks, but it's generally designed to operate within what they call an operational design domain. So it could be on a college campus, it could be certain times of day, certain weather conditions, certain facilities, like just on the interstate.
And what distinguishes it from level three is that it can get into a minimal risk condition. So if the human does not respond, the car has the ability to pull over or get itself to a safe stop, and in the example I just gave you, the Tesla did not have the ability to do that. Level five is completely autonomous, so no steering wheel, no brakes, no human driver. So how far are we technologically from level five? Not talking about all the policy questions or all the human questions, just whether the technology can produce level five. We already have them deployed delivering pizza, as I mentioned. I'm not sure it's total, do you want to take this? Go ahead. Yeah, no, we're 20 years away and more. Well, let me answer the one question directly, which is, in a defined environment, we have that today. So in situations like you describe, you geofence and you say, within this area we're going to operate an autonomous vehicle, that works, we can do that. But as soon as we get outside of those constraints, either bad weather or roads that are unusual in some characteristic, all those things, those edge problems aren't solved. So we'll get 90% of the way there in the next handful of years. That last 10%, though, who knows what's involved. And I suspect nobody really knows. So yeah, that's my answer. The geofencing, I totally agree. I think all the fully autonomous cars right now are geofenced; you cannot call them level five. More level four, like you're saying, limited to certain areas, or they're level five but within a constraint. So let me give you an example of what you said. I'm sorry, yeah, I'm just trying to answer this question, then I'll shut up and you can go back to your thing. In Florida, The Villages, and I guess those are sort of very, very large, I don't know, old age homes, whatever the right terminology is for that. Senior living.
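The levels just walked through can be condensed into a small lookup table. The wording below paraphrases the discussion (SAE J3016 is the authoritative source); the key distinction called out is that level 4 can reach a minimal risk condition on its own, while level 3 still needs a ready human.

```python
# Automation levels as described above, paraphrased for illustration.

SAE_LEVELS = {
    1: ("driver assistance", "human drives; single assist, e.g. adaptive cruise"),
    2: ("partial automation", "human must monitor at all times (Tesla Autopilot)"),
    3: ("conditional automation", "car does the dynamic driving tasks; human "
        "must be ready to take control"),
    4: ("high automation", "car drives within an operational design domain and "
        "can reach a minimal risk condition if the human does not respond"),
    5: ("full automation", "no steering wheel, no brakes, no human driver"),
}

def needs_ready_human(level):
    """Through level 3, a human must be ready to intervene."""
    return level <= 3

print(needs_ready_human(3), needs_ready_human(4))
```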
Senior living, thank you. It's an entire community, right? Very large, with a roadway network within it, but fairly constrained, right? You know where the roads are, you know where the people are, you know what's going on, you can do all sorts of things. Right now, we have autonomous vehicles operating pretty successfully in that environment, and it's a perfect environment, for all the reasons you can imagine. So in that area, what I would say is it's level five, but within a very constrained environment, right? So, can you be at level five or level four without everyone being at level four and level five? In other words, can you have some level zeros interacting with autonomous vehicles at four and five without problems? You can. You don't gain quite the level of optimization that you do as you start to approach the connected vehicles you're referring to, where you can get a lot more advantage, but you can operate in that environment. I want to ask a simple question. When you said you had a fully automated ride, you were talking about being in a level two or three, but it was full. So that's a highway statement, right? So on the highway, the car can recognize markings well. That's right. If it's snowing, it might basically tell you, you've got to take over, I'm not doing this. It will give you an alarm, but otherwise you can just click it on and it will drive. It will actually shift lanes. If somebody is too slow in front of you, it will pass them and keep going. So you just have to monitor it. So the statement is, this is not a fully autonomous car, you'd better hold on to the steering wheel, but the technology is there to actually get you where you've decided to go. So was the car a commercial US-sold car? The ADAS, what's it called? Advanced Driver Assistance Systems. I've been in Germany and a lot of my friends' cars have level three. Yeah.
I think level three, but I'm not aware that, other than Tesla, there are many commercial US cars. Is that true? Yeah, so this is a Tesla Model 3, John, but you're right, Audi has their level three car. They're the first ones to release, I think, a level three capable car. So this kind of leads me to my next slide, which is, how fast are they going to penetrate the market? This forecast was done by the Governors Highway Safety Association, looking at vehicle turnover, the cost of the technology and some other factors, and the prediction is that in the next 10 years, maybe one to two percent of the vehicles on the road will have some level of automation, so three, four, five; we'll sort of see that happen. And then out to the 2050s, no more than 40 to 60% of the fleet. So this challenge of mixed vehicles on the road is going to exist for really the foreseeable future. And that's one forecast; another forecast is by this group called RethinkX, which started looking at disruptive transportation. And they're saying, when you look at other technologies, adoption is not as linear as it has been in the past, and they think the benefits of the cost savings and the convenience are a really, really huge difference, and there'll be a much bigger turnover. So that 90, 95% of passenger miles are going to be in shared autonomous vehicles. And the sharing is really critical, because that's how you reduce the costs. And households potentially save $5,600 per year, because maybe you don't have two cars, and all that, yeah, okay. That raises a question for me that I haven't even found the answer to in the book, and I'm interested in it generally. There's kind of a link between shared and electric. There's a link now assumed between autonomous vehicles and the idea that they're going to be electric vehicles, but I don't understand technologically why that is. Is the link really there? It may not be.
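The slow-penetration side of these forecasts follows from fleet turnover arithmetic. The sketch below uses purely illustrative assumptions (a fixed fraction of the fleet replaced each year, a fixed automated share of new sales) to show why even steady sales of automated vehicles move the on-road fleet only gradually; none of these parameters come from the GHSA or RethinkX studies.

```python
# Toy fleet-turnover model. annual_turnover and new_sales_automated
# are assumed values for illustration.

def automated_share(years, annual_turnover=0.07, new_sales_automated=0.10):
    """Fraction of the on-road fleet that is automated after `years`,
    if each year `annual_turnover` of the fleet is replaced and
    `new_sales_automated` of replacements are automated."""
    share = 0.0
    for _ in range(years):
        share = share * (1 - annual_turnover) + annual_turnover * new_sales_automated
    return share

print(f"after 10 years: {automated_share(10):.1%}")
```

Even with 10% of new sales automated, the fleet share stays in the low single digits after a decade, which is the mixed-fleet point made above.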
You could have an autonomous vehicle that wasn't electric, that was simply a gas-guzzler. I think it's linked to drive-by-wire. I don't know if anybody else in the room, Joe, the idea that if you still have mechanical linkages for everything, brakes, steering and so on, where a human is still in the loop, that's one kind of electromechanical control. And when you completely sever that and all of it is done electronically, it makes it much easier to control by computer. So I think they're linked in that way. The more drive-by-wire the car is, in terms of how it works, the easier it is to automate. Is that fair to say? I think, John, you're right, it makes it easier if all the systems are electrical. But I mean, the Audis, for example, are gas engines that are level three. There are a lot of level twos that are also gas engines that steer, with lane assist that keeps you in the lane. I don't know, but I agree with you. In general, the two trends are moving in the same direction quickly, both electric cars and autonomous cars, so they're most likely to be married down the road. But I think it's maybe more of a hope than it is a reality at this point. The thinking as well, the simplistic explanation I've heard, is that your automated vehicle will drop you off and can go charge itself. So it can start to deal with the range anxiety a little bit. That doesn't help you on a long trip, right? It's not going to get charged along the way. So I pointed this out only because there are these two extremes about how fast it will happen, but even in this scenario, they're saying that within 10 years, 40% of the fleet is still going to be conventional. People probably still are going to want to drive, especially in just 10 years, right? We should remind ourselves the average age of a car is 10 years, so it takes much longer than that for the complete fleet to turn over.
I found this article that was really interesting, just about the investment; it's tracking the transactions in anything related to automated vehicles. So it could be the sensors or it could be the computing systems, but you just see this huge jump in 2016, from almost zero in 2014 to $80 billion in a really short timeframe. So people are putting their money where their mouths are and taking risks here. Maybe there's some hype around this, and they're investing in hopes of a great return, but it's an interesting indicator. And maybe there was something, John, you were just showing a slide that something happened in 2012, maybe there was some kind of breakthrough. Well, the breakthrough was image recognition, as John pointed out, which is the key technology needed for self-driving cars. Without it, you would not have self-driving cars, right? It would be hard to do. Yeah, and the technological advancement that made image recognition work was a combination of algorithmic improvements, but it really involved numeric acceleration at the edge. So things like Movidius and those kinds of platforms, where you had special-purpose acceleration as part of the computation hardware at the edge. That's what made imaging at the edge possible, I think. So, you know, there are indications that changes are coming, right? And then there's still a question about how fast that can be absorbed by the world. And there's definitely public skepticism; there are, you know, trust and ethical issues. And so this is just showing the benefits. These are the fatalities; this is what we often hear people in transportation talk about first with self-driving cars. These are the fatalities in the US in 2017. In Vermont, we have about 60 fatalities a year. And so the thought is, remove the human and you can remove some of the, you know, 90% or so of crashes that are behavioral. That's another thing here.
Everybody starts their presentations off talking about self-driving cars. But, you know, the other benefits we talked about: mobility for people that can't currently drive. I think remaining economically competitive is another benefit, so that as these things roll out, Vermont will be ready for them. Potentially the environment, if you buy the electric vehicle connection. So there are a number of potential benefits. And, yeah. And the 37,000, what is it, 37,000 deaths, I want to add to this. You can dramatically reduce that with technology without going all the way down the line, okay. Many of the things you've got here, you could have an alcohol breath test interlock. I mean, you've got all sorts of technological possibilities that are coming out of this, maybe on the way to autonomous vehicles, that would greatly reduce this along the way. Yeah, and another technology, which we touched on here a little bit, is just the idea of connected vehicles. So vehicles connected to traffic signals, vehicles connected to each other. Again, the human is still in control, but hopefully getting better information and making better decisions and avoiding collisions with other vehicles and so on. A lot of the DOTs are really pushing that, because they recognize we don't control the deployment of automated vehicles, right? I mean, it's the manufacturers making the investments and taking the risk, but we do control the infrastructure and we can make some changes. Another key benefit of this is also, as you mentioned, the vehicle-to-vehicle communication. You can actually utilize your resources much better. Your roads can actually now hold four times as many cars, because you're not leaving gaps between starts and stops. All the cars can start moving at the same time, because they know when to move and when to stop. Four times seems kind of high to me.
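Whether "four times" is high comes down to headway: lane throughput is roughly speed divided by vehicle spacing. A quick sketch with assumed numbers (a ~2-second human following gap versus a tight connected-vehicle gap; not figures from the California rule of thumb cited above) shows how such a multiplier can arise.

```python
# Illustrative headway-based lane capacity. Gap values are assumptions.

def lane_capacity_veh_per_hr(speed_kmh, gap_m, vehicle_len_m=5.0):
    """Vehicles per hour per lane at a given speed and bumper-to-bumper gap."""
    spacing_m = gap_m + vehicle_len_m
    return speed_kmh * 1000 / spacing_m

human = lane_capacity_veh_per_hr(100, gap_m=55)      # ~2 s gap at 100 km/h
connected = lane_capacity_veh_per_hr(100, gap_m=10)  # tight coordinated gap
print(f"capacity multiplier: {connected / human:.1f}x")
```

Under these assumptions the multiplier works out to about 4x; real-world gains would be smaller in a mixed fleet.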
That's the California utilization rule of thumb that they're using for the number of cars they can put on the same roads. Of course, part of this is what it would look like if it were fully automated and shared. I mean, you've got the thought that the cars will look like lounges, I suppose, where everybody can sit back; they can relax, they can eat, or whatever. The other thing is, you can park them in the shade when they're not on the roads, as opposed to where you have them now. Yeah, people are talking about, if you feel like exercising on the way to work, you can have an exercise pod pick you up. If you want to watch a movie, you have a movie car pick you up, especially if you're sharing cars between different people. So now I'm asking the people on the inside, I don't know how you want to handle this: what are the policy questions that are looming that we should be concerned about as a task force? You're already in the middle of this. Well, I'm happy to take a crack at that. I think in the short term, I've been advocating, suggesting that Vermont should allow for testing of automated vehicles on public roads and put in a relatively non-burdensome process to allow that. I think the public has a right to know what's going to happen. I'm not sure people are lining up. I know people aren't lining up right now, but it's about being prepared for that. And that's a fundamental question: should public highways be used to test vehicles? It goes back to the neural network questions. After the system has run through a bunch of iterations and been tested in similar circumstances on closed courses, are there enough potential benefits in the long term that you want to allow these vehicles to be tested on public roads? Certainly lots of other states are doing that, our neighbors are doing that, it's happening all over the country. So that's one question.
And the other question, and I have draft legislation that we may work on this year. The other part of it is, assuming these vehicles are starting to come on the market, what are some common sense regulations that allow them to drive legally on Vermont highways? And getting into the nuances between levels three, four, and five, even down the road a little bit: which of these vehicles should be allowed on the roads? And who's responsible, who's liable for them? Those are some really big questions. And I think on that side, you'll actually see two waves of deployments of fully autonomous cars. I think the personal fully autonomous cars are maybe a little bit further out, but you will see commercial vehicles that can pay the hundreds of thousands of dollars to enable full driving, like trucks or semis. Those are going to want to be driving on our highways much sooner, and then after the commercial vehicles you'll see the private vehicles. Right. Yeah, and the whole idea of truck platooning is another big question for us. The technology basically allows the trucks to travel much closer together. There would still be drivers in all the trucks, but they wouldn't necessarily have to be in full control. And the main thing with the spacing is that it runs right into the law: we have a convoy law that prohibits similar vehicles from traveling closer than 1,000 feet together. You might think of it as a little ridiculous, and I don't think it's really enforced, but that's what the law says. So what do we do? This brings up the ownership question. I guess I have a little concern about thousands and thousands more vehicles on the roads, and that's the direction this is heading. And what might push you the other way at that point?
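To put the convoy-law conflict in concrete terms: platooning gaps are usually quoted as a time gap, and converting an assumed gap to feet shows how far inside the 1,000-foot minimum it falls. The 0.6-second gap and 65 mph speed below are illustrative assumptions, not values from the statute or the discussion.

```python
# Convert an assumed platooning time gap to following distance in feet.

MPH_TO_FPS = 5280 / 3600  # feet per second per mph

def gap_feet(speed_mph, time_gap_s):
    """Following distance for a given time gap at a given speed."""
    return speed_mph * MPH_TO_FPS * time_gap_s

platoon_gap = gap_feet(65, 0.6)   # a tight assumed platooning gap
print(f"{platoon_gap:.0f} ft vs the 1,000 ft convoy minimum")
```

At these numbers the platooning gap is under 60 feet, roughly a twentieth of the spacing the convoy law requires, which is why the law would need amending before platooning could operate legally.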
Yeah, find, develop, encourage, or whatever, shared ownership, transportation as a service, and make investments on that side. So yeah, these are definitely the longer term questions, right? The initial question is just mitigating the risk in the short term. Longer term, there are always pricing strategies, which in Vermont can be hard to implement, but if these vehicles are really connected, there could certainly be different cost structures for different times of day, to shift things off-peak and so on. I think the pricing structure is going to happen more naturally, through the market. Simply, if you've ever used Uber, you have an option: if you want to share a ride with somebody, it will cost you less money, right? And if you don't want to share a ride, it'll cost you more money. So that might happen naturally anyway. But we also kind of always come back to some of these smart growth strategies. Even if you have a shared driving scenario where more and more people are sharing rides, it's going to be much more efficient if people are located closer together, at least at one end of their trip, so it makes sense to share a ride. So we get back to land use: concentrated development, separate from the open countryside. John, you were in the Kunin administration, right, when Act 200 was put into place? And that still kind of holds water, right? It makes a lot of sense to do that, no matter what the transportation option is. Joe, this seems like a very interesting topic to everyone. Well, I mean, we're going to have a lot of fun. And I'm sure you're not done with all your slides. Maybe, Steve, that's okay.
Maybe this is just a good time to say, should we save the rest of your presentation for the next meeting? If you want more information on it, sure, we just posted it. And I think, you know, as we deliberate later on about what our recommendations will be, I'll be available when we come to that work. I am interested in what you think the labor implications are of this. I don't know whether, maybe Jill can touch on this. Because obviously, drivers of various kinds, trucks and taxis and the Ubers, all of those things are a pretty substantial part of the labor force. Yeah, and my sort of take on this might be that the transition is probably going to happen slowly, and there'll be time to adjust. And anyway, I should be mindful of my time, or actually, I don't have any time left; I'm taking it away from somebody else. We're moving on to that next. So Steve is next. Sure. Where do you want me? Yeah, come on up here. So, John, this is Steve Law from Resource Systems Group. Is anybody else on the phone, I should ask? I think someone's still on the phone. Oh, good. We'll put the phone up here. OK, OK. So maybe just a moment on RSG, since you have my bio, I won't bother you with all of that. RSG is a Vermont firm, a spin-off of the engineering and business schools at Dartmouth College. It so happens that AI traces back to Dartmouth College in the mid-1950s, at least by some accounts: a workshop at Dartmouth College was the origin of AI.
We do predictive analytics and statistical modeling. That's what we do, and we do it around the country; we have offices around the country. We did a big report for the National Academy of Sciences, and we work with a wide range of industries: mostly the transportation space, a little bit of the energy space, a little bit of the health space. And as such, we've been working in this field for years and years and years. So I'm going to just start with a couple of things. I thought I'd describe for a moment AI and how I might define it, but really how I think about it more than a definition. And then dive into a couple of things on the transportation side. Since you've done such a great job with this, I might move a little more to some of the policy considerations that you're going to be facing, and answer any technical questions. So here's a bit of a thought for you with regard to definition. I'd say that it's elusive. I don't think you should find yourself too constrained by any one definition. Because neural nets are just one way to do AI. They're not the way to do AI; they're just one way. And so deep learning and neural nets are very popular. You see them a lot, until we come up with something better, and then something else will take their place. So there's this thing called the AI effect, which I would describe as follows. Technology has just been advancing all the time, as we know. Computers are getting faster. We're getting more and more data at our disposal. Sensors are getting better. All these different things are getting better and better. And so that which we called AI 10 years ago, we now just refer to as an optimization model; it's a relatively trivial thing. Now AI is that highly complex, unique thing that is self-learning and self-adaptive and so on and so forth. But in 10 years' time, it may be something else yet again.
So I guess that's just my comment about definition. But as one that works for you right now, I would note that the past 10 minutes of conversation haven't really been at all about AI. So I just remind the committee: shared vehicles, not AI. Connected vehicles, not AI. Autonomous vehicles, sure, there's a lot within autonomous vehicles that is AI. And it's easy to get really interested in things like the crash prediction models that we have right now, all the things we have that lead to more efficient, safer driving. Those for sure are using AI technologies, or whatever you want to call them, algorithms and things like that. So go wherever you want as a committee, but I would alert you to the fact that if you're really on an AI direction, then watch yourself with regard to where it goes, because it's pretty fascinating stuff, all of it incredibly important, but not necessarily AI. So yeah, sorry. For what it's worth, there have been multiple times in the past where somebody said, OK, this is going to be AI. There have been two AI winters, where things were really good, the industry was getting a lot out of it, and then it crashed and all the jobs went away. So your point is very good. Yeah, no, absolutely. And there will be yet another one. I love that term, AI winters. What does that mean? Well, it's like nuclear winter; everything just falls off. There's this notion that at some point in the near future, AI is going to solve everything. And then it just drops, and it doesn't do that thing we imagined it would. But then it takes off again. All of a sudden, we find another use case, or computing power increases in a way it otherwise wasn't going to, and all of a sudden we're back in the game, and life is breathed into AI again. So that's the AI winter concept. But OK, so hopefully that helps you just to think about AI.
With regard to transportation, and actually, let me just say that this gives me a little bit of latitude to talk about AI, because now I've defined it pretty broadly, and I've said there's a lot that we can and cannot include in it. So I've set myself up to be flexible in this way. But with regard to AI in the transportation space, there is just an awful lot of really interesting work coming out, for sure. Safety and near-crash prediction models exist today that allow us to determine, with a fairly high level of precision and accuracy thanks to all the data we have at our disposal, what is or is not causing accidents and what sorts of behaviors can allow us to avoid accidents early. Which is, of course, the thing you're trying to do: predict them as far in advance as possible so that you can take the right action. Eco-driving is another major thing that for sure has taken off, since we've already gone well past the notion that cars are now better mechanically at being efficient than the drivers who drive them. I mean, that was the shift from standard vehicles to automatic vehicles. And now they're just more efficient; they're just better; they change gears at the right time, all those things. That was a really early thing and had nothing to do with AI. But we're making advances in the technology around transportation such that eco-driving is just going to get better; we're going to get more efficient. And there's a ton of AI in this space, an awful lot of really interesting algorithms at play helping us understand how we can improve vehicles so as to drive more efficiently. So I mentioned safety. I mentioned efficiency. And for sure, this notion of convenience, right? I mean, just this idea that as we get into AI, all the things you were just talking about as a committee, the notion that you can now ride in a vehicle if you're impaired.
You can now ride in a vehicle if you're whatever it is; you can now get into your vehicle and it will drive for you, once we get to levels of autonomy that take over certain things that we simply, as humans, are not good at doing, like staying aware in that fifth hour of your driving. This is where long haul comes in. I mean, we already have long haul trucking; that's already sort of there. And over the next 10 years, I would say many of those vehicles will be autonomous. Now, what they're not doing is the first mile and the last mile. What they're doing is the long haul of the route. You have to figure out how to get onto the interstate yourself. And when you get to New York City and you have to drive through traffic and figure it out, that's not going to be an autonomous solution. But the autonomous solution is the eight or 10 or 15 hours of driving on a fairly well-defined piece of road. The firm that's doing that right now, I can get you the name. They've not only set aside the first and last mile, they've also said: we're just telling you which roads we'll do this on and which roads we won't drive. Don't worry about snow, we're not going to deal with snow. We're not going to deal with heavy rain; we're not going to deal with all these things. And they're trying this out in Miami. And people are like, what? It gets really heavy rain there sometimes. And, they say, we just can't operate in those conditions. So they've carved all of that off. And so they're defining, this is a little bit of the geofence type of thing, but they're saying which roads we will work on and what problem we're trying to solve. And within that, they have a fairly high level of confidence that all the AI functionality that breathes life into an autonomous vehicle will work. I mentioned advanced sensors. And you've already heard about the computational idea of quantum computing.
We'll get beyond a state where it's a 0-or-1 computer to one where it has both states at the same time. And that will open up a whole new level, we imagine, of computing power. But my answer to the question, is computing constraining what we can do now? I think the answer is, it always has and probably always will. And I submit that simply because we are always going to be pushing the envelope on the amount of data that we access, the methods we have for pulling it in, and the uses we put it to. So we have deep learning now, which has a dimensionality that we didn't have before. We can do that because computing now allows it, and then we'll find something else that slows us down again. So is it a constraint now, like in the deployment of automated vehicles? That's a different question. No, it's not. I mean, we now have the systems in place to run fully autonomous vehicles under the right conditions. The snow thing we haven't sorted out; all the snow things we haven't sorted out. But under the right conditions, we have that. I was trying to ask before whether we're past the point where computing is a problem, and the reality is there will always be one. So we have this big data issue right now. That's another term I don't like at all, because it's big until something bigger comes along and then you've got to redefine it. But there's this notion that every time you carry a cell phone around with GPS on, you're submitting where you are and what you're doing to some vendor who's storing all of that. And at this point in time, our best guess is that we are storing about 400 gigabytes of compressed data every month nationally. And that's compressed; that's not data we can work with. We have to expand it to work with it. So it's a massive, massive amount of data, and it will just increase in size. But there's a huge amount you can do with that data.
And we can get down to the individual device, because each one carries an ID that tells me exactly what device you have. And I can plot your traces over time. And I can align them with other data sets that we have at our disposal. I can see what your shopping patterns are. I can see where you live. Sure, I know where you live. I know where you work. I can see all these types of things. And what's the anonymity of it? Well, I know that it's your ID, but not your name, for sure. So it doesn't know where you live? I can pretty much figure it out. Yeah. I mean, I can figure out the demographics of a family, the ages of people in the family. There's a lot there. Certainly, I can get your shopping patterns from this. You might leave this at home all the time. That's right. You might from now on, actually, right? And then, of course, it looks like you didn't travel. That's a good way of doing it. But vehicles, right? Vehicles are all doing the exact same thing. I'm sorry, I keep holding up the cell phone, but your vehicle is going to do the exact same thing, right? It's saying where you're going. If you're in a Tesla, it's reporting information on a regular basis: how quickly it's going, what speed, how much energy you're using, did you get off at a stop, all these different things. How long did you wait? We have all this information now. Every vehicle is already outfitted with more lines of code than the space shuttle had, and they're just massively more complex as computing environments. And they're data environments. And so they're storing and reporting a huge amount of data on a regular basis. All of this by way of saying, we're not anywhere near using that data in the way that we could. And that's the next step for us: to begin to fuse all of these data sets together and to begin to draw more inferences from them. And then we'll just keep going. We'll store more data.
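A minimal sketch of the kind of inference being described here — most frequent overnight location as "home," most frequent business-hours location as "work." The data, grid rounding, and hour thresholds below are invented for illustration, not any vendor's actual method:

```python
from collections import Counter
from datetime import datetime

# Toy GPS pings: (timestamp, lat, lon) already rounded to a coarse grid cell.
# Entirely invented data for illustration.
pings = [
    ("2019-03-01 02:10", 44.26, -72.58),
    ("2019-03-01 23:45", 44.26, -72.58),
    ("2019-03-01 10:30", 44.20, -72.50),
    ("2019-03-02 14:15", 44.20, -72.50),
    ("2019-03-02 03:05", 44.26, -72.58),
]

def hour(ping):
    return datetime.strptime(ping[0], "%Y-%m-%d %H:%M").hour

def most_common_cell(rows):
    """The grid cell that appears most often in a set of pings."""
    return Counter((lat, lon) for _, lat, lon in rows).most_common(1)[0][0]

# Heuristic: where the phone sits overnight is "home";
# where it sits during business hours is "work".
home = most_common_cell([p for p in pings if hour(p) < 6 or hour(p) >= 22])
work = most_common_cell([p for p in pings if 9 <= hour(p) <= 17])

print("home:", home)  # home: (44.26, -72.58)
print("work:", work)  # work: (44.2, -72.5)
```

Real pipelines work the same way at scale: round traces to a spatial grid, bucket by time of day, and take modes — which is why "anonymized" device IDs are so easy to re-identify.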
We'll use more data. We'll talk about connected vehicles and where that data is going. There's an infrastructure notion here. Originally, when we talked about AI and automobiles, there was a thought that we would deploy smart infrastructure that would say something to us — oh, you know, the signal's coming up; maybe you could even control the signal and tell it to turn green at a certain time. Well, we've moved a little bit away from that, and also from the connected vehicle idea, vehicles talking to vehicles. We've moved away from that recently and relied far more heavily on the onboard capacity of automobiles. And the reason for that is primarily that it's so hugely expensive to deploy infrastructure. Already, we're not optimizing our signal designs. Just think about having to go out and tear every single thing out and put in a whole new lane and have smart sensors throughout the road and all these different things. It's just probably more expensive than we'll ever be able to do. And so that was a shift in how we thought about the transportation system: far more reliance on the automobile to do its work and far less on vehicle-to-vehicle or vehicle-to-infrastructure technology. So let me just make a couple of notes on things that you might want to think about. First of all, you should read the National Governors Association advice — I gave you the link for that — on the types of policies or processes that you could think about that will help you in this regard. Very, very useful recommendations, and that'll be helpful. I would also note that it's probably worth talking to some people from the Florida DOT. I think at this point — and I'm going to try to be careful to say this in a positive way — they have been more inviting to AI and to autonomous vehicles and all those things than other states.
And they did it with the express purpose of trying to generate revenues and trying to be an early adopter, and as such — one of your slides had the Florida AV Summit; it was from there — the Florida AV Summit is really the national summit. I mean, there is a national summit, but pretty much people go to that every other year or every third year, while you go to Florida every year. So they really made a concerted effort to try to attract autonomous vehicle vendors, and they are doing an awful lot with AI in general. So maybe a couple of closing comments, and then I'd love to hear your questions. I do think that AI in transportation is going to move in selective ways. It's going to start with what problems we can solve today, and that last 10% we're going to figure out over time. You almost never see a chart that says, here's where we get 100% adoption, because it's just so hard to know what that last 10% looks like. I think the period of transition is going to be the most difficult time, right? And so now is the time to think about legislation and the ways in which you manage and control this, because that vehicle mix between autonomous and non-autonomous is a really, really tricky time for us, and it's going to be difficult to sort out. And then lastly, I think agencies are absolutely going to have to adapt whole systems — DOTs are really what I want to refer to here. I think DOTs are going to have to restructure themselves. I think they're going to have to think very differently about the problems they're trying to solve. Capacity has a whole new meaning in that system. And safety — many of the DOTs are talking about pushing these toward-zero-fatality goals and all the different things. It's not really a DOT issue at that point. It's an auto manufacturer problem to solve.
It's a bit like the problem we had years ago with clean air, where we were pushing hard on vehicle standards for clean air. Well, it turns out that all you need to do is buy new vehicles and have a shift towards electric, and the problem is solved on the transportation side — you just have to go over to the energy side. So I think DOTs around the country are going to have to completely change the way they do planning, completely change the way they're structured, completely change the way they develop policies and think about the future and what the challenges are. I think longer-range planning doesn't make as much sense; the 30-year planning cycles don't make the sense that they used to. Many, many interesting changes are going to occur. Just to leave you with this: I also think you're going to be seeing issues, hard state issues, around data security. That's the big problem we have. You cast around DOTs throughout the country and they have one or two data scientists, if that, and it's a huge, huge problem. It's going to be the big problem of the future, and they're just not equipped for it. Let me stop there. I took a lot of time, but I'm happy to field any questions — it's a big topic. What do you think the government should be doing — the state, should they be doing anything, or should they get out of the way? No, I think for sure they should be doing something. In fact, that's what those latter comments were all about. I think that at this point in time, the private sector is pushing full force. And there's been a shift away from the public sector owning the regulations, with the private sector operating in a fairly constrained way, to one where the private sector is just running by them and doing things. So you get cities where a Bird comes in and just drops 1,000 scooters on the streets.
And now you have to figure out: do they run on the sidewalk or on the roads? Are they safe? Where do they get parked? How do you move them from place to place? Are they regulated at all? Is there any cost structure? Are they paying for the infrastructure they're using or not? None of this is sorted out. And it just, you know, happened in a weekend. A related question: with an aggressive private sector, one of the things people very clearly say is, we need federal preemption, and we want one set of rules nationally. So what are the chances it would go that way? We had one experience, as I recall, on regulation of heavy vehicles — oversized trucks coming through causing considerably more road stress than we were willing to accept. And we got preempted on that subject, and we had to take regular-weight and regular-size vehicles coming through by federal law. Isn't this the right subject for exactly that? I think it absolutely is. And I should say, what I'm suggesting is that DOTs and agencies that don't spend energy and time thinking ahead will find themselves behind the curve. What I'm not necessarily suggesting — I think it's an open question — is what that relationship between private and public should be. We're working right now in Somerville, Mass., where Audi has put several billion dollars behind just investing in infrastructure — new parking, new sort of smart parking. So Audi's coming in and paying to implement smart parking, with the effect that if you own an Audi — and it's all a test bed for them, it's not scaled to the country — but if you own an Audi, you can drive to your favorite restaurant, get out, and just tell it to go and park. And it'll go and park. And the reason it works, in part, is because the AI on board and the autonomy of the vehicle facilitate a lot of it, but it's also supported by the fact that they're putting the investment into the parking infrastructure.
So there, you couldn't get more of a substantive shift in the private-public relationship, where all of a sudden private companies are coming in and being willing to invest in infrastructure. With no immediate return on investment. For sure, we have P3s and all the different arrangements where you're welcome to come in and toll a road; you can buy infrastructure, toll the road, have some relationship. All of that's been consistent for a long time. But this is different. This is coming in and paying to update infrastructure so as to facilitate their research and their design. Can you describe smart parking more? Because you said you can get out and your car will go park itself. But does that mean it finds a public spot and then uses an app to pay the meter? Or does it mean there are special lots that they own? How does smart parking work? In their case — sorry — what they've done is they've negotiated to add infrastructure, electrical infrastructure, to certain designated parking. So they built some parking in a very remote area. And the idea was, right, you get out and it goes to that remote area and it just sits there. And then you call it and it comes back, or you call the car back, of course. And so now you've got drop-off and pick-up problems instead. So there are infrastructure issues around that. Which are less challenging than building a whole parking structure downtown — now you're building parking in remote areas that are less expensive, and downtown you can do more with that space. So it would not be a good thing, I would think, if the effect of AI is massively increasing parking. Right. I don't think that's the image of the direction we're supposed to be going here. Right, and maybe that's just my reaction.
Well, and I think that gets to the questions people have been asking about autonomous vehicles, which is, there may be fewer vehicles but more lane miles traveled. Like, you know, more of them empty, maybe, moving around. Right, exactly. And then there's, you know, another issue that gets into this, which isn't an AI issue — I was trying to be careful not to go there, but that's a good one. You know, one thing I just wanted to say: there are certainly things, like the ones I'm talking about, that you could say are not AI-specific, right? But AI is really enabling them. Absolutely. And that's kind of what I was thinking about as I was preparing to speak: what's the role of a task force like this? You really are looking at those spin-off implications and how we manage them, right? Because we're not talking about neural networks taking control; we're talking about the consequences of this new technology. And I think data is one of those changes you have to consider. I personally think that with the advent of all the technology now in existence — some of which has AI and some of which may not — there's going to be a massive flow of data, and the data is now monetized in ways it never has been before. I mean, you get these companies — Google is making more off of data than anything else; they're selling data. And so you ask, why would Google be in the transportation space? Why are these companies stepping into transportation? Is it because of the data? Is it because they're interested in facilitating movement around for the long term for their companies? It's because with that technology comes data that can be monetized in a very substantive way. And that's what I think. So, all right, please. No, I'm just going to reinforce it.
Most of the drive for AI, if it's not for, you know, world peace, it's for advertising. Yeah. That's exactly why they're doing it. Yeah. And that's the motivation — you're exactly right. And so I raise that because I think that's one of those places where, if you're a committee paying attention to AI, it does feel close enough to the charge of the committee to think about: okay, that's going to mean a lot more data, and data storage and security — what are we going to do with that? Yeah. So I'd like to make a remark. I listened to your conversation, and I think of the transportation problem that the state of Vermont has. If there is no public transportation, there is no opportunity for people who live in the country to come in to work without having their own car. And so I look at this as an opportunity to help people be able to move themselves around Vermont. And I think that's going to be one of the benefits of this. You know — oh, sorry, please. One of the questions that's all kind of related, particularly given where the money is centered and what the development is like: is it possible to be a leader in the transportation sector now if you're a relatively small rural state? We're not the place where you have the mass of people going to restaurants and having their cars park themselves; that's a big thing in, say, Cambridge or something like that. Most of the things that seem to be greatly advanced on the transportation side by artificial intelligence don't particularly fit the environments in Vermont. Is that fair? That is fair. I think that's true. And for sure, we're seeing a lot more money being invested in those areas. It makes sense why companies would invest money there — more people and more money to be made. The problems in the rural areas are harder to solve.
The problem is that you can't get easily from, let's say, place A to place B — to a job or a service or whatever. That's right. The ridership isn't there, and the money isn't there. So anyway, part of it is putting that in place, and then to the extent that weather, or just roads in general, make it more difficult, that matters too. So I think you're right. I'll just raise one more thing. One of the things we're finding ourselves doing around the country a lot is dealing with the very largest public transit providers — you know, Metro and all these different ones. They're trying to work out why their ridership is decreasing by 15 or 20% year over year. And it raises a really interesting question, especially in a state like Vermont, where we're putting a reasonable amount of money into our public transit. If in fact it's losing ridership compared to, say, Uber or whatever it is at this point in time, what does it mean to continue to invest in it? Not should we, but what does it mean to continue to invest in it if in fact the private sector is coming in and supplying some of that need? How do you balance that out? What does end up happening is there are for sure some people who don't own a smartphone and don't have a bank account, and you don't want them left without options. But it changes the calculation from a public sector standpoint as to how you think about the public good and how you invest to ensure the public good. I think it's time for us to transition to our next presenter. I just want to say thank you. You're welcome. Thank you very much. If you guys are still on this topic, do you want to talk some more, or do you want to move on? We'll be talking about aviation next, so if that's of interest, you're more than welcome to comment. Our next presenter is Guy. Maybe you could give a brief introduction? Sure.
And then you mentioned you want to talk about a couple of things. And again, one of the main things we want to hear from you is: what do you think the current status of AI is in aviation? What do you think the future uses or implications are — pros, cons — and then maybe what do you think the government should be doing? Perfect. Well, thanks, first of all, for having me. I do want to throw a quick shout-out to Milo. I think you're an amazing speaker, and I think you bring a lot of credit to your school, so I wanted to say that in case anybody didn't recognize it. We'd hire you right away, so if you need a job, let us know. Thanks. Yeah, great job. I thought I'd talk today about aviation in general, in Vermont and nationwide. I'm going to talk about seven key elements — just high-level bullets — and then open it up for what I think should be policy considerations. And Joe is cringing right now. No, I don't think so. So I think it's important for us to understand what industry and governmental initiatives are currently underway; I'll briefly talk about that at a high level. We also want to talk about current restrictions on aviation as they relate to autonomy and autonomous operation; we'll touch on that for a few minutes. Technology in play — I think it's important to know what's currently out there, again at a high level, and also what is coming very soon. There are a lot of vendors in a race to develop aeronautical vehicles that you could just walk up to, swipe a credit card, and have take you from point A to point B — that's currently in the mix. I'm going to transition from there to regulatory issues, not only at the federal level but also at the local and state level.
There are some regulations in the state of Vermont right now that should be considered, which will segue into policy considerations. But I also want to touch on one part of it, which is kind of an oxymoron, I guess: we're talking about artificial intelligence, but I want to talk about the personal perception of human beings as it relates to, do we actually want to hop into an unpersoned aeronautical vehicle? So I think that's important to talk about. The FAA years ago actually had the forethought about this industry. They have a program called ADS-B — Automatic Dependent Surveillance-Broadcast — which has been going on now for 15 years, with a mandate by 2020. That surveillance broadcast is going to enable the national airspace system to take on what's called the five-mile separation issue, which is a congestion issue in the sky. When airplanes are flying en route from California to New York, or they're getting ready to be sequenced to land at an airport, there is a five-mile separation requirement right now because of pilot error, controller error, meteorological scenarios. So ADS-B, and what's called NextGen at the FAA, is the sequencing of those aircraft by utilization of AI. Ground-based radar systems are very clunky and expensive to operate, and they're going to remove those systems; with ADS-B, the aircraft will talk to one another. That's ongoing right now, and they're testing it — places in Texas and Alaska are utilizing this system — and they're able to tighten that five miles. Imagine on the highway: it's like the following distance that's acceptable for vehicles on the ground. Now, in the air, we're able to crunch in those five miles and sequence aircraft and decongest the airspace. So that's currently in play. How long does it take an airplane to travel five miles? Well, a smaller aircraft flies about two miles a minute, so that's about two and a half minutes.
For a large aircraft, approach speeds are 160 miles an hour, which is about two and a half to three miles a minute. So it doesn't take long — it's about a minute and a half to two minutes of separation, once you factor in some pilot error and some ground-speed variations. It's tight, but now we're able to tighten them up further. But what happens if you lose contact with one airplane, the one in between the two? Correct — well, I'm going to talk about that in a minute, because there are aircraft right now that are operating in Category III, as Joe had described. They are operating right now; they've been operating that way for years and years. Now, where do we get to the next level? We'll talk about that in a minute. Thank you, that's a good question. From an industry level — again, high-level stuff, because we've got 25 minutes and I know I'm standing between you guys and a break — Uber Elevate is one example of transporting human beings. We'll talk about infrastructure in a minute, but the theory is you just go to a location that's already been pre-approved. It's already got the infrastructure in place: power; fuel, perhaps, with hybrid aircraft; a pilot, perhaps, for the time being; and then the actual airspace clearance to get this thing so it can lift people. That's for passengers. I know the AFL-CIO is here, and I know the Teamsters are probably very interested in ground cargo. What's interesting is, obviously, with today's regulatory environment, the FAA is not going to allow drone delivery, but they're testing it up in Canada and elsewhere, where you're able to take parts and pieces, maybe a pizza, and deliver it. But there are those limitations, which we'll talk about briefly. Other industry uses — everybody knows that drones are being used for design-level survey.
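The separation arithmetic quoted above can be sketched quickly. This is a rough illustration only — the speeds are the approximate figures mentioned in the discussion, not official separation standards:

```python
# Rough separation-time arithmetic using the approximate speeds quoted above.
# Illustrative figures only, not official separation standards.

def separation_minutes(separation_miles: float, ground_speed_mph: float) -> float:
    """Minutes needed to cover a given separation distance at a given ground speed."""
    miles_per_minute = ground_speed_mph / 60.0
    return separation_miles / miles_per_minute

# Smaller aircraft at ~120 mph (about two miles a minute): 5 miles in ~2.5 minutes.
print(separation_minutes(5, 120))  # 2.5

# Large aircraft on approach at ~160 mph: 5 miles in just under 2 minutes.
print(round(separation_minutes(5, 160), 2))  # 1.88
```

Which is why tightening even a fraction of a mile out of the five-mile bubble translates directly into more landing slots per hour at a congested airport.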
You can do survey, before-and-after construction photos, and infrastructure inspections — bridges and roadways and things of that nature. Interestingly, there are companies working with VTrans right now, and in other states, that are going to be looking at load factors on railroads just by use of drones. It's really interesting. Some restrictions: the name is sUAS — the first letter is a small s, which means small — small unmanned aircraft systems. Small relates to weight: 55 pounds is the limitation. They're limited, without a waiver, to daytime-only operations, and they also have ground-control line-of-sight limitations. So the average eyeball — define the average eyeball, right? With hyperopia or myopia, the distance and visual acuity to be able to spot a little tiny thing about this big at two miles away is nearly impossible. So the physical limitations of the drone and the ground line of sight are the number one issue with autonomous operation, because right now they have to be controlled. There are tests that have been done to just launch them and see how far they go before they disconnect — but it's illegal. So that's where they're at from the regulatory standpoint. The technology in play — I was going to bring one in, but I didn't — is the quadcopter, the DJI Phantom 4, a small little quadcopter. It's about this big, and it weighs probably about four or five pounds. Ironically, that technology is now creeping up in scale in the industry, and they are test-flying right now adult-size, human-carrying quadcopters, where the person is able to just lift the aircraft off the ground. The quadcopter primarily started with DJI, and it was so good for them. They did patent parts of it; they probably should have patented the quadcopter itself so it was only theirs, but they didn't.
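The "spot a tiny thing at two miles" point can be checked with a quick angular-size calculation. The 0.35 m drone size is an assumption (roughly Phantom-class); 20/20 vision resolves about one arcminute:

```python
import math

# Angular size of a small drone at two miles, compared with the roughly
# one-arcminute resolution of 20/20 human vision. The 0.35 m drone size
# is an assumption (roughly Phantom-class), used only for illustration.
drone_size_m = 0.35
distance_m = 2 * 1609.34  # two miles in meters

angle_rad = 2 * math.atan((drone_size_m / 2) / distance_m)
angle_arcmin = math.degrees(angle_rad) * 60

print(f"{angle_arcmin:.2f} arcminutes")  # ~0.37, well under the ~1 arcminute limit
```

At roughly a third of the eye's resolving power, the drone is literally invisible at that range, which is the physical basis for the visual-line-of-sight rule.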
The other designs ongoing right now for the larger sizes — really, this goes back to the 90s and 2000s, and now I'm going to start getting into regulatory. In the 90s and 2000s, there was a big rush for what are called very light jets. Think about infrastructure for a second. Runways are too short, and in lieu of spending millions of dollars on building more infrastructure and land acquisition and permitting and environmental impacts, why not just certificate smaller aircraft that can still carry four or five bodies, go from point A to point B a thousand miles away, and be able to land on these small runways? Unfortunately, in the regulatory process in the United States to certificate those aircraft, 15 models entered, and only two really made it to product: the Honda jet and the Cessna jet. So not a lot of very light jets were generated. They tried it; now we're back to let's extend runways, which is very expensive and takes time. So the industry right now — Uber and others — is looking at VTOL, the vertical takeoff and landing technology, which is what a lot of the drones are using right now, but now carrying humans rather than just a camera or an infrared or LiDAR system. Some companies out there are doing everything from completely battery-driven aircraft — which, when they take off and land from the top of a building, well, now you've got to get three-phase power to the top of this building; maybe it wasn't in the original design, the site's great, but we can't get power to the top of the building — that's a siting issue. So others are not only doing batteries but also hybrids, fuel- and battery-driven as well. Range for these things: 200 miles, can carry a couple of bodies, and for now, they haven't even launched them in the U.S. because of regulatory issues.
You're looking at a quadcopter, two people, 200-mile range — pretty nice. Can I interrupt you for just a minute? Sure, absolutely. Can you check whether anyone is on Skype sitting in? Okay. Nobody here is on Skype? No, it's fine. Can you hear me? They can listen in on the phone — there's a call-in number that was part of the invitation, so they can call that. Thank you. Should I continue? Yeah, okay. So, interestingly — did you have a question, sir? No, no. All right, I was going to say, if this were an option, you just bought a Porsche. Anyway. So battery-powered quadcopters are really interesting. One misconception out there is that battery-powered aircraft are a lot less noisy than, you know, reciprocating or turbine-powered aircraft. Although there are slight decibel-level differences, the actual noise from aircraft predominantly comes from the propellers and the fact that the tips of the propeller or rotor blades are moving at almost the speed of sound. So that's a misconception. Another misconception: there are many times I've been in New York City and we call an Uber because it's raining like crazy, and folks assume the quadcopter is going to be able to fly in all weather. Helicopters don't like ice, and those are some limitations folks will have to consider. I bring up noise because I'm going to come back to regulatory concerns in Vermont in just a moment. So, regulatory issues, just briefly, and then we'll go into personal perception and then questions. The FAA right now currently does not have any regulation for this at all.
In fact, it took close to 10 years to go from what was a very special provision called Section 333, which allowed people to go out and get a permit and then an actual certificate to operate a drone in the National Airspace System — with limitations: below 400 feet, daytime only, and line of sight. Then, in the past year and a half, they finally wrote a part of the federal regulations called Part 107. So if you're 16 years old, you can be a Part 107 pilot today: you just take the test, and then you can fly commercially, for real estate agencies and the like. But right now, there's no regulation for these unmanned aeronautical vehicles that are going to be carrying passengers and perhaps cargo. So that's going to have to be a rewrite at the federal level, and then we'll talk about the state in just a moment. The design and certification of aircraft we've kind of talked about, but a large issue right now, which I think should be something for this committee if they're looking at Uber flying, is that there is a huge pilot shortage and a huge mechanic shortage. Now, for pilots, some are going to argue that these aren't supposed to have pilots, so it's not necessarily an issue. However, it's the same for mechanics, and there are schools in the state that are established already, but I would think that the Vermont colleges — perhaps Jeff Spaulding — would want to consider looking at the curriculum across the state colleges, at what training we currently have in all of our degree programs across the state, should we entertain focusing on AI. Let me just drive on to the local and state level: Title 5, which covers Vermont aeronautics and transportation. Title 5 is a very specific law on how to certificate very small landing areas, and you had mentioned earlier that some of the underprivileged can't get from one place into the towns.
And the state does have a certification process in place for what's called a restricted landing area. So when this is looked at from a personnel and cargo-moving perspective, they should consider it, and I would caution against really changing that law much, because the process to restrict is already in place — that first word is restricted. To restrict it even further may, if this does take off — no pun intended, or maybe a little pun — make it a problem in the future to try to certificate, say, an urban takeoff and landing area to transport people back and forth. I know public transit has buses, but if this does actually become the wave of the future, per se, then they're going to have to consider that. Lastly, personal perception. Years ago, the Boeing 747 required four crew members, plus 17 personnel in the back of the airplane, to crew the aircraft. Through automation, they then went down to two people in the cockpit, with fuel management systems that are up to speed and the navigator's job handled by all the current artificial intelligence and navigational equipment. So the perception used to be, when you looked in the cockpit, if you didn't see four bodies, people got nervous. Now it's down to two. Wiggins Airways in Vermont, which carries a million and a half pounds of cargo every year to the Rutland and Montpelier airports — Burlington is many more pounds of cargo — has an aircraft that they've reduced down to a single-pilot operation for two reasons: one is payroll, and the other is that with fewer than two pilots in the cockpit you don't need a cockpit voice recorder, so they don't have big brother watching. So the perception over the years has been that these aircraft need to be personed, and they're getting to the point where they can be reduced down to, at minimum, one pilot in play now — and even further in the future, it could be pilotless. The FAA currently has approved Category III aircraft.
There are aircraft where, at 800 feet, you click a button and it flies the entire route; it will come down, land the aircraft, reduce the throttles, decelerate the airplane, taxi it to parking, and stop the aircraft. That's currently in play right now. But there are pilots still sitting in the cockpit to grab the controls if there's an issue. They do it by using radar altimeters, GPS, and autopilots, and they're introducing LiDAR into aircraft as well, rather than the very clunky radar systems, so those are going away. The technology is in play right now. Let me finish up with policy considerations. I've kind of planted the seeds along the way. One of which is, again, I think this is a Vermont issue, it's kind of an ACCD issue, kind of a VTrans issue, kind of a Department of Labor and Department of Education issue, in that I think Mr. Spaulding should be spoken with, and we have to have a conversation about education. I think it's very important. If it's not in the curricula right now, I think it should be considered. I don't want to speak for them, but it's just something to think about. As I mentioned before, there's a significant shortfall in aircraft mechanics and pilots, and there's also a significant lack of avionics technicians, and avionics is the AI component: avionics is all the radios, the interoperability with ground stations and satellites. Those skills need to be trained a lot more, and there's a shortfall. It's documented, and folks will tell you that. A focus on aviation education, I think, should also be a priority consideration for the state. With these pilot shortages, it's difficult. There's one aviation program in the state: VTC has a four-year bachelor's degree in aeronautics, and we're looking right now at increasing the AI component; we're already thinking ahead. Airspace I talked about; another policy consideration.
Title 5 outlines restricted landing areas. I would caution against any changes to restricted landing areas, as the word "restricted" is already there, so that we don't reverse 70 years of good policy. It's something to think about. The last thing I'll say is about noise jurisdiction. I was so glad to hear you say the words "federal preemption." Noise is federally preempted with aircraft, period. It's been argued in the highest courts. It's been argued in Vermont. In fact, in the district down in Middlebury, when I was the aeronautics administrator, we argued it over the Middlebury runway extension. A lot of people are concerned about noise around their airports. We argued the point, and their opinion agreed that noise is federally preempted. I say that because right now it's not standardized around the state. There are different districts under the Agency of Natural Resources, I'm sorry, under Act 250, that would argue different points on that case. So if we're going to use aviation for point-to-point flights, we need to think about these things when we take up policy considerations on noise. So with that, I think I did all right. Questions? Yes, sir. You touched on workforce and not having enough folks, mechanics and some of the other things you mentioned. What's the overall trend, if you take out commercial flights like the Deltas and Uniteds, of air traffic in Vermont? Is it on the rise, is it on the decline, has it stayed level? Well, I think it's a byproduct of how the airports choose to operate, the leadership of the airport. An airport that chooses to have a very vigorous aviation program and public awareness program, fly-ins, things of that nature, that airport will be on an upward trend. When I was the aeronautics administrator, that was something we pushed. We pushed every airport; the goal was one event per quarter per year.
So that was four events a year, and with 10 airports that's 40 events, a lot of flying, a lot of aviation education, a lot of fuel being sold, a lot of maintenance, and it really impacted the economy. But it also increased the amount of activity on the general aviation side of the house. So general aviation really does follow leadership. I may not have answered your question directly; I'm not in tune with the numbers right now, but I do know that ACCD is currently working on a kind of economic development effort. There was a bill that was passed last year; I believe they're working on that. They had a meeting last week, I heard. So that's important. That's a good question. I have a question that comes out of an event a few weeks ago, and I may not have the details precisely right, but it was essentially like this, as I recall. A commercial airline, a regular aircraft company, picks up a new plane. This is somewhere like Sri Lanka or Indonesia, something like that. And it's flown two times. The first time it's flown by a pilot who realizes, whether on takeoff or on the way in, I'm not sure, that the automatic system controlling the flight has a flaw. It's telling him something that can't be true; that is, it's saying one thing is happening, but the plane is headed for the ocean. He knows to turn it off, he turns it off, he does the flight, comes back. He doesn't tell anybody, which is part of the story. A new pilot gets on, sees the same situation, the automatic system says the plane is going down, and he corrects the information he's getting from the automated system. The automated system then re-corrects it back. And after a period of back and forth like this, it goes into the ocean, killing all of the people. Do I have the basics of the story right? You do. You do. Okay.
It strikes me that that was one of the stories about the risk of where we stand in terms of automated transportation, that is, control without proper human involvement or interaction, or without clear signals of what's happening. What is the lesson for that from the industry? This is a particularly bad example, given the scale of how many people fly. What is the lesson in terms of automation in running aircraft? Well, I would say that every law that's on the books in aviation was written in blood. And the NTSB's sole mission is to fully investigate incidents and make recommendations, and the FAA will enforce those rules when the recommendations come down. I know that when they look at that, there will be a bunch of regulations that come down. And unfortunately, there was loss of life. And I'm sure there's going to be loss of life with cars as well. So I don't know, I'm not exactly an expert in that area, but I would say that they'll look at it, they'll make the changes, and perhaps there are times when they say this particular type of autopilot is grounded until you can, you know, test it a bit more. In this case, I think it was a sensor malfunction that basically was telling the plane it was doing something different than it was. Sorry. I don't know if you would... What's most interesting is one pilot knows he can turn it off, right? Yeah. Which gets back to this level three to five question. Right, exactly. When it goes back to human control, will that occur? Whereas the second pilot keeps trying to use the technology as it's given and ends up in the ocean. It's kind of like the guy who's drunk in the Tesla, falling asleep while driving, because he maybe trusted that the car was going to do the right thing. The car was never meant to be driven that way; he's totally out of context there, right? So I want to get back to the... Could this be the last question?
Well, that would be good. Oh, yeah, just to make it a five-part question. Yeah, well, I meant. So I want to get back to the recorder box you were talking about earlier, and that strikes me as being for safety. So the idea that someone would not have two pilots in a plane so that they could override a safety mechanism is not heartwarming to me. Yeah, it's not necessarily overriding it. It's just that the law is the law, and one of the considerations taken when they go down to a single-pilot operation happens to be that one. It's one of the byproducts of going down to a single-pilot operation. They streamline the automation in the aircraft so that one pilot can fly it. But a byproduct is that, the way the law is written, you no longer require a cockpit voice recorder. Can I ask just one question? We focused on this plane crash because of the confusion, and there have definitely been some crashes and some fatalities with automated vehicles. So we tend to focus on those; they're kind of highlighted. But overall, in aviation, has the automation saved lives? Is there any way to know? I guess it's hard to know what loss it's avoided. Well, what it's done with certain aircraft is, and I'll give you a, well, I won't give you- Is it safer overall? It is safer, absolutely. It is safer overall, but you go back to the human factors, where an individual will put themselves in a bad position because they rely on all that automation. They say, I'm gonna put this aircraft into a place it really shouldn't be in, but I've got a parachute I can pull, or I can throw the autopilot on. And that's a human factor. It's not necessarily an AI issue. Yeah. Yeah, good. All right. Well, Guy, thank you very much. Thank you. Everybody's been sitting a couple of hours. Instead of 10 minutes for a break, we'll reduce that to maybe five minutes, so we start to get back on schedule, thereabouts. Go for it.
Just go for it. Are you gonna be presenting? Yeah. Okay. As we mentioned to Guy, give a little introduction of yourself and talk about whatever you'd like. Okay. And really, our mandate from the legislature is that we want to know what the current state of AI is, future uses, pros and cons, and what the government can do to help out, if anything. Okay, thanks. All right, so my name is Igor Ossowski. I'm the CTO of Avera Semi. This is a recent spin-off from GlobalFoundries. It was originally IBM Microelectronics, moved under GlobalFoundries when they purchased that group, and now it's moving away from GlobalFoundries into a separate company called Avera Semi. It's about an 850-person group, and the majority of them are in Vermont. What we do is what's called application-specific integrated circuits. These are ASICs, custom chips for specific applications. The team has been doing this for the last 25 years, so this is a group of folks who have been designing chips for 25 years; they're really experienced. We've typically been working in the wired and wireless space, so for telecoms like Cisco, Huawei, things like that. There's a strong push now to go into computer chips that are custom-built for artificial intelligence. We've already taped out chips with some of them in our previous nodes, and we're currently working on the next generation of artificial intelligence chips. So this is the hardware that's at the root of all this acceleration. What I'm gonna do today is give you my view on the AI trajectory: how do AI and the hardware work in this whole scheme of things, what are some of the hardware trends, a comparison between natural and artificial intelligence, at least my view of it, and then I have two slides on benefits and challenges that I see with this technology. So let's kick off with just a slide on artificial intelligence today.
So this is not a technology that's enabling something in the future; it's really with us right now. We have IBM Watson that can beat humans at Jeopardy. We have Facebook implementing face recognition to connect friends, Amazon Echo Dots in our houses that you can ask anything and they'll get the answer from the internet, autocorrect when you type your emails or search for something. We now have machines that can beat humans even at games like Go, which are incredibly complex; it would be impossible for a software programmer to code a set of rules to beat somebody at Go at this point. So this is all artificial intelligence. As we move forward, what we see is a trajectory that's controversial in many ways, but it is at this point considered the average prediction: we move from specific AI, which is something like image recognition or driving a car or voice recognition or translation, to something like a general AI, or what we call the singularity. That's where we have a machine that's so intelligent that it basically supersedes all humans, and it moves beyond that. What I have on this slide is a set of predictions made in the 1990s by a fellow called Ray Kurzweil. He's a director at Google, working on artificial intelligence. He made these predictions in the 1990s, and so far he's batting about 83% accuracy with where we sit right now. You can take that the way you want, but it seems most of his predictions are coming true. By 2019, we're looking at wearable electronics; we already have smartwatches. We're looking at self-driving cars; you can argue whether we're at level three, four, or five, but we're moving in that direction quickly. We have AI assistants like Siri and Cortana and Alexa. We can even translate now. So 2019 looks like a likely set of predictions to come true.
Ten years from now, we're moving with the hardware acceleration of Moore's law. One of the predictions there is that a $1,000 computer buys you something that's 1,000 times more powerful than the human brain in raw processing power. We'll have computers that are generating their own knowledge: you give a computer a specific science project, and it chases that problem and tries to discover new ideas and so on. We'll have mapped the human brain sufficiently that we can start moving peripheral devices, like headphones or VR glasses, to being directly implanted into our brains. Now, this is scary to me, and I'm pretty sure to many other people, but this is something people view as a potential machine-human integration, which kind of helps us not get superseded by machines but incorporated into them. Anyway, that's a scary thought for me. But when you move forward to the middle of the century, you see the singularity. This is where machines are basically more intelligent than all the humans combined. They're creative, and they're effectively more powerful in every way than humans are. So these are predictions. So far they seem to be accurate, but I don't know how they hold up as we move forward; it is a pretty steep trajectory. We're talking 25 years to the point where the top 100 scientists think there's a 50-50 chance that we're gonna have general intelligence. Okay, so let me show you how the hardware works behind this. I think there was a really good presentation Milo gave this morning. This is an example of image recognition, an example of one type of artificial neural net. The way it works, this being image recognition, is you feed an image in on the left side here. The image is broken down into pixels, and the pixels are provided to each of these nodes.
So these are neurons, and it's very much bio-inspired; it's kind of like the human brain. These neurons are then connected to the next layer of neurons by edges that have weights on them, and the weights are where the learning happens. What the weights mean is: I take the value of this neuron, multiply it by the number associated with that weight, then add up all the other neurons that feed into the next-layer neuron to generate a new number. And you do that again and again and again. On one side you provide an image, and on the other you have a classification of that image; in this case it's a Volvo, so the system can actually recognize the image. And as Milo mentioned, there's backpropagation, where you re-evaluate these weights while you train. But once it's deployed in the field, you have a system that quickly classifies images. Now, what you see here is that there are a lot of nodes and a lot of multiplications. Does the training continue? I mean, you get to a certain point in the training so it's ready to be deployed, whatever it is. But then, as it's actually deployed, does it continue to train and learn? If you look at something like a self-driving car, it would be very dangerous to do that. But I agree with John and Milo that if you have something like Siri on your phone that wants to recognize your accent or look at your behavior, yeah, it might be able to continue to train. But typically when, for example, Tesla does a software update, the car is no longer doing any learning; the car just runs the model. What it might do is collect data and send it back to Tesla, so Tesla can learn on that data and deploy the next software update. But the car itself will not train. And in general, there's actually a difference between training devices and inference devices.
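The multiply-and-add described above can be sketched in a few lines of code. This is a minimal illustration, not the actual network from the slide; the layer sizes and random weights are made-up stand-ins for what training would normally produce.

```python
import numpy as np

def forward(pixels, weights, biases):
    """One forward pass: each layer multiplies neuron values by edge weights,
    sums the contributions feeding each next-layer neuron, and applies a
    simple nonlinearity (ReLU)."""
    activation = pixels
    for W, b in zip(weights, biases):
        activation = np.maximum(0, activation @ W + b)
    return activation

rng = np.random.default_rng(0)
# Hypothetical sizes: a 64-pixel image, two hidden layers, 3 output classes.
sizes = [64, 32, 16, 3]
weights = [rng.normal(size=(a, b)) for a, b in zip(sizes, sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

scores = forward(rng.normal(size=64), weights, biases)
print(scores.shape)  # one score per class, e.g. "car", "cat", "duck"
```

In a trained network, the largest output score would be the classification; backpropagation is what adjusts the weights so the right score comes out on top.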
The training devices are massive computer chips that are basically bursting at the edges right now; they're as big as we can build them. The inference devices that are deployed inside the cars are much smaller devices. They're much cheaper and much more efficient. But the capability to keep training exists, yes. In my case, it was environmental sensors. We deploy these environmental sensors on the landscape, collect information, train on it, and then use the models to make predictions. But because it's not dangerous, we continue training on the new information that's coming in. The reason they can't do it with cars is simply because that would be dangerous. Yep. I just wonder if it's dangerous, though; as long as you have this base level operating, it's just like a human. We learn while we're out there, and we improve. But you can fool the sensors, and then you're teaching it something that's wrong. Imagine somebody giving you blurry glasses and then telling you to learn something about the system; you might learn something very different that you don't want to use, yeah. So here, if you can see the layers: the first set of layers detects edges, the next set of layers detects features, the next set of layers detects even bigger features, and then you classify the object. So that is a, yeah, go ahead, sir. So the thing this feeds on, the fuel, is all these images. This system analyzes a massive amount of data. As human beings, we're looking around the room, you're right here, all of this. So how far away are we, in the collection of the data, from making this equivalent to everything that I can see and recognize? Are we close? Facial recognition, I now understand, is quite accurate, but you've got to have the faces to recognize. Where are we in that data collection?
Yeah, so typically this comes with what are called labeled data sets. What you do is show a picture to the system and say, that's a Volvo, remember that. But then the Volvo might be looked at from a different angle and so on, and the system eventually learns the way a toddler learns to walk or to recognize pictures. You show a toddler a duck, different ducks, and then the toddler eventually recognizes every possible duck in the world; it doesn't have to be that exact angle or that exact lighting. All right, let me go back to facial recognition. My face doesn't look like it did, unfortunately, at a younger age. If I go back all the way to the moment I was born, what I looked like then versus what I look like today, how do you deal with that in terms of recognition, the capability to recognize that the picture of me as an infant is me as an adult? Yeah, so typically there are some parts of the face, like the separation between the eyes, the length of the nose, the mouth, where the ratios are somewhat maintained. But that doesn't mean the system is going to be good at matching a baby picture to somebody who's old; it might make mistakes there. But if it sees you from different angles, whether you're smiling or frowning, I think it will recognize that that is your face. It will recognize you. And you can see that already on your phones. I don't know if you have an iPhone or anything like that; they already classify images of you in different lighting, with a hat, without a hat, wearing different clothes, and it will still do that for you. So the way I look at artificial intelligence is: in normal software programming, somebody writes a recipe for how to recognize something. I would say, if it has these features and those features, this is what the person looks like.
With artificial intelligence, I just give it loads of data and label the data, and the program writes itself. So it's almost like self-written software. It's not a coder writing the rules of the program; it's the machine writing its own rules to fit the data and the labels you give it. You bombard it with thousands and thousands of images and thousands and thousands of labels. And then you show it an image the computer has never seen before, and it still labels it correctly. But to get back to your question about recognizing you at different ages: we have a class where students do this. If you train an algorithm to distinguish between a car and a cat and a human, you can do that. But if you really wanted it to get good at distinguishing between different cats, you would train it on a different set, with more images of those cats. I was actually thinking that would be a great class project: you'd have a bunch of images and you would label them, and it could get good at recognizing you at different ages, or someone else at different ages. And that is the biggest leap in artificial intelligence. A lot of the algorithms have been with us since the 70s and earlier. The leaps are, one, the hardware enablement: we have hardware that can now process huge amounts of data. And two, the fact that we have the data to process. Those are the two big leaps that have enabled it. And now, with image recognition enabling self-driving cars, there's a huge push from industry to make this real. John mentioned the AI winters; back then there was not a lot of opportunity to commercialize it, but now there's a huge push to commercialize, and there are a lot of successes, as I mentioned, in the AI applications that are already here. Okay. So these are some of the devices that the processing runs on. Initially we started with just general-purpose CPUs.
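The "program writes itself" idea can be illustrated with a toy training loop. The data set and learning rule here (a simple perceptron on two made-up features) are hypothetical stand-ins; the point is that no human writes the classification rule, the weights are rewritten to fit the labels.

```python
import numpy as np

# Toy labeled data set: two features per example, label 0 or 1.
X = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y = np.array([0, 1, 0, 1])

# The "rules" start empty; training rewrites them to fit the labels.
w = np.zeros(2)
b = 0.0
for _ in range(100):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        # Nudge the weights whenever the prediction disagrees with the label.
        w += 0.1 * (yi - pred) * xi
        b += 0.1 * (yi - pred)

# An input the model never saw is still classified correctly.
print(1 if np.array([0.85, 0.75]) @ w + b > 0 else 0)  # prints 1
```

The same loop, scaled up to millions of images and millions of weights, is what the labeled-data-set training described above amounts to.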
What this means is that when you have two cores, I can do two multiplications at a time. I might be able to do each one really fast, but in order to solve the earlier case, where I'm doing all those multiplications just to generate one output, that slows me down. So, as John mentioned, we moved to GPUs, which have many little cores that can do many multiplications at the same time, in parallel. That was the first speedup, and most AI is now in this realm. A lot of the self-driving cars are using GPU devices. These are the same devices you see in your Xbox and in the video cards your kids play games on. The next step was FPGAs, which is what's used in data center clouds now, like Microsoft with Intel parts. Can you explain the acronyms? Yeah, that's a great point. CPU is the central processing unit. GPU is the graphics processing unit. And then you have FPGAs, field-programmable gate arrays. What that means is that you can take a device and program it to do something specific: it's a bunch of discrete gates that you can program to execute a logical function. So that was the natural trend. And now the big jump is ASICs, which, as I mentioned, are application-specific integrated circuits. The GPUs and FPGAs are very flexible, but they're not efficient at doing AI, for example; an ASIC gives you an order of magnitude improvement in efficiency and performance. So you will see Google and other companies already investing money in hardware to build these chips, which gives them much lower power and much better performance. They can do more learning, more training, more complex tasks, and improve the complexity of the problems they can solve. And this is really where, in Vermont, the Avera Semi group is focusing on AI.
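The CPU-versus-GPU point, one multiplication at a time versus many in parallel, can be hinted at even in plain Python by comparing an element-by-element loop with a single batched operation. This is an illustrative sketch, not a measurement of real CPU or GPU hardware; NumPy's batched multiply stands in for the parallel style.

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# CPU-style: one multiplication at a time, in a sequential loop.
t0 = time.perf_counter()
out_loop = [a[i] * b[i] for i in range(n)]
loop_time = time.perf_counter() - t0

# GPU-style idea: issue all the multiplications as one batched operation.
t0 = time.perf_counter()
out_vec = a * b
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time:.3f}s, batched: {vec_time:.3f}s")
```

The batched version is dramatically faster because the work is dispatched as one bulk operation instead of a million separate steps, which is the same reason neural-net workloads moved from CPUs to GPUs, and why ASICs, which hard-wire exactly this pattern, go faster still.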
We basically build those devices for companies like Google. The growth of AI is absolutely huge. We're expecting a 30 to 40% increase in the business we get for these devices. Here I've labeled the CPUs and GPUs, and you can see the ASICs, which we're working on; that's huge growth, about a 30 to 40% CAGR. And by 2030, the worldwide GDP that's added is expected to be about $16 trillion. So this is really the wealth that will be generated across the world using AI. And the question now is, how is that distributed? There's a lot of potential, a lot of growth, but there are also a lot of societal impacts that really need your help, effectively. On this slide, I just wanted to do a human versus AI system comparison. The human brain is about 100 billion neurons; those little balls I had on the picture, there are effectively 100 billion of them. Each neuron fires about 200 times a second; that's the communication between the neurons. The wires connecting neurons are called axons in the human brain, and those transmit signals at about 100 meters per second. And our brain's size limitation is really our skull: how much can you fit in there to do this processing? The human brain has lots of tricks we still don't understand. It really is a powerful machine; what it does within such a limited amount of power is amazing. AI systems are about 30 billion transistors per chip, and that is just per chip; you can put many of those chips together to build systems the size of buildings and so on. And they're growing by what's called Moore's law, which is the doubling of the number of transistors roughly every two years. That is slowing down, but there are new mechanisms that are extending the growth of these devices. These transistors switch about 10 billion times a second, versus 200 times a second for a neuron. So you have a lot more compute power.
The electrical interconnect runs at nearly the speed of light, and you could build a computer the size of any building. So if you look at the trajectory of AI, the x-axis being time and the y-axis being intelligence, and you look at where a mouse's intelligence is, and then at Stephen Hawking and the village idiot, those two are actually very close together on the whole spectrum of intelligence; it's not a huge leap between them. And if you look at AI, there's nothing really to stop it on that trajectory. It will just blow past us. So I think legislation is really needed before we get there. Can I ask a question about that? Imagine a level five car now, a fully automated car. What percentage of that car's cost comes from this? Is this a big part of the expense, a small part of the expense? I know that's a very competitive business in which expense is a very big issue. Yeah, so I was just talking to Volkswagen, and they're looking at how they address their next-generation devices. They're actually saying that 70% of the cost of their cars is eventually going to be electrical components. Not just the brain portion, but the wiring, the sensors you're using. What did you say, 70% of the car? So the thing we think of as the car, the brakes, the steering and all of that, is only 30%? And this would be the brain of it, but you also have the batteries, for example. If you look at a Tesla, it's really a battery, a motor, and a brain; it's a very simple device from that aspect. And is VW or anybody thinking that when you get to automated cars, they're gonna be more expensive than what a car costs you today, or less expensive? It'll be less expensive, yeah, significantly less expensive, eventually.
Right now the biggest cost is the batteries, and there's a lot of focus on getting battery technology improved. It's moving much slower than the artificial intelligence side, but there's a lot of focus on it right now. And some of the sensors, like the LiDAR, for example, are really expensive; 80 grand is what I heard. So what's happening with autonomous cars is that currently there is technology that allows autonomous driving, but it's so expensive, it's like $80,000 to enable it. So they're saying the way they will roll it out is, first you would have what they call robotaxis, where you have a car that's utilized 100% of the time, and it can be autonomous because you don't have to pay the driver, and you'd be able to shuttle people around. Then you would slowly move to personal cars as the technology gets cheaper and cheaper. Can I ask one more question about this slide? Yeah. Before these questions, you were pointing and you said legislation is needed before we get past a certain point. Before we get to here, right? Why? The way I look at it is, well, look at how we treat less intelligent individuals in our world right now, and look at even how we treat animals, right? I don't know if at that point it's too late to actually align on the control of AI. It's kind of too late for your dog to figure out whether you're gonna do something nice or not nice to it. So are you concerned that artificial intelligence might threaten human rights? Absolutely, I mean, long term I am concerned about that. I think before we get here, we have a lot of concerns here as well, but eventually that is something we need to think about, for sure. That said, I think the possibility for good is just amazing, right?
As I mentioned, $16 trillion of wealth generated, and you can argue that could take care of everybody in the world at a really amazing standard of living. I've tried to summarize this here. An AI can consider thousands of parameters and petabytes of data in making decisions; humans can entertain maybe seven thoughts, plus or minus two, at the same time. So it's really powerful. You can really solve problems like self-driving cars; I think 1.3 million people die because of human error on the roads across the world, not in Vermont, but across the world. Cancer detection is already here. There are apparently apps doctors use to take a picture of a mole, and they will tell you, with the same accuracy as the best dermatologists, whether that's cancerous or not. I think IBM Watson Health uses it for cancer detection, recommendations on treatments, and so on. And then all kinds of other diseases, where we cannot take in all this information to make decisions, could actually be tackled with these systems. You can remove dangerous working conditions. You can eliminate drudgery from our lives. If you look at the benefits, they could be tremendous for society. The risks page has a little smaller font. Privacy is a big one; everybody's concerned about that. You have security cameras that are detecting suspicious behavior even without any human interaction. You have a camera that's monitoring, for example, a parking lot. And it's not somebody watching the video feed of that camera who determines whether something's weird; the camera itself figures out that there's an individual moving in a suspicious pattern, and it will alert the authorities and single that out. So you have fewer and fewer individuals monitoring more and more cameras, and you're concentrating power into fewer and fewer people.
There's examples of racial bias in image recognition, depending on what pictures the system was trained on. There's examples of AI that will amplify our primal behavior and bias: Microsoft deployed a chat bot called Tay that became racist very quickly on the web, based on the people it was interacting with. It was learning from the interactions and echoing them back. And then there's the impact on democracy; we're concentrating power in fewer and fewer hands. In the past, a supervisor gave a direction, and employees could make a decision whether they wanted to follow that direction or not if they disagreed with it. It was inefficient and imperfect, but you had a kind of group intelligence as a safeguard. In the future, the supervisor gives a direction and the employee could be an AI robot or something like that; it would be executed without any questions. It's very efficient and exact, but there are really no safeguards from the human perspective. So you need legislation to make sure that when this is here, you really have enough checks and balances. The AI arms race is a big challenge. Between South Korea and North Korea, there's a deployment of what are called sentinels, machines that can be turned to a mode that's fully automated and can make a decision whether to take a human life or not, without any human interaction. Currently the switch is on human control, but they can easily enable that. Wait, they can do that now? They can do that now, yeah. On the border between South Korea and North Korea, there are robots that are effectively cameras with weapons on them. They have the ability to be set in a fully autonomous mode, so when they see a human crossing the border, or an army crossing the border, they can open fire automatically. That is not currently their setting.
The setting is human-guided, but the capability of building such machines is really not too difficult when you think about it. We already have a drone you can fly that can follow you while you ski and take pictures of you. Connect that tracking to a weapon, and you have drones that can do things we might not want them to do. Then inequality is something that's heightened by a lot of these AI developments. Without AI legislation, you could have a level of inequality that's tremendous. You have people that can replace most of their employees with robots, and although there could be wealth generated, the question is how that wealth is distributed. That's a big concern. In the next couple of slides, I showed which jobs are most likely to be affected. This is a snapshot from a TED talk that I was watching while preparing for this meeting. They show two axes here: one axis is creativity, the other axis is compassion, really how much compassion is involved in the job. This portion here, where there's not a lot of compassion and not a lot of creativity, those jobs are at risk first with this type of deployment. Then you have jobs that really need a lot of compassion; those are the jobs that would probably be the most AI-proof. You might need AI to help you with certain tasks, but they would still need the human touch. Creativity is still something that would probably remain a human thing for a while. But then you have the other side, something that requires creativity but doesn't really require a lot of compassion; that would also be affected. So those were the jobs at the greatest risk of being fully replaced, and, for example, everybody's talking about driving.
And I heard a lot of discussions about self-driving cars, but if we're looking at the impact of millions of people being without jobs once truck drivers and taxi drivers are all basically replaced by self-driving cars, you don't only worry about the drivers; you also worry about everything built around truck drivers, like truck stops and anything else in that ecosystem. That's really at risk as well. And as you see, the level of impact depends on the type of job. Question on that, though. All these machines, whether they're autonomous or not, need people to take care of them; you need somebody to maintain them. So won't those jobs just turn into the maintenance of those things instead of the actual operation of them? Yeah, there's definitely some level of maintenance, I agree, but it's going to be much less than all the people doing the work right now who will be affected by it. And you can also have machines maintaining those machines. But I agree with you, jobs could migrate; there are just going to be a lot of jobs that don't, right? What's the professional background of the person who created these charts? I put these charts together, but the data is from the source you can see referenced here. So that person decided that a CEO needs more creativity and compassion than a social worker? I did not; these come from that source. So what's the background of that person? He was a CEO. Oh, okay, for the record, let it be known. There's obviously human bias in these individuals. I totally agree, I totally agree. Maybe artificial intelligence should have designed the chart. If it had been designed by a politician, the politician would be right here.
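The two-axis chart being discussed can be turned into a toy model: score each job on compassion and creativity in [0, 1], and treat jobs low on both axes as most exposed to automation first. The jobs and scores below are invented for illustration, not taken from the actual chart:

```python
# Toy version of the chart's logic: jobs scored in [0, 1] on compassion
# and creativity; low on both axes means most exposed to automation.
# The jobs and scores here are invented, not from the referenced chart.
def automation_risk(compassion, creativity):
    """A job is protected by whichever human quality it has most of."""
    return round(1.0 - max(compassion, creativity), 2)

jobs = {"telemarketer": (0.1, 0.1), "truck driver": (0.2, 0.15),
        "nurse": (0.9, 0.4), "novelist": (0.3, 0.95)}
ranked = sorted(jobs, key=lambda j: automation_risk(*jobs[j]), reverse=True)
print(ranked)   # ['telemarketer', 'truck driver', 'nurse', 'novelist']
```

The bias objection raised in the room applies here too: whoever assigns the scores decides where each job lands.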
This is an interesting perspective from somebody in this business, which is: this can do great things, but don't let it get too good, because we're all going to hell in a handbasket in the end. I mean, there's kind of a fatalistic ending to this. It's not fatalistic. I'm saying that you could have tremendous wealth if you direct AI in the right direction and have legislation to keep it in that direction. Like I said, $16 trillion in GDP; nobody will have to work, everybody can just go play golf and enjoy their life, right? If you direct it in the right direction. If you're misaligned with where AI is directed, if you don't set the right initial conditions, that's where you could end up in a bad position. So I think it's great that we have a task force talking about this early enough. Yeah. These are questions that only an AI computer, in the end, is going to be able to deal with, because they're so large and global. I mean, I understand the level of risk. I understand what the outcomes might be. What I don't understand is how you direct them to the right answer and not the wrong one. We were talking about this a little bit earlier with climate change. Our problem doesn't seem to be the ability to be predictive about climate now; we've got more and more data coming in, and the analysis is getting better and better. It's the policy question of people who then have to say, now that I know this, we have to do this. That is the big weakness, and it's the big weakness here. And this one moves at an exponential rate, so the policy would need to keep pace, and I don't know if that's possible. It's hard to know what to legislate. You say you need to have legislation, but legislation of what? As was pointed out, I think education would be really key.
It would be key for government to have very educated people in AI who are able to recommend the right legislation for it. And it would be important, in my opinion, to continue to talk about this as more developments happen, to be constantly adjusting direction so we can end up with a positive outcome: the slide I had with lots of positive wealth, rather than the negatives associated with it. This was another point from a TED talk. With technologies we came up with in the past, like fire and cars, we could learn from our mistakes. You burned yourself, you didn't do it again. You got seat belts and restraints and airbags, and you reduced the number of deaths. I think with AI we're on the side where we need to get it right the first time. We need to align well to make sure we don't make mistakes. This is an interesting website with a lot of great thinkers in this space, and this is their mission statement; it's an interesting read if anybody wants to spend some more time on it. I think that's all I had. Any questions? Would you be available for follow-up questions if we emailed you? Absolutely. OK. I have a question; it's sort of broader than this, but it's related. We've had people doing presentations, PowerPoints and such. Is it possible that we can collect those and have them posted on the web so that the general public can refer to them? OK, cool. So it's OK with you? Yeah, yeah, absolutely. Great. Well, thank you very much. Appreciate it. Tim? No slides for me. I'm Tim Kenney, and I'm the CEO right now of a small AI startup company that I started with my son, Nate, two years ago. I used to work in AI; I'll tell you about that in a minute. But in many ways, I'm going to be the counterpoint to what you just heard, so let me go from that standpoint.
I'm going to roll way back to me coming out of Saint Michael's College in 1986, when AI was hot. It was so hot that I wanted to go study it in grad school, and I finally got in; there were like five really good programs in the US. I went to UW-Madison and studied artificial intelligence and vision there. I got my master's degree, I was excited, and I left. I got a job working for a big company doing computer-aided design. And we built a super cool piece of AI to get rid of a drafting job. Back then, when you got ready to make a part, there was a step where a guy got paid to label the part so that it could be put into a machine by another guy, the one who would set up the machine, and it would stamp out these parts. It was a big part of the industry. We worked for two years on that piece of AI. We came out with it, it sold for three months, and then it got shut down. The reason it did was they came up with a way to get rid of the next job, too, by automating the whole plan right into the machine. So both of those jobs disappeared, our AI went with them, and our whole unit got shut down. And I went off to IDX next. I did AI at IDX, too. So what did I do there? I was a software developer running a team, but we worked with the support group, because the calls were coming in from the customers and the business was growing so fast. We said, can we automate this so that the people taking the calls can find the answer faster? So we built what back then was considered AI. Today, you know what you'd call it? You'd call it as simple as the Google search engine. But it was AI back then. We used all this machine learning, and we built some really cool stuff. And did we take jobs out of the system?
Probably future hires that never got hired to do the research on each individual problem, because the support people could look it up, find it, and get to the customer very quickly. So we did that. Then I went on to work on medical record systems there, all of this using machine learning at times. But AI had cooled off; I think you've heard about the AI winter. It wasn't called AI. We called it object-oriented programming and support vector machines, and it was all these algorithms, tricks of algorithms, and we would build them into the systems. Well, GE bought IDX, and I became a global vice president of R&D for imaging. So I had 12 offices around the globe, and I would go visit these teams. We had two groups in Europe that had some really cool AI vision for radiology: cancer detection in lung and breast imaging. It was great, and statistically, it was better than a doctor. Well, we installed that in a clinic, and they used it for a period of time, and it was better than them while we were tracking it. And they shut it off. The reason they did was they were worried about liability issues when the doctor disagreed with the AI, with their reputations on the line. There were also false positives from the AI, and they hated that, because then they had to explain why they disagreed with the AI. So that sort of faded away, and that was in the 2000s. And then, obviously, the rest of everything going on we just considered IT, but there's machine learning built into so many things that we're using today. For me, there's no fear in any of the AI that's out there. I will say, I don't really want to walk by an armed robot at the front desk that has bullets in it and is ready to shoot me if it decides I'm a threat. I'd like a person to make that decision. And I think we're a long way from that kind of ability.
Now, getting to the tech talk on the hardware. The hardware now enables me to do something that I never could have done even 10 years ago at GE, because the tech didn't exist. We came up with a way to combine what we call n-dimensional data to come up with predictions. What that means to us is, think of an MRI image, the radiology images. This is how an actual radiologist diagnoses: they don't just look at an image. They look at your medical record, they look at the other lab tests, they look at the images. And we came up with an AI. We're waiting to see if we can get a patent, and it's very hard to get software patents these days, so I give us less than 50-50 odds that that money will have been well spent on the patent filing. But we wanted to be able to predict those types of things. And in that process, my son, who's more into sports than I am, happened to say, hey, Dad, couldn't we feed this sports data and use the same n-dimensional thing to predict sports? And I was like, yeah, let's do an experiment; let's find where we can buy the cheapest sports data. And it's funny, because I'm not a gambler by nature, because I can do math, so I don't usually like to lose my money. We ended up with the horse racing industry. We happened to call the Kentucky Derby exacta pick as the best pick, and it won. People asked me, well, how much did you make? It paid $63 to one, and I said, well, it was a minimum $2 bet; I've got $126. So we're in the process of trying to license that tech out right now into that industry. How good is it? Well, we're 10% more accurate than the best experts. And why? Because, like you heard, the software we developed can analyze 1,500 data elements per horse per race and look through their entire history to say, how is it going to run today? Any normal person who sits at the track all the time, looks at the form, and goes and looks at the horses can get an instinct for it.
They can get very good, but they can't do that kind of analysis. And it's an algorithmic trick. You can't just pick up what we did, and this goes back to the labeling piece we heard about, and say, hey, now let's do the NFL, let's do fantasy football next. It's actually a lot of work, because you've got to label everything and set it up in the right way. It's not general purpose; it can't learn a new domain on its own. So, on that, I happen to disagree with the predictions of a singularity coming in the next 50 years. I think we're hundreds and hundreds of years away from it. And I would be excited to say that none of us have to work again, and that type of thing. But the automation is going to happen with or without the legislation, because that's what always happens. When you go into McDonald's today, and I happen to take my little brother, who loves to go to McDonald's, the front counter is gone now; you order at the devices. So they just automated those jobs. Some of the front counter jobs are going. Well, can AI make fries better than the fry person? You bet it can. Never gets tired. And it can always pull them out at exactly the right time, see if they're crisp enough, put them back down if they're not. And we're not going to be able to stop this. My personal opinion is that day is coming in four or five years. And you won't call it AI anymore; you'll just call it software. You'll say, can the software run the fry machine? And the answer is yes. Can the software drive the car better? And the answer is probably, statistically, yes. There will be situations, and the transportation talk, which was excellent, gave the example of the airplane crash, where those things are going to happen, and they're going to be catastrophic. The question is, do they happen less often? The answer will be probably much less.
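The labeled, feature-based race scoring Tim describes, where every past race has to be labeled win-or-lose and every horse reduced to numeric features before anything can be learned, can be sketched with a tiny hand-rolled logistic regression. The two features and all the data here are hypothetical stand-ins for the real 1,500 data elements:

```python
# Sketch of labeled, feature-based race scoring: past races are labeled
# (win or lose) and each horse is reduced to numeric features, then a
# model is fit and used to rank today's field. All data is invented.
import math

def fit_weights(rows, labels, lr=0.1, epochs=2000):
    """Tiny hand-rolled logistic regression on (features, win/lose)."""
    w, b = [0.0] * len(rows[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y                       # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def score(w, b, x):
    """Estimated win probability for one horse's feature vector."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Features: (recent speed figure, days since last race), both scaled 0-1.
past = [(0.9, 0.1), (0.4, 0.8), (0.8, 0.2), (0.3, 0.9), (0.7, 0.3), (0.2, 0.7)]
wins = [1, 0, 1, 0, 1, 0]
w, b = fit_weights(past, wins)

field = {"Horse A": (0.85, 0.15), "Horse B": (0.35, 0.80)}
pick = max(field, key=lambda name: score(w, b, field[name]))
print(pick)   # Horse A
```

The point of the sketch is the setup cost Tim mentions: none of this transfers to football until someone re-labels and re-engineers features for the new domain.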
But it'll be a machine making that decision, and do we really want a machine making that decision? That's the hard thing we have to decide. But I worry, and the reason I said yes to coming here is that I hear some great things about education and some things we need to do, but I worry about Vermont and our tech industry here looking silly if we pass something that looks silly to the rest of the United States. We are going to lose jobs. It's already hard to recruit here; it's hard for me to find the people that I need. I was also president and CTO of MyWebGrocer as we grew, before we sold to a private equity group. I saw 60 million transactions come in on the system every night from groceries, and I will guarantee you that I knew what you were going to put on your list better than you did, before you went into the store. We actually built that for one chain, and it creeped the customers out; we had to turn it off. We would say, here's what you're going to buy next time you're in the store, and it didn't go well. I don't like that. Yeah, exactly. Instead, we turned it around to the advertising side, so we knew when to sell the ketchup people the right to push ketchup, and we could say, do you want to buy this one? But I would like you, as the committee or team doing this review, to be extremely cautious about setting a negative tone toward what I consider software automation. Because if you knock us down, you set back what many of us have worked very hard on in this community over the last 30, 40 years or longer: building Vermont's tech reputation. GlobalFoundries is a result of that. Companies like Dealer.com, IDX, GE, MyWebGrocer, those things take a long time, to get the right people here, to do the recruiting. And if we start passing legislation that looks like we're afraid and isn't wise, then we're going to have some real problems.
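One plausible shape for the "we knew your list before you did" prediction, a guess at the general idea rather than the speaker's actual method, is to score each product by how overdue it is relative to the shopper's usual repurchase cycle. The purchase history below is invented for illustration:

```python
# Toy next-shopping-list prediction: score each product by how overdue
# it is relative to the shopper's usual repurchase cycle. The purchase
# history here is invented for illustration.
from collections import defaultdict

def predict_list(history, today, top_n=2):
    """history: (day, item) pairs; return the top_n most-overdue staples."""
    buy_days = defaultdict(list)
    for day, item in sorted(history):
        buy_days[item].append(day)
    overdue = {}
    for item, days in buy_days.items():
        if len(days) < 2:
            continue                          # need two buys to see a cycle
        cycle = (days[-1] - days[0]) / (len(days) - 1)
        overdue[item] = (today - days[-1]) / cycle   # 1.0 = due again now
    return sorted(overdue, key=overdue.get, reverse=True)[:top_n]

history = [(0, "milk"), (7, "milk"), (14, "milk"),   # weekly habit
           (0, "coffee"), (21, "coffee"),            # three-week habit
           (10, "cake mix")]                         # one-off, ignored
print(predict_list(history, today=22))   # ['milk', 'coffee']
```

With millions of nightly transactions, even a simple cycle model like this gets eerily accurate, which is exactly the creepiness the customers reacted to.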
And I think that's the main thing I came for: to give another standpoint and opinion. And just to be clear, I totally agree with you. I think we need more education, more people who are capable of doing AI, so we can also steer that direction and make sure everybody benefits. I'm totally not negative on AI. I think it's a great technology, and just like any other technology, it has the power to be used for both good and evil. We just want to make sure we get that right. Let me come at it from the positive side. You're worried about us recommending things that would make the climate less hospitable; are there things we could recommend that would improve the climate for AI development and move it along? I'm going to address that a little more from the software side and a little less from AI. Education is certainly number one. Let me tell you, at MyWebGrocer we had 300 software engineers working for us in Romania, in one location. That city in Romania graduates more software developers than all of the colleges in New England and New York State combined. One city. And it's because they don't have any other main industry; if you're super smart, you go to school for software. And these people are excellent, and I will tell anybody that. Now, we wanted to keep the core of our technology in our control, and we didn't want to set up a complete shop there, so we kept a lot of engineers here too. But that education piece, helping people get that high-end education in software development, or what today we might call data science, which isn't really AI yet, maybe it will be someday, I think that's important. The second thing is that access to money for startup companies is still a major problem in Vermont, and it's why you see these companies start up in Massachusetts.
We are not yet at the stage where we're out looking for capital. But when I am, my expectation is that I will have to set up the headquarters in Boston in order to get the funding we're looking for, because I can't find it here. We found that even at MyWebGrocer, at the stage where we took in venture money, it came from a Boston-based firm, and the only reason they were willing to do it was that the company was so far along; they wouldn't have touched it in its early stage, because they don't want to make the trip up here. So we need to find a way to free up more capital for startup companies, too. I agree. I think we need to bring more AI startups into the state of Vermont. That would be a benefit to Vermont, especially as jobs transition from one set of jobs to others. And then there's connectivity: how about the fastest internet service in the world, or just about? That's another gigantic problem for us in the remote areas. I look at that any time my wife spends too long on Zillow; I'm like, no, no, that one doesn't have a fast enough internet connection. Now, a question for you. You mentioned concerns about this task force recommending legislation that might make Vermont look silly or that might hurt the economy. Our mission is broader than just recommending legislation. We've been asked to look at how Vermont could be a place that promotes development of ethical AI, how AI could be used to improve the efficiency of government and to improve our economy, and also whether or not a commission should be established for ongoing work. So it's not just that within a year we're expected to propose laws or regulations; that's only one of the things. Having put that out there, I'm curious: what do you think about an ongoing commission to look at those things? Or do you think the government should just stay out of it and trust corporations?
I think it falls into the individual areas of government. The transportation one was an excellent example: the people who oversee transportation looking at whether the cars are equipped to drive themselves. But to look at AI as its own thing, I don't know how you do that. There's AI in the washing machine when you use the smart wash cycle; there really is, it's machine learning, that's how they train it. I don't know how you say whether it's ethical or not as you get into software. Is it ethical to show you an ad to buy ketchup right now? I don't know how we regulate something like that. You're concerned about having a robot manning the checkpoint, right? Yes. So it has ethical components. And to me, there are ethical components to those types of things, but I don't see it as AI so much as it's an armed device. I think we get scared when we say it's artificial intelligence, but it's not general-purpose AI; it's very trained and detailed in one specific area. To me, it's about the safety of a device, not whether the technology under the hood is called AI or not. Yes, I agree. I mean, AI could be just an optimized software program. The only difference is that nobody has hand-coded it; the machine has effectively grown the program itself. So the testing is definitely there, but that's true of any device. I think the government would probably benefit from looking at the technology direction we're headed in, looking ahead and trying to see how to meet the problems that might be arising, for example, self-driving cars taking people out of the jobs of driving trucks. I think that would be the benefit of having experts like us present more of the technology we think is coming.
One other thing I just want to throw out there to think about. I hear your point about how it could be handled in all the different silos of government. But one of the things we hear about a lot with problems in government is that historically, we humans, perhaps because of the way our brains are designed and the cognitive biases we have, break things into categories so that we can understand them, and in that process of separating things, we have created problems for ourselves. So the question I would have, not for you necessarily, but for our group and for our society, is: do we need to be thinking more interdisciplinarily, and do we need to be thinking about the intersections? Artificial intelligence could be viewed as a tool that has the capability, on its own, of exceeding its maker. And if that kind of tool is appearing across disciplines and we're not communicating between those silos, does that undermine our efforts? This goes back to your presentation. So I just want to throw it out there. It's a question, and I've heard people present both sides for the last two years. So I hear your point, too. My question is, can we trust that if we break things into pieces, we're going to be able to stay ahead of it? I don't think you have the answer; I don't think anyone does yet. I think we need to look at this as a powerful technology and figure out how to best control it, to some extent. And controlling it might mean actually bringing a lot of AI to Vermont, educating people about it, so we can reap the financial benefits of it but also have better-educated legislation to address it. Thanks. Thank you very much. Thanks. Up next is Joey Appleton. You're right on schedule. Oh, yeah. You're right on schedule.
We're not really on schedule. Oh, all right. Excuse me. Joey Appleton is my name. I work with Engineers Construction, ECI. We've got hundreds of vehicles on the road throughout the state of Vermont, so you've probably seen them. We build a lot of the civil infrastructure that we all drive on every day, and we do building projects, playgrounds, that type of thing, so we get involved in a lot of different projects all throughout the state. In a lot of the things that we do, we rely on people for pretty much everything. We have equipment that can move things, but right now those are very dumb tools that require very well-trained people, and one of the problems we do have right now is getting well-trained people in the door; the whole industry is struggling with that question over and over. My background is civil engineering. I've worked in Boston, and I've worked in the Gulf; I've built a lot of projects in the Middle East and seen individual projects that are larger than what the whole entire state of Vermont spends on construction. So I've seen work across the board, at different magnitudes and with different technologies. Some of the things we are actively doing right now in construction, at ECI or in general, could actually be considered artificial intelligence. And I, in general, consider artificial intelligence anything where we're relying on a computer to make the decision, and it's not just a linear path of going from A to B. So right now, for example, the biggest area of AI in practice is machine control: equipment that can control itself, that can drive itself.
Right now, in our program, we have several pieces of equipment that have an operator sitting in the seat, but with a computer that guides the machine: it tells the excavator how to dig down, it tells the bulldozer how to lower and raise the blade so it can sculpt the runway exactly to what the model tells it to do. That's all GPS control, and that's about as advanced as it gets right now in the state of Vermont. Japan and some of the other Asian markets are a little more advanced; they actually have operators controlling pieces of equipment remotely, from workstations in other countries. On bigger projects, larger mining operations and things of that nature, with tremendous amounts of very large, very expensive equipment, that's where you see more of the overall automation and less human involvement in the operation. Construction in general is about 10 years behind the manufacturing industries on technology, and the state of Vermont, I would say, based on my experience, is probably another 10 years behind some of the other states and countries as far as technology goes. In the construction sector, we recognize that most of our jobs are over budget and over time, for many different reasons: we never build the same building twice, we never see the same job site twice, we only build one of a kind, compared to the manufacturing industry. That's a big reason for it. But some of the biggest opportunities out there right now are related to software systems that can optimize schedules, that can understand all the different widgets that go into a project and then build a schedule that no human can possibly build, and identify the steps.
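The core of that schedule-building problem is critical-path arithmetic: from task durations and dependency links, compute each task's earliest start and the total project length. A minimal sketch, with hypothetical tasks and durations:

```python
# Critical-path-style scheduling sketch: given task durations and
# prerequisites, compute earliest start times and total project length.
# The tasks and durations here are hypothetical.
def schedule(durations, deps):
    """durations: {task: days}; deps: {task: [prerequisite tasks]}."""
    start = {}
    def earliest(task):
        if task not in start:
            # A task starts once its latest prerequisite has finished.
            start[task] = max((earliest(p) + durations[p]
                               for p in deps.get(task, [])), default=0)
        return start[task]
    for task in durations:
        earliest(task)
    finish = max(start[t] + durations[t] for t in durations)
    return start, finish

durations = {"excavate": 5, "foundation": 10, "frame": 15, "electrical": 7}
deps = {"foundation": ["excavate"], "frame": ["foundation"],
        "electrical": ["frame"]}
starts, total = schedule(durations, deps)
print(starts["frame"], total)   # 15 37
```

A real project has thousands of interdependent tasks plus crews, weather, and material lead times; that combinatorial size is why no human can build the optimal schedule by hand.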
So there are ways we can streamline construction, for sure, with fewer people on the ground, which makes the job site safer; there are certainly a lot of safety aspects to it that can be seen in the field. Every day we're making dozens, hundreds, thousands of different decisions on how to deal with situations, so there's definitely plenty of room for computer-guided decision-making: for scheduling, for sequencing, for material procurement, and even in the design and engineering phases. One of the first things we do when we get a project is look at the drawing set and try to understand the scope of the work, and it doesn't take us very long to ask, why are we doing it this way? Why did the designer do it that way? So I think there's a lot of room for artificial intelligence to be applied in helping, say, the Agency of Transportation decide what projects need to be put out there next, and what their scope should be. Should we replace that railroad bridge, or should we rehab that railroad bridge? There are millions of different decisions that can be made, and I certainly think that will help us be a little more efficient overall as a sector. At the same time, the more efficient we are, the less waste we have, and some of that shift lands somewhere: some of the labor staff get paid less because they're working fewer hours, because the jobs are getting done quicker. But there are lots of opportunities in that respect. As far as autonomous vehicles go, we've got dump trucks driving down the road, we've got equipment being hauled back and forth, so that technology is certainly something that will tremendously impact the construction industry.
It's scary to think that you'd have a dump truck loaded with stone driving down the highway next to you with nobody in it. So all of the legislation and all of the effort that goes into keeping the transportation industry safe in that respect is certainly something that will need to be considered and continued as the technology matures. Connectivity is definitely an issue in the state of Vermont. We work all over the state, and more often than not we're in an area that has zero connectivity. I get in trouble all the time; I've been in the state of Vermont for two years now, and I'll navigate myself out to a job to go look at it, but once I get there I realize I probably should have been paying closer attention, because now I've got no service and I can't navigate myself back, and I end up wandering the back roads. So one of the things we should definitely be doing in the state of Vermont is expanding that network. If we want AI, and we want big data collection and big data use, connectivity will be a huge, huge thing that we need to invest in. At the same time, I actually bought a camp in Vermont several years back, and I bought it because it was in the middle of nowhere: my cell phone didn't work, no one could call me, and it was nice. That's part of why we come to Vermont, so there's a balance to strike there. The Agency of Transportation is definitely the place to start; it's the biggest buyer in the construction sector in the state as far as a single agency or single entity goes. So they should be getting data collected, starting to actually build up databases and information. We're a very slow industry at that.
I know there are some big pushes right now at AOT to expand their automated systems, their data-collection systems for managing the industry and construction projects. That's huge, and it's definitely something to keep up. My company in general is in a transformation right now, from filling out paper time cards at the end of each week to an automated, cloud-based data-collection system. We can't do anything without the big data sets, so as an industry we definitely need to start collecting more data that's actually available to be used; if we can't access a data set, we can't do anything with it. So encouraging technology producers to actually invest in data sets that we'll be able to see would be good. Education is going to be huge. Right now we'll take anyone who's got a pulse and can do the job; we are still in need of people. So it's going to be many years down the road before we're very, very reliant on automation, from what I'm seeing. Twenty-five to forty-five years from now we'll see what's going on; maybe we'll have a few jobs shift, maybe five to ten percent fewer jobs, with the machinery and software taking over, if you will. I don't see us moving much beyond that in general; I think we've got a long way to go before we even start to make that progress. So as far as legislation goes, and as far as contributing to this growing industry: connectivity, and education. We've got to educate people to get them back into the industry in general, so they actually care about the construction industry and actually contribute to it. In many respects it's kind of a dying trade right now, so it's about getting people back into the trades, and getting people educated on the software side and the technology side so they'll actually put their effort back into it. Does anyone have any questions?
So it almost sounds like you need to invest in artificial intelligence because of the lack of a labor force. Right now, I would say yes, unless we get more people coming in the door looking for jobs in this industry. If you have no training, no formal education, it's not always a very pleasant working experience. It's cold out there; yesterday it was below zero for the guys in the field. Some people love that kind of work, some don't. So in the short term it sounds like you need more workers, but as technology progresses, you will inevitably need fewer workers. So part of what we need to think about is that we might want to encourage people in the short term, but in the long term, if we put too much energy into pushing people into construction, they're going to be out of jobs in twenty or thirty years, so we have to think about the balance. I think there will definitely be a big shift before we start displacing people. That's probably inevitable, but probably forecastable. Definitely get more people into the trades, and more people with technology backgrounds into the trades; not everyone has to come in and be a laborer or an equipment operator. We do need more people who show up and have an interest in it. And even though we'll take anybody with a pulse to come and join us, it is a very complicated and complex industry; it's not like manufacturing. Everything is different: a different location, different constraints, different environments and conditions. For those reasons it's definitely going to be very slow at adopting the technology compared to other industries. The more we can take the lessons learned from the manufacturing industry and apply them to the construction industry, the better, and it's going to take people actually putting in the effort and the time to make those connections.
And then there's getting everybody on board: the designers, the architects, everybody who's got a say in everything. A lot of the time there's a big not-in-my-backyard type mentality. People don't want to look at ugly bridges; they want a bridge that blends into the environment. And the environment is different on every square foot you're on, which makes it tough to introduce manufacturing-type practices that rely on mass production and mass repetition. I do have one more question; it's a simple one. You've said more than once that you're looking for help. If people in the general public are watching us, or watch this recording, and are looking for a job, how could they get in contact with your company? Our website. Can you say it again, since there might be people watching? Yeah, through our company website. Thank you. One quick question. Yeah, go for it. This is on behalf of some young people I know. You mentioned the Montpelier Elementary School playground, and you also said that a lot of projects come in over budget and over schedule. I just wanted to know, on the record: is that project going to come in on schedule? That project will come in on schedule. That's great to hear. It will come in on schedule. That's wonderful. And I think it could come in on budget too, with so much being raised in donations. Great, thanks very much. I imagine, because of your lack of labor force while you continuously need people, you're having to turn down work, right? Right; we're having to be more creative about how we staff jobs. We're having to be more creative about how we optimize where we send people, and weigh the risk of one client getting upset because we left their job to work on another one. So that's another area where AI could help us: managing resources.
So are you becoming more efficient; is software for scheduling and sequencing helping you? Yes. Right now we are introducing software, in the implementation stage as we speak, to reduce the amount of time we spend on some of the daily work, and to reduce the amount of time it takes our payroll department to process the paychecks. The more time you can free people up, the more productive you can be, the more we can do with what we have. What is optimum employment, if you're a company? Optimum employment? It's a very fluctuating industry, especially in Vermont, especially across the seasons. Right now we try to keep about 200 through the peak of the season, and we'll drop down to 100 or fewer in the middle of the winter when we slow down. We are a growing company, though. We are looking to expand into market sectors that are drastically underserved, or where there's a lack of talent and expertise, especially in Vermont, especially on the private side of things, so that's an area we're looking to expand into. Thanks for your time. Thank you. Okay. So my name is Jill Charbonneau, as I mentioned before; I'm president of the Vermont AFL-CIO. I am a retired letter carrier, which means I spent 31 years working outdoors, and I'm here to tell about it, but I'm not going to. You know, this is like Charles Dickens: it was the best of times, it was the worst of times. AI offers us great opportunity, but we are going to have to set a standard of living alongside it. Here we are, the richest nation on the planet; we need to set a standard of living for people as AI moves forward. There are many people in this room who know a lot more about this than I do, but I look at it like AI is kind of having hiccups and burps right now.
And boy, when this stuff unrolls, it's going to go fast, and we are going to be faced with some incredible decisions that we should be making now. So I scanned some information from the Transportation Trades Department of the AFL-CIO, where they respond to legislation in Congress, and they make remarks along these lines: they believe the existing regulatory regime is inadequate for these vehicles, talking about automated, self-driving vehicles, levels one through five. When Congress passed the National Traffic and Motor Vehicle Safety Act of 1966, the law vested NHTSA with the responsibility for protecting the public against accidents created by improper design, construction, or performance of motor vehicles. For automated vehicles, NHTSA's voluntary, unenforceable guidelines do not uphold the agency's founding mission of ensuring safety and protecting the public; by allowing manufacturers to deviate from or otherwise ignore the guidelines, this approach may create a dangerous patchwork regime of non-compliance. These are the kinds of things that are attracting the eye of labor, and they want to be in the mix. When committees like this one develop at the national level, they want to have a voice in how this unrolls and in what happens to displaced workers. From 2008 to 2010 they did an examination of the workforce, looking at making sure that when jobs are displaced, workers get proper training; proper training so that a person like me, who is technologically inept, has the ability to find a meaningful job; and looking at geographic locations too. One industry may shut down; look at the auto plants all over Michigan, and what happened to those communities. How do you keep building a community when the industry leaves? These are all questions that need to be answered, and we should be prepared to answer them and have solutions. And we envision a workday that's shorter.
We envision more flexibility, more time for people and their families. Right now we're at an odd place where people are working more, because automation isn't at that point yet and we still need a lot of workers. But we're going to get past that, and when we do, how do we handle it? We're not doing a great job right now: we underpay people, and people work for two, three, possibly even four different employers and are never home to raise their families, and so on. So it's time to take a good look, and hopefully our legislature will focus on some of these issues, and not handle it the way it went with workers displaced by NAFTA trade, where the amount of money displaced workers were eligible for was this big and their loss was this big. We need to do a better job of ensuring a standard of living, for the global community and certainly for the richest nation on earth, because we create pockets of third-world poverty in this country. So that's the short and long of what I wanted to say, to keep us on time. Is there a specific model? Let me sharpen that a little. Let's say that 50% of the people who are now in the workforce, something like 170,000 people, lose their jobs. That's a huge thing in those terms, and obviously it's more so in some places than others. Is there an example of a kind of program that has worked with displaced workers that would be an effective response? I get that retraining is part of it. But then again, if you're in an overall reduction of workers in the economy, training is only going to be part of the answer. You don't want to say that the rest of these people can just go on welfare; that's not really what you're looking for. No. So what is an example of a kind of program that could meet this need and has had an effective impact?
What I think of when you ask me that question is sitting in the legislature and listening to people talk about the need for 10,000 workers a year in Vermont. We spend a lot of our budget on schools, and we work really hard, and our kids do a good job in school, but still we haven't met that training need, even for what we need today in the state of Vermont. So I can't say there's a magic bullet or a silver bullet or anything of that nature. But we have to be more thoughtful. This problem of not having workers in Vermont didn't just happen today, but we have to address it and keep up with it, and look in particular at the technological skills we don't have. We have an aging population, and that population is going to need some of those people from the TED talk to help take care of them. So we really have to start gearing ourselves in two directions: one toward the nurturing fields and one toward the technical fields. How do we do that? Well, Romania apparently has an idea of how to get that done, and they have better internet than we do, because they invested in it. So where do we best do that? I don't know the answer, Milo, but I know that we are a thoughtful state, and we will work on it, and hopefully we will do better than we have done. I think the curriculum is a really key piece, in my opinion. To some extent, you want to take a big chunk of that $16 trillion worldwide and put it in Vermont, ideally. So the curriculum, the training, the people to actually take a piece of that, is a huge thing. And I think it should start as early as high school. We've described a very complex principle, but it really isn't that complex to actually use. So training has to penetrate.
I mean, along your visions here, it could be that we just have less work: people have more time, and fewer workers are needed in the economy. How do we then deal with that transition? How does work function then? So the question becomes, as you transition toward that period, you're going to see drivers potentially losing their livelihoods, but you will also see a lot of people still able to work in the creative fields and the compassionate fields, the fields where you need the human touch. At the same time, the people doing creative things need help too, and that's where a lot of the training around AI would help. You would shift jobs into those positions and away from the positions that don't need as much creativity or as much compassion. So maybe you can target those groups and start training them to move in a direction where they will ultimately be needed. And eventually, ideally, if we get to the point where all this income is coming in and you don't have people needing to work, then things like universal basic income come to mind; a lot of countries are experimenting with that as well. That sounds, and this is not a judgment, but it just sounds like socialism in a way, right? Somehow we're generating all this revenue. It's hard to picture, because we need to spend billions of dollars on infrastructure, and we need roads we don't have; there's just so much work that needs to be done today. Yeah, I know. But still, it's generating all this revenue, I get that, and it's all going to be robotic, and somebody's still going to own those companies generating all that revenue, and then it gets redistributed to all the rest of us so we don't have to work. I mean, it sounds wonderful. Think of the industrial revolution; how many times does that have to happen before we catch on? That sounds great.
Technology will ramp up and displace people from their jobs, and at that point, what do you do for those people if they don't have income? I was saying, just in the short term, practically speaking, we've had a hard time finding people who can drive trucks. We've had to sit two people in each of our trucks until one gets trained up, and it's really difficult. In earlier times, farmers would come off the farm with those basic skills; it's hard to find people like that now, and it's hard to find people just to drive trucks, period. We heard that. We heard last time that it's hard to find people who want to farm. So I'm kind of wondering whether this is more of an opportunity than a risk at the moment: people don't want to do those jobs, and technology is coming into that space over time, so why push people in that direction? It's less of a retraining thing, because we don't have enough people in those jobs right now anyway; it's not that we need retraining so much as: how do we train the workforce of the future? We need 10,000 workers a year in Vermont, and we have 4,000 people each year who fall off the rolls. They need a job, a purpose, a reason to get up and go to work and feel productive, with something to show at the end of their day, I would say. So the situation already exists, and we need to address it and find a way to get people out there at work so they feel there was a point in going. Is that the result of innovation and AI, or is that just... I think it has something to do with the education system, okay? That would be my... Is there really going to be more of a need for retraining because of this? There's a need for training right now. Forget retraining; there's a need for training. Are we going to have another opportunity to explore this topic more in depth?
Because there are a few things you both have said that I'd like to get into, and we don't have time, so I'm just going to state them. One was this idea; I heard you say something about socialism and redistributing wealth. I would ask the question: is it fair that we have a society set up that allows certain people to develop systems to extract wealth on the backs of others and accumulate that wealth? We've always had that. Let me just finish my question; you both just had a big chunk of time. So that's a question for the group to think about, especially as part of the AI discussion: is it fair that we continue to allow those systems to exist and let AI enhance them? Is it fair that we have AI being used to exploit, extract, and accumulate wealth without a plan for humanity and the earth afterward? That's a question; I don't expect us to get into it now. The other question is: are we going to look at a universal basic income? I am not advocating for it; I'm asking about it, because it comes up often, and people have asked me whether this group is going to look at it. I said, you know, we're looking at AI very broadly, and that's a whole other thing that's separate from AI, but is it going to come up? So those are two things that came up in my head from you two going back and forth. Well, who knows when that's coming? I guess my point is that it's hard to picture right now, when we have all this work that needs to be done, whether it's training people or fixing roads. But when AI gets its wings, it's going to fly, and it's going to happen fast. That's what I would say to you. So maybe this transitions to the last part of the day, because we're going to talk about future topics and kind of break up tasks.
So maybe I'll stop and ask at that point: can we look at some of these things more? Because I feel like we just opened something up. The idea, as I understood it, was that in these earlier meetings we would go through the areas of application, and by doing that we would draw out the general issues that cross all the lines. This is one of those issues, and we'll have a few more meetings after that to start doing exactly what you're talking about. And I agree, this is one of the things we need to talk about. Okay. That's my perspective. We need to talk about it, but as a person who spent their life in the environmental, health care, and education systems: for those three things I'm a socialist, but I live in a country that's market-driven, and I don't think we're going to be able to solve the fact that we live in a country that's market-driven. We should worry about what we can do here, and maybe be the role model for what can work in the system we're in. Not stuck in; plenty of people are very fond of it. Yeah, we may not change the rest of the country, and we may not solve much of anything, but I think one of the biggest things we will do is bring issues to light, and that might spark national and international discussion, or feed into it. I noticed in the first report, and I might not have put this in the document, that there is a lot of international work going on on this subject alongside the US work. So we can borrow ideas; others have done some of the work for us, in a way. Okay, can we move on? Yeah. Okay. Have we actually approved these minutes? No, we should do it now. Yeah. Let's do that right after this one.
I see he's putting his stuff away, so you're good. Do we have seven? One, two, three, four, five, six, seven; we have seven. Can you stick around for a little bit? Yeah, just come in closer. I think this is a little more important than the approval of the minutes. One thing: back in, I think it was October, the October meeting, we decided to talk to speakers from all these industries, and there are certain industries we have not allocated to future meetings. I wanted to discuss whether we should, and if so, which industries, and pick subcommittee chairs and members. One area that comes to mind without looking is criminal justice. Over the last few meetings we organized the first three of them, and there are, I think, two or three left. While you were talking, I went back to the minutes on exactly that point, and I understand we agreed to five areas: number one, which we did today, transportation, technology, and manufacturing; social services and education; criminal justice; medical and health insurance; and services, retail, and food. So I'm taking that from the minutes as the plan through the end of the year, for the time being. If we stay on that schedule, we've got one more of the first three, the medical and health insurance one, to do, and then we're going to do two others, and we just need to put together the committees for those. And I'd like to do the work on criminal justice; I'd be happy to chair it. So Gene and I have been working on setting up health insurance and medical. And that's your next one? Yeah, we plan for that, so we're going to do that. I would like to participate in that too. And there are two more after that. Go ahead. We have five to seven people we've been reaching out to.
I don't know if we have a confirmed list, but we're doing that work, and we're happy to do it. So what we have to do is call Gene, who's coordinating it, talk with him, and brainstorm with him, and we will end up by our next meeting with some witnesses. We may not get everyone we want, but we're trying, and there are some really good witnesses popping up. Good. So what just happened: I heard a motion for you to be the subcommittee chair for criminal justice. I know, we're talking about it. I think so; that's what I heard. That's what I heard. Discussion? Yeah. Okay. I think that's exactly what occurred before I could say no. All right, yes. Does anybody else have an interest in this? I would help with that. Yeah. On social services, you seem to be on most of these subcommittees, and that's very generous support, but I didn't do much on the others because I didn't really have the expertise. I'd like to help you; I may not join your meetings, but I have some ideas. I've talked with the judiciary, and maybe we could talk outside of this meeting about what I've heard. Okay. And the education side too. Okay, deal. Like a student? A student in the room? Oh my Lord, you're getting recruited. I'm interested in being a part of that, but probably not as the chair; I can't. No, no, now you're recruiting other people. I accepted. Oh, I see. Recruiting people who aren't here, first of all, right? Right. Yeah. That's one way to do it. Okay, well, we'll get to that. And we haven't set meetings yet for the next two, right? February and March. We have January; I see it. So this would be February. So we want to try to schedule that now. Yeah, because if we start recruiting people, we need to know when we're recruiting them for, right?
That's what happened with the rescheduling, actually; you need the dates to recruit. Yeah, I would definitely suggest we pick the February and March dates now, even if we don't know the exact topic for March. Maybe February could be criminal justice, since we have an eager volunteer, you know? And let's see who's here today. I think there are eight of us. Eight? One, two, three, four, five, six, seven. There are six in the room. One was John. Oh, John was here, that's right. Yeah. Gene. Trey. ACCD. Yeah. We're sticking to Fridays; is Friday still generally the best day for you? Just for simplicity, and we can see whether it works best or not. Is that our next one? Yeah, the next one is a Friday in January. When is it? The next one's Friday, the 18th. The 18th of January. So about a month from now. I'm looking that up, trying to find all of them. If we stick with Fridays for the February meeting, it would be either the 15th or the 22nd. I think I would rather do the 22nd. Is that okay? I think that works. The legislature's in session then, and people are going strong; the idea of Friday afternoon was that it would hopefully be more workable. Okay, the 22nd. That would be criminal justice? Yes, criminal justice. Then social services and education. That's the three, as I understand it. The last one, I think, was services, retail, and food. I don't remember exactly what goes into that; I guess we defined it at the time, but I'm not sure. I guess that McDonald's story we heard earlier. So services, retail, and food might be a place for us to talk a little more about the labor issues we were just starting to touch on. Yeah. Were we thinking March for that? Yeah, that was it, I don't know.
I don't mind serving on this committee, but I am not driving to Montpelier to sit in a room with a telephone again, okay? Just saying. We had a little subcommittee about getting the word out about these meetings, and I drove myself to Montpelier to sit in a room with a telephone while everybody else called in. So. I don't know if that works for you. I live in Middlebury, so I don't want to do that. We don't have to have meetings, though, because what actually came out of the effort by me, John, and Donna was that it's way more efficient if the chair just calls people individually, and that's what Gene's been doing. We haven't had one subcommittee meeting for health care; he just calls people. Well, I was on that committee, and I haven't been called yet, because I did sign up for that originally. You can check with Gene about that. Yeah, and I would actually suggest you do, because I don't think he's intentionally leaving you out. I was just saying that when you don't put the word out, it doesn't get to everybody. Come on, guys, that's fine. So I guess what I'm saying is I would recommend that instead of holding formal subcommittee meetings and that whole process, it could just be the chair, because we're just talking about lining up witnesses, not making huge decisions. Well, no, none of us actually met, yeah. All I did was ask people if they would help line up witnesses, because I had to be frank with them: yeah, we're interested in that. We never physically met; we never went over everything together. Yeah. Back to scheduling. We have February 22nd for criminal justice. Would it make sense to do Friday, March 22nd, or the 29th, for the following one? I like the 22nd, because it's spring; we've got a spring celebration. Well, when's crossover? Crossover is the week before town meeting, so that's two or three weeks after crossover.
And we had talked about doing more outreach after town meeting, in those final months, doing some more stuff out in the community. But, not to get distracted, I think we should do a March 22nd meeting, and maybe in February we can put it on our agenda to talk about what we're going to do in March and April for outreach. Yeah. Is that fair? Yeah. And I don't know if you want to get the March committee together now or do that next time. You mean pick the topic? Well, the topic assigned was services, retail, and food. Oh, okay. Right. In a way, retail is the number one money issue in AI, I would say, in the sense that all that stuff we get, all those ads and avatars and all that kind of stuff, is generated out of us: assembling our data and then manipulating it to play on our worst impulses and determine our spending. So in a way, retail is actually a very big AI issue, I think. And I also think it's where you'll see fewer workers. Oh, sure. That's part of the issue too. We could talk about that more directly; it hasn't been a specific subject so far, it only comes up on the margins, so we could take it up there. Should we put off forming that committee for a meeting, in case there are people who aren't here? Sounds like there's a lack of interest at the moment. We're set for the next one and for February's criminal justice, so I think we can wait on that. We'll have to do it at the next one for sure. That's fine. At the next one; someone made a comment about how I've been on every subcommittee, so I'm trying to step back. I hope you get that. Yeah, yeah. I'm happy to help more.
And for what it's worth, maybe instead of being on a subcommittee, what if I just ask around in the community: who do you think we should hear from? Then whoever is on the subcommittee would have some ideas to start from, because I didn't see anyone jumping up saying, I know exactly who to bring in. I have no clue who to bring in for services. I'm going to ask about education too, but I don't know. For education, actually, that's been the hardest area for me to find people in the community. So we're not going to do the March one? That's the only reason the public would be here. And the day is good, right? So I'm just thinking: we have 10 meetings, and we're talking about January, February, March, so three more; this is our fourth, so that's seven meetings. And then the last three will have to be much more directed at getting solutions, with the March one more public. I don't know what "much more public engagement" means if it's, you know, one meeting. Well, I think you need to put these things out, put press releases out, let the public know healthcare is a hot topic. When there's a meeting about healthcare, I think you could probably fill the well in the State House. But you've got to get the word out. I mean, Seven Days wants to know what we're up to; I don't see them here. One way we can do it, and I've said this before, is to get each meeting announced by VTDigger, with what it's about and all of that; I think we would get people. So I did communicate with the Agency of Commerce and Community Development, and they said they don't do press releases. I could put a press release out from the AFL-CIO, and I'm willing to do that, but I didn't want to do it without talking about it here first.
And we did have that one meeting you were referring to, though I don't know who else was on the phone with you and me, and we talked about some ideas. At our last meeting here, you were absent; all I said was that we were going to try to bring the public access in. But perhaps there's value in us reconvening between meetings and coming back to the next meeting with very clear things we're going to do to increase that. So I guess I would ask: is this group okay with me putting a press release out about our next meeting, January 18th? Sure. Okay, thank you. And Milo, what did you hear about the public meeting? If you think people are going to come, then we need a space with at least a little extra room. Yeah, it's better than the one upstairs. I think health care is a good topic. Well, all I would ask, and maybe this is asking too much, is that because it's on behalf of the whole group, you send it to us with BCC, so we're not breaking public meeting law, or use the Slack, and let us give you some feedback before it goes out. And also that we make sure people know there's only going to be a 10-minute public comment, because with health care, people may start coming and arguing for universal health care, which I support just for the record, but I don't think this is the place for a hundred people to come and try to say that publicly when it's not what we're actually talking about. I'd like to say that we are authorizing you to do this as the subcommittee chair on public outreach. Well, then I would put it out from the AFL-CIO, because otherwise it looks like it's coming from an agency, and I didn't think it was a public meeting notice at all. So if you put it in Slack and call yourself a subcommittee, whatever, that would be great. Taylor? They seem to prioritize picking up the bullet room. Would you guys prefer to be here?
I think we should meet here if we want to invite the public, because otherwise we're setting people up; I don't know how others feel. I mean, the other room is much more glamorous, but there's more space here. So I'm glad we're looking at it. Is everything settled here, then? Maybe the last item, real quickly: the minutes from the last meeting. Does everyone have a copy? Thank you. Yes. Thank you.