All right, welcome. For those of you who have already heard me earlier, I just want to issue a warm welcome again, especially to our returning alumni. Are there alumni here? Just curious. Fantastic, thank you so much for coming back. It's wonderful to see you. It's always a pleasure for me as dean to get to meet alumni, especially if they've been super successful and can share some of what they've learned and give me feedback on how we can do even better here in the college. I'd also like to welcome our current students and their parents, who are the majority of the people here. It's wonderful to see such warm support for the Berkeley Engineering family, because that's really how many of my faculty colleagues and I view our community: as a huge family. So thank you for being a part of that and joining us here. I'd like to welcome you to a panel discussion on a very hot topic, the AI revolution. On the next slide, I just want to give you a brief update. You might have seen the latest rankings of colleges and universities by US News and World Report. I'm happy to report that UC Berkeley is retaining its position as the top public university in this country. Of course, that wouldn't be possible without outstanding alumni and students. And the students are the ones who attract and keep our outstanding faculty here. You are our inspiration. Because the topic is AI today, I want to mention that we have consistently top-ranked programs in computer science and electrical engineering. These are the foundational fields for artificial intelligence, which will be the focus of today's presentation. Berkeley, you might know, is also known for entrepreneurship. Artificial intelligence today is having a revolutionary impact, not only on our personal and social lives, but in pretty much every field of human endeavor. And at Berkeley, we really like this culture of collaboration to solve the world's biggest problems, because as engineers, we want to benefit people and society. A lot of the multidisciplinary problems we are working on today involve health, sustainability, and democracy and equality. As people depend more and more on information systems, AI, and so on, the challenge of maintaining a democratic and equitable future for all becomes greater. So AI is accelerating progress, but it also presents new challenges in itself. I think these will be themes that come out of the panel today. And to translate our innovations here more quickly into commercial products, we have become a very vibrant engine for innovation and entrepreneurship. Today at Berkeley, we have a really full entrepreneurship ecosystem. A little-known fact is that Berkeley is now the university that has produced the most alumni who go on to start venture-backed companies. So we're the top entrepreneurship university in the world today. The amount of money that our graduates raise is less than at the second-ranked school, which is Stanford, but I think that shows that we train our students to do more with less. And I think that's important if we care about sustainability; using up the world's resources faster is not necessarily a good thing. So I think it's all very consistent. I just want to point out that the Sutardja Center for Entrepreneurship and Technology has all kinds of programs for our students in the College of Engineering, but we welcome students from across the campus.
So the Sutardja Center educates over 2,000 students every year on entrepreneurship and the Berkeley Method of Entrepreneurship, giving them opportunities to work together to come up with new ideas for new products. Those over 2,000 students represent over 170 majors across the campus. So this is open to pretty much any student on campus, because we want future engineering leaders to be able to work collaboratively with people from across society. And the Sutardja Center will be moving to the new engineering center, which is under construction today. That's the most visible project underway, and it epitomizes, or embodies, the cultural transformation that has been happening within the college. We want engineering to be welcoming and inclusive, because that's the best way we can ensure that a future shaped by engineers is going to be equitable and sustainable. We have the M.E.T. (Management, Entrepreneurship & Technology) program, which helps students earn both an engineering bachelor of science degree and a business bachelor of science degree, all within four years. And to help our faculty, we have the Bakar Fellows program, which gives them funding to bridge the gap from the research lab stage to the VC or commercialization stage. There's kind of a valley of death where we need some funding: it's not really basic research, but you need funding to show proof of concept before you can raise money to start a company. So the Bakar Fellows program has been very successful, and most of the faculty in that program are engineers. And we have Berkeley SkyDeck, which is a joint venture between the College of Engineering, the Haas School of Business, and the Vice Chancellor for Research. It's an incubator and accelerator for early-stage startup companies, and a lot of those companies actually have AI-based products. So without further ado, I'd like to move on to introduce our panelists. But first, one interesting piece of good news: Time Magazine recently published its list of the 100 most influential people in AI, and 11 of those 100 are members of the Cal community. That's pretty amazing, right? That includes current students, alums, faculty, and a member of our Engineering Advisory Board. So to give you an even deeper picture of the work happening here in the college, I'm delighted to have my colleagues here with us today. Let me start to introduce them. First is Jill Finlayson. She's the managing director of the CITRIS Innovation Hub, and the Innovation Hub is dedicated to accelerating IT research in the interest of society. Jill co-leads the UC systemwide Inclusive Innovation and Equitable Entrepreneurship Initiative. She produces the EDGE in Tech blog and the Future of Work podcast. And she's also a Cal alum. Thank you, Jill, for joining us. Next, I'd like to introduce Professor Sayeef Salahuddin. He is the TSMC Distinguished Professor of Electrical Engineering and Computer Sciences. He's doing leading-edge research advancing next-generation microelectronics, which includes the integrated circuits underpinning the AI revolution. The work he's doing is really opening doors to enable these computer chips to do more with less energy, less power, and less time: much more energy-efficient computing devices in the future. Now, you might have seen last week an announcement from the US Department of Defense about their CHIPS Act program, the Microelectronics Commons.
It turns out Sayeef is the leader of the Berkeley portion of that program. Berkeley and Stanford together are collaborating to lead one of the regional innovation hubs, for the Pacific Northwest region of the United States. So Sayeef is a leader of that program, which is bringing many millions of dollars for research and to upgrade our research facilities here. We really appreciate his leadership in the foundational technology that underpins AI. And then finally, Professor Claire Tomlin is here with us. She's the chair of the Department of Electrical Engineering and Computer Sciences. Thank you for your warm welcome. She also holds the distinguished James and Catherine Lau Chair in Engineering, and she's a member of the US National Academy of Engineering. And she is one of our, I won't say many, but not uncommon, MacArthur genius award winners. This is a really prestigious award, and we're really proud to have her in our community. Claire has led research in hybrid systems and control theory, with an emphasis on unmanned aerial vehicles, which includes drones. She has really helped to make sure that air traffic control systems work safely. Her work in control theory and hybrid systems also pertains to power grid control and modeling, and finally, to modeling of biological systems. So she's really an expert in modeling and engineering systems of all kinds, from biological to physical to virtual. Please join me in welcoming all of our panelists. I'd like to have them each come up to give a really short introductory presentation, starting with Jill, then Sayeef, and then Claire. And after that, we'll have them sit down and we can start asking them some questions, okay? So Jill, would you like to start us off, please? Welcome. Great to see everybody here. And thank you, Tsu-Jae. It's so wonderful to work with you, a real champion of expanding diversity and gender equity in tech. So I'm thrilled to be here. I'm gonna give you kind of the thousand-foot view, and then our colleagues will go into some of the more nitty-gritty engineering stuff that will get you very excited. As you might see here, these images are generated by generative AI, by DALL-E, not ChatGPT. I put in things like, I wanna see a student and a bear and artificial intelligence on a campus, right? And so you play around with these technologies and you see what you can get, but you also see what doesn't work and what the problems are. So these are a couple of images it generated. And it's interesting, because now AI crosses everything. I'm part of CITRIS, which is, as she said, the Center for IT Research in the Interest of Society. We focus on aviation, health, and climate. We used to have a separate initiative called People and Robots, but we've decided that's so important across all of these categories that it's really a cross-cutting initiative. So how do we look at AI and health? How do we look at AI and climate? Those are cross-cutting, along with things like diversity, equity, and inclusion, workforce development, and policy issues. CITRIS actually has a policy lab to look at tech policy, to look at how we are creating the guardrails that we need with these expansive and growing technologies. So I got really curious about AI from this equity and inclusion lens. And I started to look at the fact that AI was being used by almost every company to do hiring. And this is very interesting.
Let's look at all the different ways it's deciding who sees the job, because you use AI to optimize where it's shown. It's deciding who is applying. And is it increasing efficiency? It's filtering. And one of the big challenges with filtering is that if you take an old job description and keep adding requirements to it, it becomes really hard to fill that job if candidates have to meet every requirement. It becomes this unicorn job description, because there are so many requirements. So we really need to rethink how we write job descriptions in the era of AI, because you don't wanna filter out all of the people who are qualified for the job, or people who meet 98% of the requirements. You don't wanna filter those folks out, but that can happen today. So it was really interesting for me to think about this, and it really raised the question of equity. How is AI going to affect equity? But it's even bigger than that. It's survival. And so this was a statement that was put out and signed by many, many scientists, including Dawn Song here, who's part of our computer science faculty. And they say specifically: mitigating the risk of extinction from AI should be a global priority, right up there with pandemics and nuclear war. And this is the important part. It's not an either-or, it's a yes-and. We have to deal with climate change and we have to deal with AI. So how are we gonna do that? So this is what we've been told about what's happening because of all of the automation and AI: 85 million jobs displaced, and primarily it's low-skilled jobs that are impacted, right? Anything that can be automated will be automated, especially post-COVID. And we talk about the skills needed, the workforce needed; all of these things are gonna change. And then generative AI came out, and that was game-changing, right? Because now it's 300 million jobs. Now it's this idea that it's not just blue-collar jobs, it's white-collar jobs, it's green-collar jobs. Every collar of job is gonna be impacted by AI. So we have to think about this a little bit differently again. So I had the opportunity to speak with Itai Xu, who is a faculty member at Berkeley Extension, and he was talking about the fact that AI is creeping into what students turn in at school, right? In the university, it's in the papers they're turning in. And he asked the question, how do academic institutions respond to the fact that AI is out there? And I loved what he said. He said, I don't know what the right answer is, but I know what the two wrong answers are. One is to put the responsibility on the individual teacher to figure out a policy, and the second is zero tolerance: you can't use it in my class. Why is that such a bad approach? Because when they graduate, companies are going to expect them to be able to use AI, to optimize productivity, to be a better programmer, to do their work better and more efficiently. You can't say don't use it in school and then expect them to be ready to use it in the workplace. So students don't see this as a problem. This again was from Itai: 72% of students don't associate it with dishonesty or cheating, and by a significant margin they see it as a way to get unstuck, to figure out what they're doing wrong, to problem-solve, to get answers, to see different ways of solving things. So the other thing I find super interesting about AI is that this is not a tool in the traditional sense of a tool. It's really a collaboration partner. And what do I mean by that?
I mean that when you ask a generative AI to do something, you're going to revise. It's gonna give you an answer, you're gonna look at the answer, and you're gonna say, no, I need it to be shorter, I need it to be more professional, I need it to be more scientifically based. You wanna create guardrails, so you're doing what they call engineering the prompt. But you're also iterating; it's a back-and-forth collaboration. And I noted a couple of things to highlight here. Code generation: you'll start coding and it'll say, I think you're trying to do this, will this code work? And that code will pop up, and you'll be like, yeah, perfect, saves me time, no typos, it's out there. Or you might look at it and go, nope, that's not gonna work, I need to change it, I need to iterate on it, I need to adapt it to what it needs to be.
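That iterate-and-refine loop is easy to picture as code. The sketch below is purely illustrative: generate() is a hypothetical stand-in for whichever model API you use, and the point is just the shape of the loop, draft, critique, add a guardrail, regenerate.

```python
# Minimal sketch of the iterate-and-refine loop described above.
# `generate` is a hypothetical stand-in for a real model API call.

def generate(prompt: str) -> str:
    """Placeholder for a call to a generative model."""
    return f"[model output for: {prompt!r}]"

def refine(task: str, critiques: list[str]) -> str:
    prompt = task
    draft = generate(prompt)
    for critique in critiques:      # each pass encodes one more guardrail
        prompt += f"\nRevise the answer: {critique}"
        draft = generate(prompt)
    return draft

print(refine("Summarize this report.",
             ["make it shorter", "use a more professional tone"]))
```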
And then I also wanted to highlight storytelling, because I saw another panel that I thought was absolutely fascinating. A kindergarten teacher at a low-resource school said he was using AI in the kindergarten. And I'm like, whoa, take a step back, tell me more, why is this the case? And he said, I'll sit there with my students, and a little four-year-old boy, I'll say, what do you wanna tell a story about? And he'll say, I wanna tell a story about a four-year-old boy who goes to the moon and plants flower seeds. And so they'll say, AI, write a story, kindergarten reading level, for a little boy who goes to the moon and plants flowers. Boom, there it is, that's how fast it is. And he's like, is that my story? And he's excited. What has it done? It's demystified the technology, it's made them comfortable with the technology, it's also made them want to read. It's also made them creative. I wanna write a different story, let's come up with a different prompt. It really fuels the imagination. So this is part of thinking about this: it is not a grammar checker, right? This is not just a run-through to see what you get. And you have to be very critical about what you get back. So as we were discussing earlier, Berkeley is addressing this in a couple of ways. One, we want everyone to innovate. We want everyone to think about AI as their responsibility. And we do that by having this very robust ecosystem. And yes, this slide does say number two, but as she said, boom, number one. So this is new, yeah. This is fantastic, and it speaks not only to the robustness of the ecosystem but to the collaboration in the ecosystem. How do we work together? How do we make sure we hand off entrepreneurs to other people? And how do we bring more people into it? So as you were saying, we have the Sutardja Center for Entrepreneurship and Technology, and part of that is we run challenge labs. We ask students to create startups in 15 weeks. It's crazy, it's amazing. And it pulls from all of these different majors. So you as a technologist can leverage a journalism student for communication and customer interviews, and you can utilize somebody from sociology who really understands the problem space. It's fantastic. And we actually use students here as innovators in residence to peer-mentor other students who are doing startups, because they've been through the cycle of it. We also really focus on applied learning. So our workforce innovation program is placing students in jobs in semiconductor companies, placing them in jobs where they're using data to solve real problems. And we also try to debunk the myths. So this is Brandie Nonnecke, who heads the CITRIS Policy Lab, and her podcast is all about those myths that we believe about technology, and how we can debunk them. So really building in the changemaking. Everybody who comes to Cal sees themselves as a changemaker. They have agency to make change. What do you want to change? I want to change education, I want to change government, I want to change policy. Whatever it is, you can be a changemaker, and we're trying to get students when they first come to Cal and when they transfer to Cal, because we want everybody thinking this way right when they come in. So I would be remiss if I didn't leave you with five things for all of you in the audience to think about right now. The first is: in the era of AI, major in being human. This is about empathy, creativity, unpredictability, because you're the ones who are gonna problem-solve. Second, learn continuously. This is Tsu-Jae running a LinkedIn course on applied AI. This is about responsible AI, but looking at it in domains like health, climate, social media, and HR, and asking how we need to think differently about these sectors. And you need to play with it, because if you don't know what doesn't work, you can't solve for problems. So this was actually Brandie Nonnecke. She is a ballet person when she's not running the CITRIS Policy Lab, and she wanted to design a shoe that would play different music depending on the angle of the shoe. And so she wrote the code, but it didn't work. She went to ChatGPT-4. It debugged it for her, it taught her what she was doing wrong, and now she has this shoe that can actually do what she intended it to do. So think about it as unleashing potential, and think about it as teaching critical thinking, because you really have to ask questions. It hallucinates. It comes up with credible answers, but are they true? So we really have to teach that critical thinking. You need to understand the bias in the data, because underlying all of this amazing technology is data. And if we're not mitigating bias in the data sets, we could actually amplify problems and make them much, much worse. So these are some of my favorite books, and Unmasking AI just came out. And lastly, the reason you have to read those books is that we need everyone in this room to be the voice in the rooms that you're in. We need you to be asking the good questions, championing inclusion, safety, and ethics by design, upfront. And it's so important, because otherwise we're gonna have a future that looks like this, when we wanna have a future that looks like this. So thank you very much. All right, next: Sayeef. All right, so AI is everywhere today, right? Everybody knows about it. In fact, AI is so popular today that even things we traditionally would not call AI, we call AI, right? So we live in this new reality that AI is everywhere, and it is enabling many, many things for us, which is fantastic. But there is a cost; everything has a cost, right? So AI has a cost. And if you think about it, we just heard about ChatGPT, DALL-E, all kinds of models that you're running and doing things with. Something is computing to give you all those results. You have these large computers that are computing, that are looking at data, that are doing all kinds of mathematical calculations so that they can generate all those results for you. And that computing needs energy. So how many of you have tried to do Bitcoin mining? Some of you, right?
And so those of you who have tried to do that in recent years know that at some point it became very, very difficult, because for the miners you would use, like Antminers and other things you would put in, the energy cost became too high relative to how many coins you could mine. How much power are you spending, and what is your electricity bill? So actually, to me, that was the first time that we, the rank and file, kind of experienced the energy cost of computing. Typically these big servers are held by the big companies; they take care of their servers and the electricity needed to run them, and we don't see it. But with Bitcoin mining, when people started buying specialized machines to put in their garages to do this computing, very soon, as the mining became more and more difficult and you needed more computation, people realized that paying for that electricity was becoming the roadblock. And so for the servers that are doing these AI calculations, one actually has to take care of this. If you think about, for example, these big machines that the Department of Energy runs, they're often called petascale machines: basically, they're doing 10 to the 15th operations per second. Today these take 60 megawatts, and the projection is that in the next 10 years we would like to go to exascale, which is 1,000 times more than that. That means that if we keep doing what we are doing, one of those machines will take 60 gigawatts. That's just not possible; that's just not physically possible. So if you look at some of the data on how much energy we need, this is where I'm showing it, this is how it is increasing over the years. Sorry, I just did what I was told not to do, which is to click on the slide. Yeah, so if you look at this data, what it shows is that by the middle of the next decade, if we keep going this way, the energy taken by our computing machines, and this is just the servers, not even all the other gadgets we are using, will become a single-digit percentage of the world's energy needs. And world energy needs include everything: lighting, all the electricity that we need in production flows, everything. So that is unsustainable. It's just not possible to give a single-digit percentage of our energy to just doing computing, even though computing is very important. The second part is that the figure also shows, let's see if I can point to it without clicking, yeah, this green trace there, the rate at which world energy production is going up. So you can see that the rate of growth of the energy taken by computers is much faster than the growth of world energy production. If it keeps going like this, again, that just shows that it is not sustainable. So we have to do something about that, and there is definitely a heightened awareness about the energy required for our computing.
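The arithmetic behind that projection is worth spelling out. Here is a back-of-the-envelope sketch using only the figures quoted above (roughly 60 megawatts for a petascale machine, with exascale defined as 1,000 times more operations per second):

```python
# Back-of-the-envelope check of the scaling quoted above: a petascale
# machine does ~1e15 operations per second and draws ~60 MW. At the SAME
# energy per operation, an exascale machine (1e18 ops/s) draws 1,000x more.
peta_ops = 1e15                 # operations per second
peta_power = 60e6               # watts (60 MW, as quoted)

joules_per_op = peta_power / peta_ops
print(joules_per_op)            # 6e-08 joules per operation

exa_power = joules_per_op * 1e18
print(exa_power / 1e9)          # prints 60.0, i.e. 60 GW, which is infeasible
# Conclusion: reaching exascale at a sane power budget requires roughly a
# 1,000x improvement in energy per operation, hence the device-level work.
```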
There's a lot of research all around the world looking into the basic hardware that builds our computers, trying to increase its energy efficiency, okay? And there, the exciting part is you really have to work with nature, because where we are today, the basic building blocks of our computers have minimum dimensions of 40 to 45 atoms. Okay, basically, if you look at the minimum feature size we have to work with, there are 40 to 45 atoms there. So you are really trying to put atoms exactly where you want them to build these computers. And once you go there, the classical physics that helps us to do computing also starts to become somewhat murky, because you are really at the atomic level, and you have to understand that physics and think about how you can control it to improve the energy efficiency. But energy efficiency, at least in my view, is going to become one of the pressing challenges of our lifetime, for all the reasons we just talked about. And so it's an exciting time to be around, because whatever we do in that direction is going to have a long-lasting legacy. And of course, the US government is also very serious about this; you have probably heard about the CHIPS Act and the many initiatives the US government is starting. So we are happy to say that we will be one of the eight hubs nationwide looking into next-generation computing hardware for AI, starting from these basic building blocks all the way up to how we design energy-efficient computing systems. So I'll just say it again: this is an exciting time to be around if you are interested in controlling nature for energy-efficient computers. Thank you. Thank you. Welcome, everybody. It's a pleasure to be here and to see all of you, especially former students, current students, and your families. My name is Claire Tomlin. I'm a professor, and I'm the chair of the Electrical Engineering and Computer Sciences Department. And maybe just a little bit of a story. I was a graduate student here working in control theory, and I worked a lot on safety-critical systems. Throughout my PhD, I was working with NASA on air traffic control: how do you automate some of what air traffic controllers did manually? I graduated, I got a job at Stanford, and I was a professor at Stanford for 10 years. Then Berkeley gave me an offer to come back, and I did. One of the reasons I came back is that, around 2005, Berkeley was building up one of the best AI groups in the country. They'd always had a very strong theoretical AI group, machine learning, but they were bringing in people who worked on AI systems, people really committed to the development of AI systems. And from a control systems perspective, when you're thinking about automating things, and this is back in 2005, it was pretty clear that AI was going to be an extremely important component of that. And we had to think about designing control systems that took AI into account and understood how to integrate those systems, in particular, integrate them safely. So that's what I work on. I've built up a lab in safe AI, thinking about how you design control systems that integrate learning but do it safely, so that the systems you're designing operate safely. And this has become a huge topic. So we're familiar with autonomous cars, but I've also got an interest in aircraft, and airspace, and air traffic control. So in addition to working on autonomous cars, we've got some pretty exciting systems that are now being automated, from autonomous aircraft to really interesting new air taxis that are tilt-rotor.
The aircraft takes off in a vertical mode like a helicopter, and then the rotors tilt so that it goes into forward flight. We're also working with autonomous ships, thinking about how we protect our waters around the US, continuing to think about air traffic control, and thinking about autonomous vessels that are doing other things, for example towards energy efficiency, as Sayeef was saying. Okay, so we think about control in AI from this point of view: you've got a system. We do a lot of work with Boeing, and this is actually a picture of a Boeing aircraft. It looks like a Cessna because it is a Cessna; it's one of their research aircraft. They've equipped this aircraft with a bunch of cameras, and we work with them on designing algorithms that you put on board the aircraft so that you can do autonomous flight and autonomous landing using only that onboard perception from the cameras. So for example, next month we're going out to Montana to do a number of flight tests of our algorithms on this aircraft as it's coming in to land while a bunch of different things are going on on the runway, like other vehicles moving around. So we think about the design of control systems that integrate AI from this safety-filter point of view. How do you design safety filters that sit alongside all the other things the system is doing, from perception, what your sensors are looking at and how you interpret it, to prediction, what the other vehicles around you might be doing? Perception and prediction are two blocks that are now primarily done by machine learning algorithms; they perform much better than the traditional designs, like the traditional computer vision stack or traditional model-based prediction. Planning and control, though, are still, and I believe will remain, largely model-based methodologies. So how do we integrate that with safety filters that basically ensure that the actuation the flying vehicle performs, as it completes its mission or its task or its flight, is safe? And so these safety filters, this is actually from Stanford, so I don't know if you recognize Roble Field, but I mean, that's where I started. These are actually four quadrotors. We built these; this was before you could buy a quadrotor on every street corner, I guess. But they are running our algorithm. So they're flying around, and students are sitting there, each one controlling a quadrotor. And when they get within a distance from which you can't prove that the vehicles will stay safe anymore, the automation on board each vehicle takes over and guides the vehicle away from the other vehicles. And then it gets to a point where the human pilot, one of those students sitting there under the tent, takes over and controls the vehicle again. So these are the safety filters that we designed. These are model-based methods; they use traditional optimal control and dynamic game theory to take disturbances into account. So this is kind of more classical control.
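The takeover logic in that quadrotor demo can be pictured with a very small sketch. The version below is illustrative only: a fixed distance threshold stands in for the computed provably-safe set, and the dynamics, threshold, and override law are placeholders, not the lab's actual algorithm.

```python
import numpy as np

# Minimal switching safety filter for two vehicles, in the spirit of the
# quadrotor demo described above. All thresholds and the override law are
# illustrative placeholders.

SAFE_DIST = 2.0   # metres: closest approach we can still certify as safe

def safety_filter(own_pos, own_cmd, other_pos, max_accel=1.0):
    """Pass the pilot's command through unless the certified separation
    is about to be violated; then override with an evasive command
    pointing away from the other vehicle."""
    offset = own_pos - other_pos
    dist = np.linalg.norm(offset)
    if dist > SAFE_DIST:
        return own_cmd                       # human/planner stays in control
    # Inside the unsafe boundary: automation takes over and steers away.
    return max_accel * offset / max(dist, 1e-6)

# Example: the pilot commands flight directly toward the other vehicle.
own, other = np.array([0.0, 0.0]), np.array([1.0, 0.0])
print(safety_filter(own, np.array([1.0, 0.0]), other))  # overridden: points away
```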
So how do we use these? For the last few minutes, I'll just talk about a couple of examples. This is actually a picture from Joby Aviation, one of the air taxi companies here in the Bay Area; one of my students spent the summer there. We're working with them to understand the safety of that transition maneuver: when you've taken off in vertical flight and your rotors are tilting so that you can go to forward flight. It's an incredibly difficult maneuver that they'd like to automate, but the airflow around the rotors, and how it affects the lift of the aircraft, is unknown. Machine learning is a way to develop better models for these systems so that you can design safe controllers for that regime. So basically, we're trying to understand how to compute these safety filters throughout that transition regime, so that you can always guarantee that the vehicle will recover lift as you're transitioning. We're also working on a project, which I mentioned earlier, about how you might learn the flows of oceans, and then design vehicles, and design control systems for them, which kind of hitchhike on these flows so that they can maneuver in a very low-energy way to regions with high nutrients, and basically use the vehicle as a platform to grow seaweed, which is a way to collect carbon, to do carbon sequestration. At some point that seaweed is cut off and deposited on the deep ocean floor. This is a project we're working on with, well, it was Google, now it's a spin-off from Google called Phykos. We're actually looking at how you learn: we have a lot of data about the flows, which you need to predict ahead of time, but usually these forecasts have errors in them. How do you learn what those errors are over time, so that you can get very energy-efficient control to guide the vehicles through these flows? So, for example, you could use a naive approach with no learning at all and just point the vehicle towards the goal, where the goal is that green dot representing high-nutrient areas. Or you can hitchhike on the flows: if you predict them accurately enough, you can just nudge the system onto one of those nice flows to get you to your goal, and use very little energy to do that.
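One way to picture that forecast-correction idea is the toy sketch below: treat the published current forecast as a prior and learn its error from what the vehicle actually measures. The running-average corrector here is an illustrative stand-in for the project's actual learning methods.

```python
import numpy as np

# Toy version of the forecast-correction idea described above: the ocean
# model predicts the current at the vehicle's location; we keep a running
# estimate of the forecast error from measured drift and add it back in.

class FlowCorrector:
    def __init__(self, rate=0.1):
        self.bias = np.zeros(2)   # learned forecast error (east, north), m/s
        self.rate = rate          # learning rate for the running average

    def update(self, forecast, measured):
        # Move the error estimate toward the latest observed discrepancy.
        self.bias += self.rate * ((measured - forecast) - self.bias)

    def corrected(self, forecast):
        return forecast + self.bias

corrector = FlowCorrector()
for _ in range(50):               # forecast is biased 0.2 m/s to the east
    corrector.update(np.array([0.5, 0.0]), np.array([0.7, 0.0]))
print(corrector.corrected(np.array([0.5, 0.0])))  # converges to ~[0.7, 0.0]
```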
Finally, this is a series of experiments that we've done with Boeing, going from taxiing, to landing, to autonomous flight, where this vehicle has cameras on board, and we're designing safe control systems so that, even if the perception fails, the vehicle will be able to safely taxi or land or fly in the presence of other aircraft. So here the vehicle is taxiing, and there's another vehicle actually on the runway. It's guided by its cameras, and all of a sudden there's some error: the camera blacks out, or something is blocking the camera. You'd still like the vehicle to operate safely in those regimes. You'd also like it to be able to understand whether there are other vehicles around, even if it hasn't been trained on those types of vehicles. So you wanna be able to treat the uncertainty in perception as a first-class component in your control system, and create a safe control law under this uncertainty. And then finally, you want the vehicle to be able to land even when there are other vehicles on the ground. And I think Jane is standing here, we're gonna move over to, no, I can go, okay, well, anyway. So this is the test we're gonna be running next month in Montana, where you've got a number of vehicles moving around on the runway, and the aircraft is coming in to land, doing an autonomous landing. It has to decide whether or not the runway is clear, so it has to make a go or no-go decision based on its perception of what's going on on the runway. So we need accurate and safe prediction algorithms, which typically, as I said before, use machine learning to do that prediction, and then you wanna be able to land safely. Okay, so where are we now? We're doing a lot of work in automating platforms. We're not there yet, but I think we're on a really good track in thinking about how to automate platforms so that you can make guarantees about these individual systems, and these are some of the systems we've worked with. But you also have to think about how they interact with others. And so finally, as we move into the future, and as part of the Berkeley AI Research Lab, I think one of the big things we're thinking about is systems. We talk about systems at different levels, but here we're thinking about vehicles interacting with each other, and humans and vehicles interacting with each other. How do we design that automation so that these systems interact correctly and interact safely? So that's what my lab is doing, with a bunch of current and former students working on this. And again, thanks very much for being here, and it's great to participate in this panel. Thank you so much, Claire, and I'd like to invite all of our panelists to come back up here. Thank you for giving us an overview of the many applications of AI, including automation and how to make automation safe and secure, and also some of the challenges of making AI, as it proliferates, more sustainable. So I can start off by asking just one question of the panelists, okay? And I hope the audience will have a lot of questions too; we'll give you the opportunity to ask. So maybe my first question, and hopefully my last, will be: what is the greatest challenge in your area of focus, whether it's impact on society, or sustainability, or safety? What is the biggest challenge in your mind that needs to be tackled, and how can we work together, how can we involve our current students, our alumni, our parents? How can people get involved to help solve that most complex challenge in your area of specialty? Yeah, so I would actually say there are two areas, and they're intersecting. One is we don't have enough workforce, and the second is we're not including everyone. So you can see how those two work together. If we can expand diversity and gender equity in technology, bring in more people, cross-disciplinary people, if we can bring people from the humanities into working on these AI solutions and make sure that we're designing systems that work for everyone, we're actually solving two problems, because we have a giant lack of workforce. I was actually talking with a gentleman at UC Davis. He's starting a master's in power engineering, and he said, the reason I'm doing this is that we can actually solve our energy needs; we just don't have enough power engineers to redo the grid, right, to fix the problems. And so this is why we need to think about inclusive on-ramps, non-traditional ways into technology, and really leveraging the skills of everyone to develop AI that works for everyone. Yeah, so I mean, I will go to my comfort zone, which is the hardware. And from a hardware point of view, it's very similar to the previous comment you heard: in the end, the innovations come from human beings, and we are still not at a place where machines will come up with all the innovations that we need.
And so we need very smart people to come into this field and participate in this innovation journey, to figure out how we can do computing more efficiently. On the computing side, if you follow the field, you will often hear people saying, okay, we are reaching the limits of physical dimensions. I mentioned that right now we are working with minimum feature sizes that are only 40 to 50 atoms. You can try to keep going down, but at some point you will be running out of atoms. So we definitely need completely new ideas. Often in this field, people use the phrase "radically new." So we need radically new ideas, but that needs smart people from all fields to come in. And again, if you look at computing, it is not an electrical-engineering-only discipline. If you think about how chips are made, how they work, and all of that, it requires the entire village. It needs physics, chemistry, materials science. And in fact, some of the principles that we use today came from mechanical engineering. You would not often think that that is relevant for computing, but that's what we do. So we definitely need people from all different disciplines to participate, and to recognize that this is becoming a very important challenge for our time. So I think the two biggest challenges technically in my area are these. Traditionally, to be able to prove safety of a system, you have to be able to predict all of the possible things that could happen. And that's impossible, right? Because all of a sudden some accident happens and you're like, wow, nobody ever predicted that could happen. So before AI came in, it was, I think, kind of a modeling and enumeration exercise. And the thing that people are quite excited about with AI is that there is an ability, although limited, to generalize: that AI, if it's trained properly, which is still not done properly, could predict things that people haven't thought of. And that allows the incorporation of new cases that we could prove safety against. But we're still not there yet. So it's like continually reaching and trying to develop methods for capturing all possible cases of things that could happen. The other thing, and this ties to Sayeef's point, is computation. We are limited in the amount that we can do by the computation that we have. Most of our grant money is being spent on computation now; it's more than the cost of salaries. So I think that is a huge challenge. And to Jill's point, we don't just need more people who are narrowly trained in AI. We need people who understand mathematics more generally. That's why I feel like coming from a control systems background, where we did math, math, math all the time before we were allowed to actually do something, it's really important to have a broad background, to be able to think about new ways to solve these problems. Too often people get into niches of their specialty, and there's so much work to do that you get more and more narrow, and you forget how important it is to have ideas coming from other areas. So I really endorse what both of you said, in my area too. Thank you so much. All right, why don't we open it up for questions from the audience? If we can get the microphone up to you, we will. Thank you, I really appreciated this, it was fantastic.
I have two questions; I'll ask one and then wait for my turn, if it comes, for the second. There's a lot of discussion and concern about the singularity, when AI will essentially have higher intelligence than humans. There are various measures of intelligence, but let's start with one, which is being able to do hypothesis generation, not just hypothesis testing. Even the generative AI we have today is essentially doing hypothesis testing and presenting options, not necessarily hypothesis generation. I'm curious to know: what do you think will be the leading indicators of hypothesis generation being done by AI and machine learning that is not happening today? Thank you. Well, maybe I'll start. This is not hypothesis generation, but one of the key methods for proving control system safety is to develop a control barrier function. Barriers are basically what they sound like: they define regions on which you can prove safety properties. We do that computationally, and there are methods to develop low-dimensional representations of them, but there's no general constructive method for finding them. So as soon as these tools started coming out, my students started asking, hey, can we ask GPT-4 how to construct barrier functions? That is going beyond testing, and it's going beyond that kind of prediction; it's more at the level of generation. But what is it doing? It's searching all the sources it's been trained on, so you have to prompt it really well, through a sequence of prompts, to get it into its expert region, and then it generated a control barrier function. That was kind of surprising to us, and this was back in April. So being able to do something that we don't currently know how to do computationally, and there was a lot of computation that went into that, that was my first surprise at how these tools might be used in a way that allows us to go beyond what we as designers have done.
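For readers who haven't met the term: a control barrier function is a scalar function h whose zero-superlevel set is the certified-safe region. The condition below is the standard textbook form, not the specific function generated in the experiment Claire describes.

```latex
% Safe set certified by a control barrier function h:
%   C = { x : h(x) >= 0 }
% Standard CBF condition: some admissible input u can always keep h
% from decaying through zero (alpha is a class-K function):
\[
  \sup_{u \in U} \, \nabla h(x)^{\top} f(x,u) \;\ge\; -\alpha\bigl(h(x)\bigr)
\]
% Any controller satisfying this inequality renders C forward invariant,
% i.e., trajectories that start safe stay safe.
```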
We have a question back here. Hi, I had a question about the automation that you're doing, in terms of the data that we have, right? Is the research approach to throw a lot of complex data at it, because that's the scale you eventually have to get to? Or, in a real-world scenario, is the research approach to go with smaller data sets, solve the simpler problems, and then build up to the complex, or is there another approach, starting with the complex? From my point of view, that's a great question. As we start to introduce learning-based components into our control systems, data is all-important, right? Because these are basically components that encapsulate the data they've been trained from. There are different approaches, and both of the ones you mentioned are being used. I think it's still very much the Wild West, trying to figure out what works, what generalizes, what you can make statements about, probabilistic guarantees about. But maybe one point: one of the methods we've seen to be very successful is when you develop a structure for a deep neural net, an architecture that already has some knowledge of the system model in it, whether through the structure of the layers or through the loss function that you're using to train it. And then you use the data that you have, but the data is then being almost filtered through this model that has a representation of the system. So it still allows generalization, hopefully, but it kind of gears you towards the problem that you're trying to solve. Jill, did I cut you off from an earlier answer? I'm sorry. Well, I was just gonna add, to both of these points, an example from Professor Grace Gu. She was looking at bio-inspired technology. So she looked at the mako shark, because it's super fast, it goes through the water incredibly fast, and they asked, how is it doing that? They looked at the skin at the microscopic level and found these denticles that allow it to go swiftly through the water. And so she used that inspiration to create denticles that can go into infrastructure, like pipes where there might be swirling or clogging; by putting these denticles in, it reduces that from happening. There are also other applications, on propellers or windmills, different things like that where reducing friction is really valuable. And the interesting part was that she used bio-inspired design, but AI-optimized: she would run different scenarios and see, what can we do within our manufacturing limits, right? So I think this human-in-the-loop, really putting these prompts together and asking these questions, is really key. Yeah, I don't have much to add. I'll just say that one thing that often gets ignored in all these discussions is that most of our AI models are data-based. And data, A, is not cheap, and B, it actually takes physical things to go out and collect the data. That's why the singularity, at least in the sense we find in popular science books, is difficult for me to see: you actually need to collect data. After that, your math trains under the guidance of the data, but the data needs to come from somewhere. Right now, I think ChatGPT and DALL-E and all these GPT-type models have really appealed to our imagination because they're collecting data from all the things that already exist on the internet. But think in terms of conventional, hypothesis-based research, right? A hypothesis needs to be tested; if it's a new hypothesis, somebody needs to go and do that experiment. And I don't think we are at that position, or even close to it. I mean, to some extent you could say humans are the robots collecting data for the training, and to some extent we are training our own brains by collecting data from the environment. So from that point of view, at least in the sense of what singularity means in the popular literature, we are very far from it. We are at a very high level of automation; we are learning how to automate things in a better and better way. But that's why... Question over here. Hi there. I'm curious about the defense implications of the technology that's being developed here and in other locations. You all are talking about applications in commercial situations, with the environment, and so on and so forth. Where do you think the first examples will be of AI involved in defense, and what are the most concerning applications of that technology? Well, that's a loaded question. I'm not sure I'm even qualified to answer it. But I would say that if you just look at today, there is a war going on, and you already see drones being used in large numbers.
And so that gives us some indication of how automated machines, if you want to call them AIs, are going to be used. But I definitely don't have a lot of background in that direction to comment further. I think it's already being used. And I kind of know for a fact it's already being used. Computer vision is still there, but it's gone through a complete revolution; it's done 99.9% with neural nets now. And these systems are being used broadly. They're being used in defense systems as well. And I think that, as researchers, it's a question that comes up a lot, because we're supported by the research branches of the Department of Defense. We're working largely on civilian applications, applications that are important for society, but these technologies are also being used in defense systems. I'm clearly not qualified to speak to this, but I'm gonna make a comment nonetheless. And that is that this is a people problem. We're designing these systems. Where are we putting the people in the loop? Are we doing the continuous monitoring to make sure that where we can intercept, we actually can intercept, and that the AI doesn't go around the human? I think these are really important questions, and it does speak to the importance of how we put the safeguards in. Where are the safeguards? How are we monitoring and ensuring that things are functioning as designed? Now, sometimes functioning as designed can be problematic, but that's another question. Unfortunately, because of time, and I know a lot of people have other programs to go to at homecoming weekend, we're gonna have to wrap this up. But I might be able to impose on our panelists to just stay here, if you wanna come up and ask them a question informally; we do have to wrap this up for the homecoming weekend. Tsu-Jae? I just wanna thank you all again for coming. Hopefully you got some insights from our panelists. Thank you again, and please enjoy the festivities of the day. Go Bears.