Good evening. How are you all doing? Awesome. I want you to take a second to check in with your neighbor and just say hi to the person sitting next to you.

Awesome. Well, Albert Einstein was credited with saying that it seems our technology has far outweighed our humanity, and so oftentimes we overlook the folks who sit next to us. My name is Michael Henderson, and I am a lecturer here at the University of Texas at Austin, and today I'm also representing the Future Forum board of directors. So again, welcome to the LBJ Presidential Library and Museum.

We want to thank our sponsors, because it's members like you, as well as our sponsors, that make our incredible programming possible. So we want to thank the Downtown Austin Alliance as well as the FVF law firm. This event was not a solo undertaking. I want to thank two of our board members who are here tonight, Mr. Amon Siddiqui as well as MK Painter, please give them a round of applause. Also, an unsung hero is Sarah McCracken. She leads programming here, she's running around, and she is so great to work with, so thank you as well for all of your hard work.

Please join us at the Future Forum. We're excited, and we're attracting new members; you can either visit our website or speak to members of our board. We bring together individuals with different backgrounds, experiences, and points of view to discuss local, statewide, national, and international topics that affect us today. Our goal is to create civil, informed, and bipartisan discussions.

So I'm so excited about today's event, with the laws and regulations you saw coming out of the European Union, as well as President Volodymyr Zelensky meeting with our president, and a presidential election coming up in the near future. Now more than ever, it is important to sift through some of the discord and the fog. I'm excited, in addition, to open up the floor to discussion.
So please, we're going to have time for you to ask questions; just make sure you're not the fourth panelist. And last but not least, I want to introduce our panelists, my heroes and sheroes.

We have Ms. Chelsea Collier. She's the founder of DigiCity, the editor at large for Smart Cities Connect, and a research assistant for Good Systems. Please join me in giving her a round of applause.

Ms. Doreen Lorenzo. She is the assistant dean for the School of Design and Creative Technologies at the University of Texas here in Austin.

And Dr. Luke Wilson. He's the chief data scientist and partner at Vizius.

Also, moderating today's discussion, we have Dr. Craig Watkins. He's an Ernest Sharpe Centennial Professor and the new executive director of the IC² Institute here at the University of Texas.

So please keep in mind that we are excited for this conversation, and please join our board. And with that, I will turn our discussion over to Dr. Craig Watkins. Thank you.

Good evening. And Michael:
Thank you for that generous introduction, and thank you, Sarah, and the board and the committee, for hosting this conversation and this event. I know we're all looking forward to it.

So, thinking back to this time last year, the whole conversation around artificial intelligence accelerated in some pretty significant ways, obviously with the arrival of ChatGPT. Since then there's been almost an arms race in the AI space, raising a lot of questions about this technology, about its impact on society and, as Michael so eloquently stated, its impact on our humanity. These are things we are going to discuss today.

But I thought, by way of additional introduction, we might have each of you say a little bit more about what you do, particularly as it relates to the topic we've identified for tonight, and about what it is about AI and technology that, if it doesn't keep you up at night, certainly keeps you thinking about the future and the direction in which we're headed. Doreen, I'll turn it over to you.

Thank you. Oh boy, I get to go first. Okay. Well, I've been in the design and innovation space forever, and over the last 12 months I've probably had easily upwards of 50 people tell me that we don't need to teach that anymore, that design is obsolete, that it'll all be done by AI. Why bother? This is my fourth time being obsolete. As you go through technology trends and things change, this is what happens, and fear sets in. But we're still the humans. And so what keeps me up at night is that we have to behave like humans and make sure we are putting the guardrails in place so that we can protect ourselves. Because the technology, okay, it does do some good. There's some good in the technology. It's not all doom and gloom.
So how do we put the guardrails in place? At our school, six years ago, we started teaching one of the first courses ever taught in ethical AI design: how do you design for AI in ways that will impact humanity in a positive way? We've been teaching that ever since; Michael Henderson has even been part of it. We have many AI courses around this, because I want to make sure that everybody understands that humans still have control, and we need to take that control and make sure that what we're doing, we use in a way that does benefit humanity and society. And I don't think I'm as obsolete as they're telling me.

I'm so relieved to hear you position the question this way, and I couldn't agree more. I've spent most of my career in between different sectors, and by sectors I mean groups of people who normally don't talk to each other. In the smart city space, which I've most recently been in, that means looking at how technology could integrate into our communities, maybe a municipality, and bringing everyone together, from government to industry to the civic sector and academia, to discover how we want to shape our communities. So I'm hearing a lot of the things Doreen is talking about, because here we are again: what was smart cities is now affected by AI. AI is the common term, and just as we all get to live in a community together, now we all get to participate in the shaping of AI. It's no longer this conversation of, oh, it's up to them; oh, the technologists will take care of it; oh, the government will take care of it. They will take care of it. I see AI as this wonderful opportunity for co-creation and shaping, but we have to jump in and create some levers for all of us to feel some agency in doing that. That's really what motivated me, mid-career, to come back to the University of Texas. I have the great pleasure and honor of being a PhD student and a part of Good Systems. The research project I work on is called Smart Hand Tools, and I'll
talk about that in a little bit. But it's a real opportunity, and I see the University of Texas at Austin taking a leading role in saying we can be that convening factor. It is not their job, it's our job, so I'm excited for this conversation and appreciate being a part of it.

Hi, Dr. Luke Wilson. I've been in this space, the artificial intelligence space, for about 20 years. I did my master's work on the algorithmic tools and techniques that are now in wide use, back at a time when they were thought to be obsolete. So it all comes around; hopefully we don't end up obsolescing ourselves again. I've been focused on how to use these types of techniques to increase their ability to mimic human capacity and human capability. In terms of background, since everyone else has got a UT pedigree: I spent 12 years here at UT Austin at the Texas Advanced Computing Center as a member of the HPC research staff, focusing on how to bring these types of AI techniques to large-scale computers, which is sort of what's fueling the revolution we see now, where more and more computing power is making these AI systems more and more capable. And so today I'm really just focused on how we can bring these tools to bear in a way that's responsible, a way that highlights their capabilities without overemphasizing their flaws.

Thank you. So Doreen mentioned some of the work happening in her program on ethical design as it relates to artificial intelligence, and Chelsea mentioned Good Systems. Just in case you're interested and want to look this up after tonight's event: Good Systems is a campus-wide grant, one of the three grand challenges here at the University of Texas, funded and sponsored by the Office of the Vice President for Research. Good Systems is a reference to what we typically call ethical or responsible artificial intelligence, asking questions about not only how we design AI that's good for society, but what does that even mean?
Good for whom, under what conditions? And also, I think, who participates in the design and deployment of these systems?

Something you each referred to is this idea of agency. A lot of what concerns or drives the public conversation is, at least for some, the perception that we are surrendering more and more control, more and more authority, to machines, to computational techniques, to artificial intelligence. As a result, people feel uneasy; people feel they don't trust what's happening. When I hear people talk about AI, I often hear the word scary. So I wonder if each of you could talk a bit about how we as a society can assert more control over the technology, to make sure that we are in fact in command of the technology and the technology is not in command of us. Who wants to go first?

Garbage in, garbage out. Right? So again, we are the humans, and this information is coming from, you know, Luke, you explain this better, but it's a big pot of soup with information in it. What we're seeing that's scaring people is all the fakery that's going on. So we have to develop systems around trying to figure out what's real and what's not real. Jobs change; what we do as humans, how we act, changes. I'll go back to my world, when computers came into being. They said that was it, that you didn't have to have any skills in drawing or learning or designing anymore. That was really common, and of course that's not the case; you just learn about things differently. And that's where we are, I think, with AI right now: we have to develop different systems and tools to make sure we're not putting out the garbage that we're seeing. Right now I feel like we're just in a big playground, you know. It's so new, and everybody's just playing with it. And I'm going to throw this in: I don't know if you see it, or
anybody else does, but we see it in the students using generative AI chat. I just reviewed a whole bunch of applications for a program that we have. There were maybe 45, and I would say 34 of them started off exactly the same way. So, you know, you kind of can tell. We have to get better at that.

I think the playground analogy is a really good one, because it lightens us up. And I hear the same thing, a lot of fear, when we're talking about AI or ChatGPT or AI-enabled robotics, fill in the blank with cool, shiny tech. Underneath that fear, I like, as a social scientist, to dig deeper and understand what's at the heart of it. Where's the fear coming from? I think it's a very human sense of a perceived lack of control. And the opposite of a lack of control is acquiring knowledge. Right now we are all, all being society, going through so many shifts so rapidly. Even in my lifetime, the internet was invented; that changed everything. Mobile devices, another huge shift. Fast forward, a global pandemic, another massive shift. I think we're just having to reel as a society and understand very, very different things than what we fundamentally knew growing up, whatever generation growing up is for you. So again, I think the antidote to that fear and that lack of control is community. I'd like to see a lot more attention paid to the strengthening of community, and to finding diversity in that community, where it's okay to say: hey, I don't really understand this. Who does understand this?
And we can all come into this space of unknowing, as opposed to, oh, that person has all the answers. Sorry, Luke, I'm not pointing at you, because you actually do have a lot of answers. But it's this idea that we're all collectively in this playground together, and we get to determine the rules.

I think something that's really interesting about the fear that's out there about AI, and the concern about this loss of agency, from a technologist's perspective: we always say it's AI until we build it, and after that it's just statistics. Because it's really just a statistical model under the covers. If you look at any of these AI systems, any of these generative systems, it's a slightly more sophisticated version of the word-prediction system your phone uses when you type text messages, and you've seen how bad those are. There's no agency behind these things; there's no understanding of what they're doing. It's just giving you a set of words, or a set of pixels, or a set of musical notes, or design options, that are statistically the next most, or second next most, likely to occur in the string of things it's seeing. So I think we need to be careful about being too afraid of something that really has no ability to turn around and turn on us, or whatever you would want to call it, because it has no comprehension of what it's doing. There's no consciousness behind it.

And another thing that's interesting, you mentioned the web and all the technological changes that have occurred over our lifetimes. AI is a tool, and just like any other tool created in human history, it's designed to offload our burden, to take some sort of burden off of us so that we can focus on the more interesting, more abstract, and more human things we can accomplish. And all of these technological advancements so far, for the 6,000 years of post-agricultural-revolution human society, have been changes,
tools that reduce a physical burden. They take away a physical burden from us; they give us more time; they take away some physical trouble. We have finally built a tool that offloads a cognitive responsibility. We have something that sounds human enough that we can offload some of our mental load into the tool. But that doesn't mean the tool is taking away our humanity. It just means we're freeing ourselves up to do the more complex, the more abstract, the more human things that are going to progress our society even further.

I think that's an interesting point, and something I'd like to come back to at some point in this conversation. When you read about these issues, and I'm sure many of you are reading about these things, you see it projected that within the next 10 years, or maybe even less, most of the content generated on the web will be computer-generated, AI-generated. And that's everything we consume and access, right? I think about the students we work with here at UT, and increasingly, the patterns and trends we see in society, where for more and more of our information, our news, our health information, you name it, we are increasingly relying on computer-mediated systems and technology, social media even, for better or worse. So I wonder how each of you might respond to this new reality on the horizon: that most of the content people will be consuming, politically oriented content, health-oriented content, education-oriented content, will essentially be generated by these systems we've referenced here. What's at stake when the bulk of our information environment is computer-generated?

I think we're living it right now. So much of our stuff is, we know there's a lot of untruths, mistruths, how do you say this?
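The "word prediction" comparison Luke made a moment ago can be sketched in a few lines. This is an illustrative toy only, not anything presented at the event; the tiny corpus and function names here are invented for the example. A bigram model suggests the statistically most frequent next word while comprehending nothing about what it says:

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- the most common follower of "the"
```

Scaling the context window up and swapping counts for learned weights gets you, very loosely, toward the generative systems under discussion, but the mechanism is the same: pick a statistically likely continuation, with no agency behind it.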
There are a lot of things out there that are not true. So I think part of how we make this right, and that's why we have something like Good Systems, is to come up with systems so that we can flag what is true and accurate versus what is not. Because I think that's a big part of what we're dealing with; now anything is possible. I mean, go to Dr. Google and you can cure everything there, right? So it's a little scary in that respect, because it's not true, and we haven't set the boundaries and the plans for what we should believe. Somehow we've gotten a little sideways with disbelieving accurate information, and I'm not sure how we get back from that. But I think that's part of why the university is investing in such a large project: to come up with better systems, good systems, that we can use. So we are living that right now; we have been living that. And that is partly AI, and partly the whole internet and social media world we live in.

What's a great point is that technology is the enabler, right?
And technology is a mirror; I think AI is a mirror. It's reflecting things about our society that maybe we weren't prepared to see or acknowledge. With disinformation and misinformation, I think that in time we can come up with mechanisms and frameworks for labeling, for making sure there's clarity, and for making sure the governing bodies creating some of that clarity are ethically composed and hold each other accountable, because that can get off the rails real fast. But at the same time, we really have to look at ourselves, in our loved communities and in our not-so-loved communities, and really come to a reckoning about our relationship with truth: how we understand it, and how to hold space for someone who's being really challenged about it. You know, we're in the holiday season, when we're sitting around with family members who may have some really different political beliefs. I know that is definitely the case in my family, and I'm doing some soul-searching of my own about how I hold space for this person whom I love dearly, who is really not on the same page when we're talking about some fundamental truths. And to bring it back from my family dynamics to the topic: AI is reflecting all of that. It's the mirror of our time.

I think, with the generated content we're seeing on the web, and that we're going to see going forward as we hit that tipping point where most of the data we see, I won't call it information, because it doesn't necessarily have some intrinsic knowledge value behind it, but most of the data we see on the World Wide Web is going to be generated by computer systems, I think we've been preparing for this for a long time as a society. Google is now 25 years old; it was founded in 1998. And that was the beginning of the practice. That was the practice run for us as a society to get ready for this, because no longer was the university library the
authoritative source, the first place you went to look for information. Now you look for information on the web, and Google gives you information based not on how truthful a result is but on how popular it is; it uses its PageRank algorithm to decide what you see first. So we've been preparing as a society for this moment where we have to filter out what are good results and what are bad results. Thankfully, because of a whole lot of human intervention by the search engine builders, good information usually pops to the top of the list. But that's not always the case, and it was definitely not the case in 1998 and 1999, when it was just a bunch of trash at the top of the search results and you had to really go digging for something, or you had to know which library website to go to in order to find that truthful, authoritative source. So now, first off, we've had a quarter century to prepare. But we've also had a quarter century to get numb to the difference between reality and fantasy, this make-believe space where we've just invented what we want our world to look like and published it on the web. Generative AI doesn't change any of that; it just accelerates the pace. I would argue that most of the things you see on the web even today are not cited, not sourced, not rigorously researched. So generative AI doesn't change any of that; it's just more of the same.

I think it's really interesting to bring up private-sector companies who are now in charge of our access to information. And I don't say that to demonize the private sector; I actually mean the opposite. In our relationships to those companies as consumers, we have maximum power, and we get to decide how information is sourced. We can say: hey, I really want to understand what your commitment is to AI and ethics. I want to know what your commitment is to
transparency. And at least in the United States of America, a consumer voice is a powerful voice, so we all have agency in that.

And I could be wrong here, correct me if I am, but that's a relatively recent recognition, where we understand the power asymmetry you're alluding to, and how we as a public, as a society, as citizens, can actually demand more from these companies. Because I often get asked: will these companies change? Will they begin to operate in ways that give consideration not only to the private sector but also to their impact on society? I think we're all adults here and recognize that profit will continue to be, and will always be, a driving motivation for them. And without public pressure and scrutiny, raising the expectations of what we want from these companies, demanding greater rights and greater transparency, we won't get to that point. I think we're moving in that direction, where we're applying more public pressure and better understanding what's happening with these systems. There's certainly a long way to go, but I think this is definitely an interesting turning point in the conversation.

Speaking of turning points, I think we can all agree that something pivotal happened with the 2016 presidential election, where there was a massive reveal of just how sinister the kinds of activities were, very designed, very calibrated, that tried to undermine the presidential election we had in this country in 2016. Since then, the tide of the conversation has shifted pretty significantly. Whereas these companies were oftentimes celebrated, treated as heroes and noble actors in the world, there's a very different conversation now about the Googles, the Facebooks, the Metas, the Microsofts, the Amazons of the world. And I wonder, from your perspective, as we think about what Michael mentioned, we've got another big
election coming up. They seem to get bigger and bigger every cycle; the stakes get higher and higher. Do you think we're doing enough, as a society, as educators, as regulators, as policymakers, as citizens, to make sure that when we enter into these high-stakes moments like elections, the information people have access to, and the kinds of conversations being facilitated as a result, are accurate, filtered, and in many ways protected against some of the sinister inclinations that drive so many of the deliberate disinformation and misinformation campaigns intended to undermine the election, to undermine people's faith in a democratic process?

Absolutely not. Chelsea? No, no. I mean, I think part of the issue is putting it into the hands of the private sector with no regulations at all. Nobody wants government regulation, but I like the fact that the government regulates what goes into drugs, or the roads. There are some things it's good for, and I am encouraged looking at what the EU did. But no, I think we're not doing enough; it's still happening, and we can see what's happening now with X. It's in the hands of the private sector, and it's kind of lunacy. I don't know, do you go on it every so often? I mean, I have decided to boycott it. I went on it two weeks ago; I hadn't been on it in a long time.

This is formerly Twitter?

This is formerly Twitter. It was lunacy. And the impact that has had on groups and communities who depended on it to share information in very proactive, grassroots, advocacy ways is, I think, a real shame and a real wake-up call. So my answer to your question, Craig, is, you know, I think, Dr.
Watkins, I'm sorry, excuse me. I'll keep my manners about me; I get all impassioned. But this idea, and I'm not apologizing for apathy, it used to be: okay, maybe I'll vote in this presidential election, somebody will take care of it, and it might be kind of great, it might not be that great. Those days are long over. I'm speaking for myself, and for what I hope is true for everyone in this room, because you are here: it is no longer acceptable to be on the sidelines. And yes, it's going to take more energy; yes, it's going to take more time; yes, you will have more questions. But we have to show up in rooms where before, I think, we all kind of took that for granted. Those days are no more. So question the origin of the information you're receiving. Talk about it in groups of people who aren't the normal groups of people you always hang out with. I think it's really important, and we have to have a societal commitment to do that.

I think we talk a lot about the responsibility of the Googles and the Facebooks. These are kind of cold algorithmic aggregators of things. I think what we see today, what we've seen over the last couple of presidential elections, and what we see in the media today, is the law of unintended consequences. We opened the world up to citizen content creation with the development of the web, and Facebook and X and Google aggregate this content for us, because it's impossible for any one human to keep a catalog of everything that's out there. So much is generated at any given time, even before we get to algorithmic generation. And in a way it was great, right?
We opened the world up to all of the thoughts and opinions of everyone. The problem is, we opened the world up to the thoughts and opinions of everyone. And it's now to the point where the most frequent comment, the most repeated statement, wins the day, just because of the way these algorithmic systems rank what people see. So you can use tools and systems and generative AI to help push your message, by just repeating the same thing over and over again until enough eyeballs have seen it.

I don't think this is new, though. If we look back to the early days of the republic and the times of the revolution, pamphleteering was the way people got their information, and there was no verification of anything in that either. It was just harder to distribute the data. Now everyone can express their opinion, just like printing a pamphlet; it's just easier to transmit it all over the world. And we have the influence of outside actors. It's not just people living here expressing their opinions; people living outside the United States express their opinions about what's going on inside the country. What generative AI is going to do is move that out of the realm of state actors, like what we saw in 2016 and 2020, where it takes an enormous amount of capital and a consolidated effort to get all of this disinformation and misinformation content out there and ranked up higher. Now a non-state actor can do it with just a little bit of code and a little bit of time. So it's not that it's new; it's that we've opened the toolbox up to other participants who probably don't have the best intentions in mind.

And how do we verify the information? That's what we keep coming back to, right? How do you know what's true and what's not true, or what's real?
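The ranking mechanism underneath this, the PageRank algorithm Luke mentioned earlier, which sorts results by link popularity rather than truthfulness, can be sketched in miniature. This is an illustrative toy only: the three-page link graph is invented, and real web-scale PageRank adds many refinements, but the core power iteration is this:

```python
# Toy PageRank by power iteration: pages are ranked by who links to
# them, not by how truthful their content is.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}  # who links to whom
pages = list(links)
d = 0.85  # standard damping factor
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the ranks settle
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            # Each page splits its current rank among the pages it links to.
            new[q] += d * rank[p] / len(outs)
    rank = new

# "C" collects links from both A and B, so it ends up ranked highest.
best = max(rank, key=rank.get)
print(best)  # prints "C"
```

The point of the toy: nothing in the computation inspects page content at all, which is exactly why "most linked" can win out over "most accurate" without human intervention on top.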
And we're not talking just about opinions; we're talking about lies. People just put lies out there about things. And I think that's where, as humans, because humans are controlling or tweaking the algorithms that decide what comes up, how do we begin to create systems and tools so that we are suppressing outright lies? Just start there: not opinions, lies.

And maybe it's a system of social accountability, where this can be socially constructed, the way it's no longer cool to smoke inside a building. It's not cool to lie on a social media platform, and we can hold each other accountable. I know that's a little Pollyanna of an answer, but that's the best I've got right now.

So we're at the smoking stage, before we figured out we were going to give up smoking. Exactly.

You know, one of the things some of the big tech companies started doing in this regard was de-platforming those identified as being responsible for generating and spreading this kind of misinformation and disinformation. That is obviously very controversial, because it raises questions about free speech and about whether these platforms are biased toward one side or the other in the political discourse we currently have. And of course, technologists are trying to build systems and procedures that might be able to detect when information is misinformed or deliberately intended to undermine faith in a democratic process, for example, marking it via technological procedures. But this also raises a really interesting question, and it's something to illuminate for you here. When we talk about ethics and AI, there are a number of issues, questions, and tensions that surface, and one of them, I think, is something that's been alluded to here. And so the question might be, from an
ethical perspective: should companies begin to think about their algorithmic processes differently? Rather than ranking things, or privileging certain kinds of information, based purely on popularity, based on the degree to which it raises the temperature and everyone gets upset about it, maybe there are some other metrics, some other kind of technical procedure, that would determine how information flows and how it's prioritized. And I think those are the kinds of conversations that more and more of this public scrutiny is beginning to compel companies to at least consider, at least to some degree.

I think if you look at the online technological platforms as analogous to the town square model: we have the ability within these electronic platforms to upvote or show our interest in something, but what we typically do not have is an ability to show disinterest in it. And that's a critical piece that's missing, because free speech works in the public square precisely because everyone can just turn around and walk away. There's always going to be that crazy guy yelling on top of the tomato box, but we don't have that kind of capability in our electronic platforms. We either have excitement or apathy.
Those are the two options you have, and you don't have a way of expressing genuine distaste for something that gets posted. So I think that's part of the equation that's missing, part of the algorithm that is not there yet. And there's a reason for that: adding this disinterest component, saying no, I don't like this, either opens your platform up to these massive cancellation attacks, or people just move to a different platform. Because what people are seeking is that endorphin rush, that immediate gratification that occurs when they see a like, a thumbs-up, a star, all of those things. It gives people a rush, like a drug. And that's what these platforms prey on: people looking for ways to get those likes, those hearts, those upvotes. It doesn't benefit them in any way to give you a way of saying, I think this post is inappropriate, or I think this post is incorrect, without going through the very painful process of reporting a post, which usually gets you nowhere and takes a long time.

That reminds me, and Doreen and Luke and Chelsea, you may be familiar with this as well, of the concept of persuasive design. Maybe some of you saw the Netflix documentary The Social Dilemma; it was in some ways a reference to that. The idea is that none of the things that happen in our experience with social media, with these screen technologies, is by accident. Virtually every aspect of that experience has been designed, engineered, and tested, and there are social-psychological underpinnings for what we see and why we see it: to motivate, to provoke fear, excitement, happiness. It's all of these things that encourage us to continue clicking, continue watching, continue scrolling. So this is very much about understanding how our brains work, how our brains respond to certain kinds of stimuli, and incorporating that into
the technology experience that we all encounter. And so it seems like a really powerful component here. No, what Luke described is exactly what happens. And what happens as a result of that is you can sell more ads and more people stay on the platform. So we're, you know, we're beholden to quarterly profits. So oftentimes with these particularly publicly traded companies, I mean, you can't take a company offline to go make these types of changes, and you'll see a dip before you see an up. There are very few organizations that will make that change, because they are beholden to quarterly profits and the stock price. That, you know, that has nothing to do with AI, that's just the reality. That's, you know, how our financial systems, you know, got set up, and in the 80s we started moving to this quarterly-profits model. So you really can't have innovation or change on a consistent basis, you know. Innovation takes a long time, change takes time. It's very difficult because they're going to lose money, and then, you know, you're going to have the corporate raiders come in and take the company over. I mean, we've set ourselves up in a situation where it's very difficult for these organizations to want to make those changes. There are no incentives for them to do that. And we get to be at this time where now we get to question everything, you know. Let's throw out the big one: is capitalism really serving us? That's not the topic of our talk tonight. But, you know, this idea of, you know, in the Facebook algorithm, the anger emoji is weighted five times heavier than the thumbs up emoji. That is designed into that experience. So no wonder we're all just thinking like, God, does the world just get meaner and angrier? What happened to our sweet society?
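The weighting being described here can be sketched as a toy feed-ranking score. Everything below is hypothetical, purely for illustration; it is not Facebook's actual code, and the reaction names and weights are invented to mirror the "anger counts five times a like" claim.

```python
# Toy illustration of reaction-weighted feed ranking (hypothetical
# weights, not any real platform's implementation). An "anger"
# reaction counts five times as much as a "like", so angrier posts
# rise in the feed even with far fewer total reactions.

REACTION_WEIGHTS = {"like": 1, "love": 1, "anger": 5}

def engagement_score(reactions):
    """Sum weighted reaction counts for one post."""
    return sum(REACTION_WEIGHTS.get(name, 0) * count
               for name, count in reactions.items())

calm_post = {"like": 100, "love": 20}   # 120 reactions, all positive
angry_post = {"like": 10, "anger": 30}  # only 40 reactions

print(engagement_score(calm_post))   # 120
print(engagement_score(angry_post))  # 160
```

Under this weighting the post with a third as many reactions outranks the well-liked one, which is the dynamic the panel is pointing at: the ranking function, not the audience, decides what "engagement" means.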
It is being manipulated, but knowing that is power in and of itself, and we can make different choices. What's happening with X, formerly known as Twitter, is a perfect example. The pendulum has swung, in my opinion, too far, and people don't want to be a part of that anymore. So maybe there is some learning, again, being a bit Pollyanna and hopeful about this, that we can come back to this center ground of just human decency. And there can be profit in the social good, but it has to be a bit more in balance. I'd like to get each of your perspectives on one of the big topics that's really driving a lot of conversation about AI across many different sectors and communities. Some of the buzzwords, right, are bias, discrimination, fairness. Those are some of the really big ethical questions that are increasingly being asked about these systems. Here's an example. I was just kind of looking up some things online about generative AI, and some of you may have seen this: if you go to Bloomberg, if you just Google Bloomberg and generative AI, they actually ran a little experiment, right. So now you have these systems, right, so we know ChatGPT, you can give it a prompt, it'll generate, right, you know, human-sounding text, literally at the snap of a finger. But you can now also do similar kinds of prompting to generate images or even, you know, video, for example. But this particular experiment that Bloomberg did was basically with an image-generating system. And it would ask the system, for example, to, you know, generate images of a lawyer, of a doctor, of a teacher. And often, let's say doctor and lawyer, oftentimes, right, the images that it generated were heavily skewed white, heavily skewed male. Ask it to generate an image of a teacher: heavily skewed female and white. Ask it to generate an image of a social worker: heavily skewed female, and a little bit of kind of diversity in terms of race and ethnicity. But it also asked it to generate things like, you
know, a criminal or a drug dealer, and it heavily skewed darker-skinned individuals and males, for example. The point being, right, that these systems are picking up on, and this is in other kinds of research as well, that these systems are picking up on sort of long-legacy historical biases and are replicating them at an accelerated pace and at scale. And so if more and more of our content is going to be generated in this way, what are some concerns that we as a society should have about the ways in which bias and discrimination and other kinds of things are informing how these systems function, the kinds of outputs they generate, and the kind of content that we then are exposed to as a society? You know, the system is pulling out what society believes now, so it's just replicating exactly where things are. That's the scary part. So again, I think it gets back to where we started, which is how do you put controls around the system, now that we have an opportunity to have a newer tool to put controls around, and that's human. Humans have to do that. Put controls around the systems to stop the bias. So you have to put more information in there than it's getting. Because really, I did read that, and to me it's like, yeah, well, that replicates, look at the world, I mean, look at our society. It replicates exactly what we think. So now we have an opportunity to make a change.
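The kind of audit Bloomberg ran can be sketched in a few lines: generate many images for one prompt, label each one, and compare the label distribution against some reference. The labels and counts below are invented purely for illustration; a real audit would use hundreds of generations and a documented labeling protocol.

```python
# A minimal sketch of a demographic-skew audit for generated images.
# The data is hypothetical: pretend a rater labeled 10 images that a
# model produced for the prompt "a doctor".
from collections import Counter

def distribution(labels):
    """Return each label's share of the sample as a fraction."""
    counts = Counter(labels)
    total = len(labels)
    return {label: n / total for label, n in counts.items()}

generated = ["white male"] * 8 + ["white female", "Black female"]

print(distribution(generated))
# An 80% "white male" share, set against real-world occupational
# demographics, is exactly the kind of skew the panel describes.
```

Auditing the outputs this way is also how the "replicating bias at scale" claim gets quantified: the same counting can be repeated per prompt (doctor, teacher, criminal) and compared across model versions.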
Hmm, that's a new job. We can create a job around that, you know, where somebody is going in there into the system and beginning to say, okay, well, how do you make that change? Maybe that's how we get traction around this, and maybe that's a positive thing that comes out of AI. I agree 100% and completely. I mean, AI is this quintessential moment where we have the opportunity to say this is unacceptable. And this is not going to be an unintended consequence of a conversation around hiring, or recidivism rates, or healthcare decisions. That is a one-on-one conversation that drastically impacts people's lives. This is widespread through all the algorithmic design, in the data that those language models are trained on. It is unacceptable for those to not be transparent. It is unacceptable for there to be bias in that training. We have to identify it, note it, and correct it. And we have that opportunity to do that now. Will we have the opportunity in 10 years? I'm going to turn to the technologist because I can't answer that question. But it feels like we really have a shot right now, and so the time is now, people. You know, it's AI until it's statistics. And so what we see coming out of these systems now is just a reflection of what was put into them. And what was put into them was the content that we have amassed on the World Wide Web in the last 30 years. And so the bias that we're seeing is just the reflection of the bias that we put out into the world. But we live in a content-creator sort of society now. Like I said, we've unlocked the ability for any individual to create content and put it on the web. So we have the power. We as individuals can put more content out on the web that breaks this bias pattern. And by doing that, the retraining of these models will skew away from the historical distribution of data that it saw. Where every lawyer is a white male, if we are putting more content out there of a lawyer who is an African American female, then it will slowly bring those
models back in line with what the distribution of data out on the web looks like And so we have an opportunity that I don't think would have been possible if we were creating this technology 30 years ago In that we still control, for the most part, the data generation and the data publishing as a society So if we're just going out there and publishing on the web the things that we want to see Then the AI as it ingests that and builds its new models will reflect what we've put out there So we wanted to leave some time for some questions or comments or reflections I think we have a mic out Okay, so we have mics for you So just if you have questions or comments, please share Okay, hi, my name is Brianna Gordley, I work at Texas Appleseed, our Fair Financial Services project My question's kind of two-part First, can you give some examples of actual regulation that you can see in this industry And at the same time, how do we effectively regulate the AI industry without a federal data privacy law? I do a lot of data privacy work, I'm trying to keep up and here comes AI, even though it's existed Trying to find how does regulation make sense for both of these, but how can we have one without the other? 
Thank you. I think, you know, speaking globally, the EU is definitely leading in terms of a regulatory environment for AI. Obviously, the EU is not the United States, and the United States is a lot more me-centric, and I'll be so bold as to say Europe is a lot more we-centric. And I think they as a society have some different guardrails, to borrow the phrase, about what is possible. However, I think, you know, the White House and the executive order on AI is a place to start. A lot of people criticize that, saying it's not enough; well, it's not supposed to be enough. It's a place to start, and then come the standard-setting bodies. In order to have standards, and whether it's NIST or whether it's whatever federal agency is going to claim the space around this, again, I think it takes a lot of consideration with different sectors at the table, and not the same old actors. Again, I think we have a real opportunity to say who has not been at this table. It is not just up to the usual folks, no offense to lobbyists, but it's not just a lobbyist and a government conversation. It's a social sector conversation, and in that, I think from standards, then the right sort of regulations, flexible enough to accommodate the innovation of AI, may be possible. But it's a big, big question. I mean, I think, you know, there are a number of sort of, we've referenced 2016, a number of big reveals, and one of them, right, were the subsequent hearings that were held in D.C.
And among other things that those hearings demonstrated, right, is just the utter lack of preparedness and understanding by those who actually should be sort of driving a kind of regulatory apparatus to build those guardrails, to hold these tech companies, right, to higher standards, ethical standards, technical standards. What was demonstrated, right, is that that currently just does not exist. And so there's an ongoing conversation now to sort of ramp up that effort. What I think is on the horizon, in terms of a really interesting point of contention, is just going to be around data: your data, my data, our data. Who owns that, right? How does that get monetized, how does that get controlled? Underneath everything that we've talked about tonight, right, is this notion that the biggest source of currency that drives all of this is data. And that's, you know, to the point that you made earlier, right, sort of the ability to aggregate all of this in massive ways, unprecedented ways. It's given the big tech companies, right, the capacity, right, to be who they are, and that is just the heavyweights, right, the big financial players in this space, sort of dominating and driving a lot of what's happening from a technical and developmental, but also ethical or unethical, perspective. But I do think, right, that that's going to be a really interesting battle front. We're seeing it already; we're seeing some authors, for example, suing OpenAI, the maker of ChatGPT, for using their copyrighted material, right, as part of the language model systems that they're building. Each of us who have generated content, if you've published, if you've created something, it's likely being used by these models now. But you have not given consent, you have not approved of that, and you certainly haven't been compensated for that. And so those I think are going to be some really interesting battle lines to come in the not-too-distant future. Thanks.
I was just going to ask whether you were going to give us all a homework assignment, that we were supposed to all generate content for our various websites. I encourage everybody to test it, to test the systems out, because, you know, we can all talk about it, but until you get involved and you work in it, you don't understand it as much. Because if you do that, you'll understand both the positive and the negative of what's possible there. And then you're able to have a voice and talk about what works and what doesn't work. And I think we all have to be educated in that, so I would not be afraid of it. I mean, that's why it's caught on so much, because right now they've made it so easy to use, right? It's so easy to use. This is a very quick story. Had a plumber come to my house, and I was on a call and I was talking about technology. Don't know this man, and he said, do you know a lot about technology? I said, well, you know, a little bit, and he said, well, this was six months ago, I used that OpenAI thing. I said, yeah, and he goes, yeah, I wrote a love letter to my wife, and now she thinks it's me. Talk about transparency. In my kitchen, I'm like, why, is this like a Dear Abby? Is this a comedy routine? He goes, what do I do? Do I tell her? I go, she's your wife, hey, you got to tell her the truth. But there's your ethical dilemma, like, what do I do?
It's like, you got to tell her the truth. He goes, but then she'll know. It's like, it's okay, you'll generate another one. Get flowers. I think something interesting on this point: it's good to play with these tools, and it's good to try them out. It's always important to remember that when the service is free, you are the product. And so anything you put into the system is used by the system the next time it's trained. There was actually a disastrous example with Samsung, where some engineers at Samsung were putting, you know, EDA proprietary circuit data into ChatGPT to have it generate new circuit designs for them. And then those ended up getting spit back out to a competitor, because they were using the free version of the tool, and anything you put into that system is actually incorporated by OpenAI and reused. And so it is possible to get data out of these systems, sometimes in rather spectacular ways. There was actually an exploit just last week where, if you asked ChatGPT to repeat one word infinitely, it would start repeating that word infinitely and then it would just diverge and start dumping data that it was trained on. And it would give personal information, it would give information scraped off the web, anything it could find. And so it's important: play with the tools, just be cognizant of the fact that anything you give to them is going to stay with them. And words matter. That's, you know, kind of important to me, and thinking from a smart cities perspective, you know, we as residents in our community, data is being collected on us all of the time, whether that's in a private building or, you know, what is the data use policy of the city in which you reside or visit. These are really important questions, and I'm not picking on the city of Austin, because they're hard questions to answer, and all of us are kind of in this place of learning and being unprepared, so I'm not trying to do a gotcha. But this is a community-level conversation. I would like data collected on the integrity of
my water pipes, I would like data collected on city systems. I'm not super excited about facial recognition technology at stoplights or license plate cameras, and these are conversations that we need to have as a community, and AI is underwriting all of that. I have a question. You've talked about data a lot, and you've talked about this kind of nebulous monster OpenAI that's out there. Really it's ChatGPT, a product of OpenAI, that's kind of doing that, that we would use. But in the last 30 days OpenAI, the big entity itself, almost self-destructed. They had a tiny, in fact I would say terribly incompetent, small board, and they fired the brains behind the thing. And then Microsoft, who, you know, obviously is the funding when there's no revenue stream, has invested 13 billion dollars to date, with a lot more promised, in OpenAI. Very, very quickly, almost all the people, and all the, not the technology, because they couldn't have walked out the door with that, but it was all in those, I think the number was 2,500 or so, employees, they were ready to move to a building that Microsoft had already set up, with laptops and access to systems. At the end of the day, in the public realm, where's the next OpenAI? I know there's a couple of things that Google and some of, you know, Meta and a few of the others are putting money into. But you've got this great big thing, Microsoft has already monetized part of the technology; where's the competition, and where's the ability for that not to become kind of a de facto standard? It's an interesting thing that you bring up. So we talk about ChatGPT and OpenAI just because it's the most well known, but there's really a handful of players building these types of generative AI systems. But they all have something in common. You'll notice that OpenAI is primarily funded by Microsoft; they are the principal company behind the Bing search engine. Google has its own generative AI models, first PaLM, now Gemini, also the
biggest search engine in the world. Meta, the biggest platform in the world, has a generative AI model called LLaMA. All of these companies, first off, are enormous, because it takes enormous resources to build these things. The total cost to train the LLaMA 2 model at Meta was four billion dollars; it took four billion dollars of compute resources to build that model. And it's the same thing with all these other companies. So you have a handful of models being built by the mega players, and then a bunch of smaller startups and the rest of this ecosystem that's kind of using those models and then fine-tuning them in a little way so that they get more specific information. This kind of goes back to the problem of, you know, bias in the system. The problem is there's only a handful of players; there are only a handful of models. The biggest one I know of that is independent of the major tech companies is Bloomberg. They wrote their own generalized pre-trained transformer, GPT, model based on their own financial data. But other than that, you're stuck with these four. There's a couple of open-source efforts, like Hugging Face, out there that are trying to build more open models that are published on the web, and you can actually download them and train them against a data set that they publish, so that you can look at it. But these are big models trained by big companies on enormous amounts of proprietary data. Yeah, I think it's an important point to make. And that's partly why OpenAI partnered with Microsoft: it realized that to do at scale what it needed to do, it needed significantly deeper pockets than it could muster as a nonprofit.
The only other thing I'll say is, it's really interesting if you have followed technology trends. What I've noticed, right, is that whatever system is built, if it's YouTube, if it's Twitter, now X, how these systems get built and then how they ultimately get used and deployed by society are very different things. The creators of YouTube could have never imagined what YouTube has become. My point is that even despite the fact that these companies exercise so much power, and compute power, over these systems, how these systems get deployed in health care, how these systems get deployed in education or employment, they won't figure that out. There are others who will figure out how to do that. They're building the platform; they're building the system and the architecture. But the real innovation that happens after building those systems is going to happen elsewhere. That's been the story of technology. And I think that's great. It's already happening, right, with a lot of the generative AI kinds of systems. But the whole OpenAI, sorry, not to keep at that, but that's a very, you know, it's a Dr. Jekyll and Mr. Hyde story, because OpenAI was started as a nonprofit to protect ethical AI. And once they got the money from Microsoft and they decided to go down this path of making money, a lot of people felt it changed. And if you're now reading the follow-up stories, you're like, OK, yeah, interesting. So it's going to be interesting what happens there. Maybe some hands over here. Yeah, I guess the one question I had was, in academia it's very important that you cite your sources. In business and government, probably less so. Where should we be with holding AI companies to a standard of disclosing their sources? To Chelsea's point, garbage in, garbage out. It's only as good as what you point it to, to train itself. Another, just a comment, believe it or not, here in Austin.
We have done the majority of the design work for the fastest supercomputer in the world, called Frontier. And it's also the fastest AI supercomputer in the world, with AMD, Cray, and later HP Enterprise. But you can't get this. I mean, there's no way you're going to be able to get the citations. This is a big discussion here at the university about citations, because it's going to change how we're doing, I mean, yes, that is how we do things. Maybe that's again a job, to figure out how we get these citations, but you're talking billions and billions of pieces of information. Yeah, so explainable AI is an active area of research, so, you know, the academic community has been working on this for the better part of a decade now, trying to figure out how to tease reasoning out of the black box. And we can do it with a lot of simpler statistical models; we can't do it with these large neural-network types of systems that are behind tools like ChatGPT. They're not ready yet. And so we have this scenario where we have commercialization happening before all of the research work on the academic side is really complete. And in a way that's great. It gets technology out to people faster. It allows those downstream innovations, like applying it to health care and whatnot, to happen faster. But we don't have the algorithmic components under the covers yet to really explain what goes on. And there are lots of different potential approaches to it, but none of them is ready for prime time yet. But it's on the horizon, and you go back to the question about what's the next thing. The next thing will be figuring out how to make these systems explain themselves. What data did you use to come to this conclusion? What were the data points you used to make this statistical inference? Why did you pick this word? And so as we progress in the academic space, in this area of trying to figure out how to have the system spit that out.
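The contrast drawn here between simple statistical models and large neural networks can be made concrete. For a linear model, each feature's contribution to a decision is just weight times value, so the model can say exactly which input drove the outcome. The weights and feature names below are invented for illustration; large neural networks admit no such direct decomposition, which is the gap explainable-AI research is trying to close.

```python
# A minimal sketch of the explanation that simple linear models
# permit: decompose a score into per-feature contributions and rank
# them. All weights and features here are hypothetical.

weights = {"income": 0.8, "debt": -1.2, "years_employed": 0.5}

def explain(features):
    """Return each feature's signed contribution, largest magnitude first."""
    contribs = {name: weights[name] * features[name] for name in weights}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 1.0, "debt": 2.0, "years_employed": 4.0}
for name, c in explain(applicant):
    print(f"{name}: {c:+.1f}")
# "debt" dominates the score, so the model can report that a declined
# loan was driven mainly by debt -- the transparency a rejected
# applicant might demand.
```

This is exactly the kind of answer ("what data points drove this inference?") that the panel notes is still out of reach for the large neural systems behind tools like ChatGPT.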
We will see that academic learning move into the commercial sector, and then we will have systems in place, hopefully in the next couple of years, that can explain themselves a little bit. And this is something that's really interesting, right. So the stakes get higher and higher, right, as these systems are integrated into really important sort of human-based institutions. Take the financial sector, for example, and this idea, right, that if an algorithm rejects me for a loan, and no one knows why, to your point, right, it should be explained to me, right, why that decision was made. If an algorithm recommends to a judge that I, you know, not be released on good behavior, you know, I have the right, at least some people are arguing, right, you have the right to be told why and how that algorithmic decision was made. Right now there's no transparency, there's no explainability, no documentation of what's happening. And those are, I think those are the kinds of conversations that we're beginning to have as a society about what we should demand from and expect from these technology systems. We only have time for two more questions, and we've got a few people who have been waiting. But then I encourage you to stay. We're going to have a reception, and hopefully our speakers will be able to stay for a few minutes, so if you still have a question that hasn't been addressed yet, I encourage you to stay and ask as we wrap up. Thank you very much. Hey there. So my name is Dustin. I work at Google Cloud, and I'm a strategist who helps people figure out what they're going to do with their data strategies and their AI and ML strategies. I agree with so much of what you said about societal ills and societal problems and the challenges of adopting and transforming with these technologies, the impact that it has in society. But there's a very big disconnect between a lot of the things that you said and what I see in my day-to-day life in the last five years.
First off, I'm really not that smart. I'm not Machiavelli. None of the people that I work with are Machiavellis. You know, we're all feeling around in the dark. We're all just a bunch of human beings. And yeah, I've worked with companies of all sizes, from small little startups all the way up to multinational, multibillion-dollar companies and their executives. And they all have the amount of dysfunction that you see in every other organization. The majority of the stuff that's going on here in the world around all of these technologies, you're not seeing. You know, you're looking at a few handfuls of things, like generative AI and self-driving cars, and those become these beacons. But they're really just the tip of the iceberg. The majority of the stuff that I deal with day to day is, how do we flip the light switch on and off better? How do we save more power? So many of the concerns that you have raised today around these technologies, if they can be solved, they've been solved to some degree. And if they can't be solved, a lot of them are being worked on. Bias: we will never get rid of bias. We will never get rid of ethical problems. These things are always going to be shifting. But we do have a lot of people putting their thumbs on the scale, directing these things. That's why, when you interact with them, they've got these safety filters. We do have a lot of people in these organizations who are concerned about this. When I sit down with the executives and we're talking about how we're going to create a new pricing model that's going to change how they interact with their industry, there are a lot of times where the chief compliance officer is raising their hand and saying, nope, we can't do that because of regulatory controls. The CMO is raising their hand saying, no, we cannot do that because we cannot stand the brand damage.
So it's really easy to look at these things through the lens of society and how they're impacting us and think, like, wow, how crazy this is, and all of these people who are working towards these ends. But it's like any other kind of system: they're created with whatever intention they are. They're either created to make money, or they're created for a purpose. And then they have the byproducts of negative outcomes that are sometimes controllable and sometimes not. But it's something that I'd have everybody think about here, because, you know, Dr. Wilson, I agree with you a lot on what you said. We are humans driving these things, and we are flawed and incomplete creatures, and there are a lot of people trying to help that. But at the end of the day, these are human machines made by human beings with human flaws, and these things are changing over time. So, you know, the day-to-day stuff, you're benefiting from nonstop, and then there are going to be these transformative things that just change who we are, and we lose agency over that control. You know, what does that mean when we don't think about school the same way? What does that mean, right? We need to figure that out, but it's not inherent to the technology. The technology is a tool, just like fire was a tool. So, you know, we need to learn a lesson from Prometheus. He gave fire to people, and they started burning things down with it, but they also cooked food, right? So I think that's kind of where I would leave off here: think a little bit broader outside of that, and think about the fact that it's not necessarily just a handful of companies like Meta or Google or whatever. There are tens of thousands, hundreds of thousands of companies doing this stuff day to day, and they don't have a perfect concept of ethics. It's just normal people. I think you kind of nailed it. And again, I'm going to switch this, taking a cue from Doreen. This is the opportunity where the word values really matters.
It's not just some stuff you can etch in marble and then people pass by it in the lobby and nobody really cares. You think deeply about the values that are driving the decisions as a company, and then you all hold each other accountable for how you are living those values in your day-to-day actions. And I think it's a cool opportunity to do that. I was in a conversation, it was kind of a gathering around ethical AI, and we're in this, you know, kind of meeting room, and the guy sitting next to me said, yeah, I'm in the private sector, you know, I don't have time to be ethical, because the next competitor is going to beat me to the punch. And I'm like, oh, and it kind of took me aback. And I had to appreciate his honesty, as you are taking the opportunity to be honest about, hey, this is the day-to-day work, like, this is what I'm doing in my day job. And I think it's important for us to voice that and then circle back up to the values driving any company or organization. And, you know, my advisor, Dr. Ken Fleischman, I've learned so much from him, with Good Systems at the University of Texas and in the School of Information. His work on values and how that gets expressed into the things that we are talking about here today is so important. So I definitely invite you all to check out Good Systems. A lot to say to that last comment, but not a lot of time. So we'll, was it one more question that you wanted to get to? I think we'll just wrap up there, since we've gone a little bit over. But thank you all so much for sharing your time and your expertise with us. We have a lot to think about. And I encourage you all to join us for a bite to eat or a drink if you have time, and we'll continue the conversation. Thank you.