All right, I think we're going to get started. Good afternoon, everyone. Welcome to the Berkman Klein Center and to the Institute for Rebooting Social Media. Thank you all so much for being here. It's lovely to see some familiar faces, and also some new faces, at least new to me. For those who don't know me, my name is Tony Gardner and I'm a program manager with Berkman's Institute for Rebooting Social Media. It's my pleasure to introduce our moderator for today's event, Yong Jin Park. Dr. Park is a professor in the School of Communications at Howard University working on the effects of emerging technologies as they relate to social and policy problems. A member of our institute's inaugural visiting scholar cohort, Yong's current research focuses on AI, algorithmic bias, personal data, and digital inequality. Thank you, Yong.

Thank you, Tony, for the kind introduction, and thank you, everyone, for joining. This event came together thanks to Tony and many colleagues, and I'm so glad it is finally happening today.

Let me introduce Dr. Neuman and his intellectual background. He comes from Berkeley, and I heard that when he was accepted into the sociology program at Berkeley, the acceptance letter actually arrived by telegram, not by email or anything like that. How much technology has changed since then; it's a fascinating story. He then moved through the academic world and is now at NYU Shanghai. As you can see, over the last few decades Dr. Neuman has produced a body of work spanning mass communication, political communication, and new technology. His last book was The Digital Difference, which is a fascinating book. Now there is the new book, which is the topic of today's conversation. When I put together my slides, the cover image wasn't ready yet; I only saw the new cover a few days ago. It reminds me of Steven Spielberg's old movie E.T., from 1982, with the child. I watched that movie. It's a nice title and cover.

As for the history between Dr. Neuman and me: we met in the communication department at the University of Michigan. Even before that, we had a nice Korean hangout among fellows of the program. And even before that, I was introduced to The Future of the Mass Audience, written in 1991. I was introduced to that book by Andrew Goodwin in San Francisco, around 1998. It was the first book I read about mass communication and technology, and I was lucky enough to take a class on it. Unfortunately, Andrew passed away two years ago, but Dr. Neuman's work, which he introduced to me, still guides much of my own.

Before I hand things over, one last thing that resonates with me about Dr. Neuman's work: The Future of the Mass Audience starts with a story about World War II, about how much of Europe was destroyed by the war. Europeans had an opportunity to rebuild their media system anew. But as Dr. Neuman points out, instead of taking that opportunity to build something entirely new, they rebuilt the system exactly the same way, with the same crooked rules, the same old-fashioned structure, because they missed what they had lost in the war.
That resonates with a lot of how we think about new technology and how it should be designed. So with that, I welcome Dr. Neuman to introduce his new book. Thank you.

My plan is to speak relatively briefly and take advantage of our informal setting here while you're munching on your lunch, so think of some things you'd like to raise. I'm not going to use PowerPoint; I'm going to speak from some notes. I'm sure many of you have seen the back of the head of a speaker who is reading bullet points to you off a screen, and found that it's hardly the ideal way of getting a conversation going. So I'll try to resist PowerPoint dependence as an impediment to communication.

I'm going to start by asking if you've heard the one about how paperclips are going to help kill us. Have you heard this AI story? Well, a couple of people know about the paperclips. My colleague Nick Bostrom at Oxford University developed a scenario, I think with good intent. The idea is that if you gave a command to an all-powerful, super-smart AI system and said, make as many paperclips as possible, eventually that empowered agentic system would kill all human beings, because it would intelligently figure out that the atoms that make up living human beings could be converted into paperclips, and that is the only command running the system. So ultimately those paperclips would kill us. It seems like such an exaggerated scenario; I'm sure Professor Bostrom's intent is to say, let's pay attention to the extreme case. And basically every other day, at least in the New York Times, there is an "AI is going to kill us" op-ed. There have been two particularly notable ones, one by Noam Chomsky and another by Harari and colleagues, raising questions about the potential threats of AI. Since I'm talking about the future and the next generation of AI, I take note of this tradition, and I respect it; I think we need to be very careful about the potential negative consequences of AI, and I'll conclude with my particular view of that.

But for the rest of my presentation, let's adopt, for the next half hour, a view that asks what the positive effects of next-generation AI could be, and how we might work toward that ideal. The central concept of my book, evolutionary intelligence, is based on a metaphoric notion: wheels made us more mobile, machines made us stronger, telecommunications allowed us to communicate across the earth, and AI may actually make us smarter in practical, day-to-day ways, not just for elites, but for every individual who wants to take advantage of it. If the great majority of human beings have mobile phones, many of them smartphones, I think ultimately we will see controllable AI in the hands of a very large portion of the population, and that could be a good thing. I point out that the term evolutionary intelligence doesn't put the intelligence in the human, and doesn't put the intelligence in a robot or machine; it focuses on the underlying concept of evolution.

So let me take just a minute to run through, and you've all heard various versions of this, but this is my version, what I call language, land, leverage, and literacy.
If you think about the stages by which the evolved humanoid cognitive system has survived, while 99% of the species that have lived on earth have not, it's because of our capacity to adaptively find elements in our environment that we can use. The first was language among ourselves, so we could pass on wisdom from one generation to the next. Language probably started as a bunch of grunts and gestures; more formalized language, with the capacity to predict the future and talk about the past, is probably about 100,000 years old. Writing is only 10,000 years old, but language came first.

The second, land, is moving beyond hunting and gathering, and that's only 10,000 years old, and we know a fair amount about this. So for 90% of our time as modern humans on earth, we survived in the grasslands and forests by picking berries and chasing down small game. Ten thousand years ago we discovered, probably by chasing some animals into a canyon and figuring out a way to keep them there, how to domesticate animals, and how to take seeds and purposely grow food. Farming and domesticated animals are literally only 10,000 years old, and they produced enough surplus wealth to allow for cities and larger social communities. Before 10,000 years ago, human existence was pretty much families and very small extended tribes.

The industrial revolution, a couple hundred years ago, is where I use the word leverage: we replaced wind and water power and animal power with machine power. And finally literacy, and this is what makes my version a little different from others: mass literacy, the common individual able to read and write on their own, is only about 100 years old. For those of you who pay attention to these global patterns, the global literacy rate is now in the 80 to 90 percent range. So we have now come to an age where we can expect human communication to go beyond just verbal exchanges.

So let's take a look at what AI can do now, and I'll focus on what it's good at. My version is that there are three things the current generation of AI is good at. The first is pattern recognition, and that's what leads to the capacity for generative technologies and large language models (see the small sketch after this passage). Second, it's pretty good so far at SLAM robotics; SLAM is simultaneous localization and mapping, the capacity to locate yourself and map your surroundings while in motion. We think of it in terms of self-driving cars, which are the focus of a great deal of current research applying AI. So: pattern recognition, generative language models, and SLAM robotics.

What we pretty much all agree on is the somewhat awkward term AGI, artificial general intelligence; that acronym is now used for human-like intelligence. My view of the future of AI is that we are going to have a variation of AGI, especially if we can define it as co-pilot rather than pilot, that is, an advisory service to human behavior rather than a handover of control of our environment to some kind of machine system. And I emphasize the importance of the stage we're at. The concept of artificial intelligence was invented in the mid-1950s, and it isn't until 70 years later that we really start to see examples of the artificial intelligence that was imagined back then.
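[Editor's note: to make "pattern recognition" concrete, here is a minimal sketch, not from the talk or the book, of the kind of statistical pattern matching that underlies these capabilities. The synthetic data and the nearest-centroid classifier are illustrative assumptions.]

```python
# Minimal illustration: pattern recognition as statistical similarity.
# A nearest-centroid classifier "learns" two classes by averaging
# examples, then labels new points by distance to each learned pattern.
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic clusters of 2-D points (hypothetical training data).
class_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
class_b = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))

# "Training" is just summarizing each pattern with its centroid.
centroids = {"a": class_a.mean(axis=0), "b": class_b.mean(axis=0)}

def classify(point):
    """Label a point by its nearest learned centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(point - centroids[k]))

print(classify(np.array([0.2, -0.1])))  # -> 'a'
print(classify(np.array([2.8, 3.1])))   # -> 'b'
```

Modern generative models are vastly more elaborate, but the underlying move is the same: compress seen examples into learned regularities, then match new input against them.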
And the question is: what was lacking in the technology and the environment that prevented a real artificial intelligence from being demonstrated for 70 years, despite a great deal of effort? I think there are three impediments we can identify, and thinking about them is helpful for understanding the fourth, which is the focus of my research.

The first was that we didn't have sufficient computing power. It turns out that large language models require real heavy metal, big iron, and one of the reasons OpenAI decided to work closely with Microsoft was access to Microsoft's extended server-farm network and its capacity to supply the computing power they've been putting to work. The second was that we didn't have accessible large language databases in the 1950s, 60s, and 70s; we didn't have what is available now, largely from the internet but from other sources as well. And the third was that we didn't have the mathematical tools to make sense of parameterized prediction models with 175 billion parameters. The notion of layered neural nets and the last 20 years of research have produced probably about a dozen mathematical models that help us make sense of them. So the math, the data, and the computing power together generated the level we're at now.

What's still missing is the next step, toward something that would, hopefully in a positive way, approach AGI. My argument about what the next step needs to be is a way of dealing with unstructured data. If you think about the P in GPT, it stands for pre-trained: generative pre-trained transformer, where transformer is just the name of one of those neural-net modeling architectures. My proposal is to pay attention to the work of my colleague Yann LeCun, who runs the FAIR research unit at Meta and is also a professor at NYU in our Courant Institute. His model has a different acronym; he's published a paper on it, which I commend to your attention, published in June of last year, 2022. It describes the JEPA model: joint embedding predictive architecture. The joint embedding part is not the key issue. The key issue is the P: the P in GPT is for pre-trained, but the P in JEPA is for predictive, and the secret of the next generation of AI, in my view, is the predictive part of LeCun's model.

What does he mean by predictive? He means you can show an intelligent computational learning system a video of what's going on and ask: what's going to happen next? If you've got a video stream, you don't have to label what happens next; what happens next is in the video. You don't need the labels. By contrast, part of the secret sauce of today's chat systems, and Altman and Brockman and the others at OpenAI freely admit this, is reinforcement learning with human feedback: they have a lot of individuals responding to versions of a generated output and saying, I like the second answer, not the first answer. That's one of the things that generated the sense of humanness people feel when dealing with these chat technologies. But the notion that we could hand-label every possible corner of the trillions and trillions of elements of knowledge doesn't scale. And here's what fascinates me about LeCun's model: if you think about how an infant learns about the world around it, the mother isn't constantly labeling this and that.
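[Editor's note: here is a minimal sketch of that "predict what happens next" idea. It is an editor's illustration, not LeCun's actual JEPA architecture (which makes predictions in a learned embedding space); the toy falling-object data and the linear predictor are assumptions. The point it demonstrates is that the training target is simply the next observation, with no human labels anywhere.]

```python
# Self-supervised predictive learning in miniature: the "label" for each
# observation is just whatever the world does next.
import numpy as np

DT, G = 0.1, -9.8  # time step and gravity for a toy falling object

# Generate unlabeled observations: (height, velocity) of a dropped ball.
states = [np.array([100.0, 0.0])]
for _ in range(50):
    h, v = states[-1]
    states.append(np.array([h + v * DT, v + G * DT]))
states = np.stack(states)

# Self-supervision: inputs are states[t], targets are states[t + 1].
X, Y = states[:-1], states[1:]

# Fit a linear next-state predictor by least squares (a stand-in for a
# neural world model). No human labeled anything.
X_aug = np.hstack([X, np.ones((len(X), 1))])  # add a bias column
W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)

# The learned model has internalized a bit of "physics": a released
# object keeps falling, faster and faster.
test = np.array([50.0, -5.0, 1.0])
print(test @ W)  # approximately [49.5, -5.98]
```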
The infant knows that if it drops something, it falls to the ground, and it begins to understand the physics of its environment by touching and interacting. The argument is that we can build a JEPA-style model that would have the kind of world knowledge that's currently missing. Think of Searle's Chinese Room argument, about how an intelligent system can translate Chinese without any idea of what it has really said: AI can sound right without really understanding the logic and the world on which the language rests. That's why the word "mimicking" is used so freely, and the developers use it themselves; these transformer technologies are mimicking language they've heard. It sounds like a human, but it doesn't reflect real understanding. My view is that the JEPA model could permit that understanding.

So I think the next challenge is the interface between real working humans and the capacity of these technologies to better understand the environment and offer us advice about it, because we systematically misperceive our environment. If you say evolutionary intelligence, what's the key idea? The key idea is that these technologies can be compensatory: they can compensate for the relatively well understood limitations and built-in biases of the human cognitive system. What was beneficial for survival when we had clubs and maybe stones as resources in the grasslands and jungles may not be what serves us well in the modern environment. Think about Kahneman and Tversky's work, formalized as prospect theory, on the half dozen, maybe a dozen, known systematic biases in how we represent risk and future outcomes; biases that probably aided survival in hunting-and-gathering days are clearly not beneficial for our survival now. Kahneman got the Nobel Prize in economics because economists realized that understanding human biases in representing risk and reward, the systematic errors in reading our environment, is something that might benefit from correction, and I'm arguing these technologies can provide it.

How would it work practically? Here my central concept, and I've got an entire chapter on this in the book, is convergence. Computers used to be big tube-based machines; the Aiken computer, one of the first computers in history, was just across the walkway here, an air-conditioned, room-sized machine. Then of course came mainframes, then smaller individual computers and laptops. Right now we're at smartphones and smartwatches, and the convergence keeps moving. I'm arguing that ultimately, despite some very shaky history with Google Glass, glasses and smart contact lenses and other wearables will provide information to the individual confronted with a real-world situation. I think the ultimate convergence is wearables, and defining what is and is not a computer will become an almost impossible task as intelligence migrates from a distant box into the immediate environment of the individual.

Let me expand on this. When you walk into a room, note that evolution gave you the capacity to hear and to see, but evolution didn't give you the capacity to receive and interpret radio waves. Well, it turns out that if you go into a room carrying a smartphone and a few other devices, you're probably communicating with the world over that other range of the electromagnetic spectrum.
My notion is that increasingly, when we come into a room or an environment, we will communicate our existence through a pre-arranged electronic identity, not just a visual one, and we'll look back at the crudeness of relying on facial identification. The key to making this not an invasion of our privacy is the individual's own control over how they represent themselves through their electronic envelope. Think about the clothes you choose to wear, the demeanor you choose behaviorally, the language you choose to use, each varying across different kinds of environments; each of those is a way in which you present yourself. My proposal is that your electronic envelope will be under your control, and you can present as much or as little, or head toward the avatar side of things, as you choose when you enter an environment electronically.

The key to making this work is a set of rules of engagement. I'll give you the example I think is the strongest and perhaps most promising, something I had a chance to talk about this morning: what I call intelligent privacy. Think about this, quoting Yuval Harari: how many people have taken the most valuable personal information they've got and given it away in order to watch free funny cat videos? I've done a couple of calculations. The one that's easiest to remember is that your personal information as a denizen of the online world, if you add up all of the advertisements targeted at you and divide by the number of active users, is worth about $1,000 a year. That's the value of all the ads sold to get access to your attention. And I say: let's take that $1,000, split it, $500 for you and $500 for the other companies, and give you the capacity to choose (a small sketch of the arrangement follows below). You can say: look, I'm into privacy, I don't want my $500, I don't want anybody to know anything; so set up a system where I operate on an anonymous basis, I do my transactions, and afterwards all the information is erased. Okay: you don't get your $500, and they don't get the advertising benefit. Or, if I like chocolate and I buy size nine sneakers and I don't mind anybody knowing that, then those advertisements go out, they sell the sneakers, the $1,000 is spent, you contractually get $500, and the other companies get the rest, which motivates their participation in intelligent privacy. And you've got ultimate control. When you don't care, when you're happy for the chocolate companies to know you're available and would like information on the latest chocolate confection, you'd be delighted to be contacted by them.

Now, people talk about one particular disadvantage of the next generation of AI and other technologies: the atrophy of human capacities through dependence on technology. My view is that this, again, is under our control. For those inclined to defer to others to make all their decisions, fine, but atrophy is not a necessary or natural outcome of using technologies that strengthen our capacities. It's like saying I'd rather walk than drive: when it's convenient, you do that. And I talk about how we used to be dependent on horses, and now, when we go to the stables for a nice trail ride, we take a car to the parking lot next to them. We don't use horses for transport anymore; we've got a new way of doing things.

Will human beings take good advice if it's offered to them?
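[Editor's note: as a thought experiment, here is a toy sketch of the intelligent-privacy bargain described above. The $1,000 valuation and the 50/50 split come from the talk; the PrivacyEnvelope class, the category names, and the per-category accounting are illustrative assumptions, not a system the speaker described.]

```python
# Toy sketch of "intelligent privacy": the user, not the platform,
# decides which data categories are shareable, and receives half of
# the ad value generated from what they choose to share.
from dataclasses import dataclass, field

ANNUAL_AD_VALUE = 1_000.00  # talk's estimate of one user's yearly ad value
USER_SHARE = 0.5            # contractual 50/50 split proposed in the talk

@dataclass
class PrivacyEnvelope:
    # Hypothetical data categories; each consent flag is set by the user.
    consents: dict = field(default_factory=dict)

    def payout(self, categories_wanted: list[str]) -> float:
        """Pay the user only for the categories they opted to share."""
        shared = [c for c in categories_wanted if self.consents.get(c, False)]
        per_category = ANNUAL_AD_VALUE / max(len(categories_wanted), 1)
        return round(USER_SHARE * per_category * len(shared), 2)

me = PrivacyEnvelope(consents={"chocolate": True, "sneaker_size": True,
                               "location": False, "health": False})
# Advertisers want four categories; only two are consented, so the user
# earns half the value of those two and stays dark on the rest.
print(me.payout(["chocolate", "sneaker_size", "location", "health"]))  # 250.0
```

Opting everything out yields $0 and full anonymity; opting everything in yields the full $500. The design point is that the default sits with the individual, not the advertiser.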
I pause on that question for emphasis, because I see it as the ultimate challenge that I and others are working on: finding a way. One way of putting it: we've got some pretty good HCI, the acronym for human-computer interaction, or human-computer interface. I'm saying the next challenge will be coming up with an HCI for AI, in which we find a way to formulate advice so that humans will evaluate it, and when it's good advice, they'll take it.

My model for this is two-fold, and it involves cars. When your proximity sensor beeps and says you're getting too close to the car in front of you, you generally respond; you've come to know that you're going to hit that car if you don't pay attention to that beep. Most people have found it in their interest to respond to a red light by stopping. Not all, but those who scream through the red light and pay no attention do so at their peril. And sometimes Waze gives us advice: Waze runs its algorithm and says, if you take these 15 left and right turns on these back roads, I can save you two minutes, and you say, I know it's not worth 15 left and right turns to save two minutes, thank you very much, Waze, I'll just go the longer way. You begin to negotiate over when you will take the advice of this optimization. What hasn't yet happened is that you haven't yet had the chance to inform Waze that, for 15 left and right turns, you don't want to save two minutes, which you could easily program into new versions of Waze. Right now we're at the stage where you have to learn when you will and will not take the advice from your car advisor.

I said I would end with my primary concern about the future of maladaptive or malicious AI, and it's not the technology. It's the control of that technology. I think those in government interested in protecting their power, and those in industry interested in protecting their profit margins, have every incentive to distort the design of these systems, in such a way that our challenge is not in regulating mathematical algorithms; it's in regulating the social and legal and normative environment in which those algorithms are deployed and put to work. So the challenge is ours. And maybe this is a little more optimistic than your mood might allow on this rainy day in Cambridge, but AI can literally be the savior of those humans who, relying on instincts that were useful when we did battle with clubs, would rely on those same instincts when we have nuclear weapons, and shouldn't. The capacity of these intelligent systems to remind us of the long-term costs of aggression, violence, and maladaptive behavior can literally contribute to making us smarter. Thank you very much.

All right, let's turn this into a discussion. I'm hoping some of my optimism provoked you, so we'll take the grumpiest first.

Going back to the ecology, for context: humans were really stable for a very, very long time as a species, hundreds of thousands of years, and arguably going back to bacteria. It's really the development of writing that destabilized us, right? Because then we get ledgers, economies, cities start growing.
There's an alternative view: it's not that the technology is dangerous, but that it's a destabilizing force creating a positive feedback loop, one that's ruining the environment and leaving us with global warming. It feels a little hard to frame that destabilizing feedback as a force for good rather than as something to be controlled and balanced. And the assumption that we won't want to let go of the next technology feels over-optimistic. Does that make sense?

Okay. My argument is that perhaps the technology provides an opportunity for destabilization. It's up to us to find the norms, to do the programming, and to make the corrections when we see that a particular set of algorithms is leading to racist or hateful speech. There was a piece yesterday in the Wall Street Journal saying that Bing is intentionally boring, or maybe it was Bard that was intentionally boring; the point is that already, within a couple of weeks of some of these releases, we've had humans in the loop doing reinforcement learning to generate correctives for the more provocative elements of GPT-3.5. It's not the technology, it's the use of the technology. So let's talk about the rules of the road, the rules of engagement, the legal environment. Yes, please.

I want to ask about (and I want everybody to know he's smiling; I'm not sure what you're about to say, but at least he's smiling, which is a good start; this is the second grumpiest guy) the personal aspect of what you said, the model, which I completely share: it's worth a thousand dollars, take $500 for the chip and the contact lens, and the other $500 for the others. My question is how we get from here to there. The contact lens and the chip are pretty clear now; we know what they're going to look like. But can we as a species, from your perspective on the evolutionary process, tolerate the AI being proprietary, for profit, or otherwise regulated through the political means that you yourself don't trust? That's the grumpy part, if you want to look behind my smile. In other words, does all of this have to be open source, in the way that education and literacy are open, or medicine is open? Isn't that just as essential as the chip and the contact-lens interface?

Okay, your question generates a set of possible responses. One is the issue of transparency: do we understand enough about how these models work to respond to them? The other is the technical term open source, meaning the actual source code is available; and I think you ultimately mean it in the sense of education being generally open, not just transparent. Part of my reaction, and I'm smiling too, is that I am stunned by the extent to which companies like Meta and Google and Microsoft have published technical articles on the underlying math of neural nets so that others can share it; the transformer concept was made public and is used by others. Yann LeCun, the primary lead of AI research for Meta, is publishing the basic math of his JEPA project. So a lot of the underlying math that makes these systems work is being shared for scientific advantage. You'll notice that OpenAI is called open, but they have been less than fully public about which databases they used and about the details of some of their weighting functions. But the underlying logic is clear.
My response, if I'm correctly characterizing your question and comment, is this: can we afford to tolerate a bunch of commercially run, proprietary systems on the way to a good outcome? I think what we're going to see is a competition between more open and accessible systems and proprietary systems, and it will be interesting to see whether some of the literally open-source models prove even more successful than the proprietary ones in some cases. Since I don't think it's possible to outlaw proprietary AI altogether, what we want to see is a really interesting arms race between open and proprietary systems, to see which turn out to be more successful. I'm delighted that the speeches of Altman and Brockman and others at OpenAI still talk about trying to make as much of this public as possible, and when they argue that they're sending out betas that they know will make mistakes, they say: we're putting this out for general use because we want feedback on what we've done wrong. So I think there's a middle ground between proprietary and open source, and I hope some mixture of the two gets us to the goal we want.

Thank you so much. I love that you compared your predictive model, JEPA I think, to child development. My question is this: humans are born, they learn, they grow up, and then they die. Is there a point where we're going to see the death of the AI? If the metaphor is that humans are born and die, will AI die?

My response to that challenging, slightly grumpy question, raising the issue of the inevitability of death, is that humans are implementations of the human genome and of human culture, and the culture proceeds beyond the existence of the individual life. GPT-3.5 will be replaced by GPT-4, and right now they joke that GPT-10 will have 100 trillion parameters, a figure that comes from the notion that our brains, with 100 trillion synapses, can understand and learn about our world through natural learning processes. So the individual models will be built, and the individual AI models will die, but the concept of artificial intelligence will not die. Nick.

Hi, thank you for the talk. I was curious about the two bookends you had. You seem optimistic that world models and prediction, as the basis of common sense, might produce some form of intelligence, maybe not fully analogous to human intelligence but along that path. But then you ended the talk by saying maybe not to worry too much about rogue superintelligent AI systems. I was wondering how you square that circle, because to me those stances seem a little bit in tension.
Okay, I'm going to need to figure out what you were thinking of when you used the word rogue; let me come up with two readings, since we're just riffing here. There is a model called superintelligence, the takeoff scenario, when AI systems start programming themselves. Our friend Max Tegmark from MIT has an extended introduction to his book Life 3.0 where he describes the Omega project: a private company, over a holiday weekend, starts a smart computer that programs itself twice on Saturday, 17 times on Sunday, a trillion times on Monday, and rules the world by Tuesday. That's the opening of his book, about how the computer can take over and have all the agency to do everything. That's a rogue computer, and it's based on the knowledge-explosion notion, the self-programming notion: it goes rogue because it's not listening to you anymore; it's programming itself. Personally, and maybe somebody can explain it to me, I have been struggling with this for two years: I don't understand how, without feedback from an external world, a program can program itself better the next time. It takes time for these systems to evolve, just as it takes time for an infant, who has to spend the first three months of its life working out where the infant stops and the outside world starts; it takes about three months for us to figure that out. So I am not a true believer in the knowledge-explosion model that many of our colleagues hold. In that reading, rogue means the computer says, I'm no longer going to listen to input from the human. I'm saying that could happen, but I think the much more likely thing is a rogue capitalist rather than a rogue machine.

Yeah, thank you so much for the talk, this is really interesting. I'm going to follow up a little on that idea of birth, lifetime, and death. The reason I'm thinking about it is the presentism that is baked into so much of the training sets, the algorithms and how they are run, and even the feedback, all of which exist very much in a present tense. The training sets we've seen have for the most part been drawn from internet materials; we don't see a lot of cultural heritage materials being baked in, and we don't see a lot of non-English-language material, some but not a lot, though I won't go into the geography of it. I'm really thinking about this occupying of the present tense. So when you talk about the human in the loop, I think there is a presentism to that particular gesture that doesn't necessarily consider a generational or multi-generational impact. In some ways, looking back now at the industrial revolution and at our current environmental crisis, how do you square that? How do you move past that embedded presentism in AI?

As you were speaking, I was thinking about the term digital natives, the notion that people who've grown up in a digital world are different human beings. I don't believe in it. I don't think there is such a difference: we are the evolved human cognitive system, and we're hard-wired to do a bunch of things. Digital natives can type with their thumbs a lot better than I can, and they have a set of
socially and culturally reinforced expectations that things should respond faster, but I think we are still the same evolved human cognitive system, and that hasn't changed meaningfully in 400,000 years. So what we've got to do is assume that the basic human phenomena hold. And when you ask whether historical and cultural data could be added to these databases, my hope is that demonstrated diversity of sources generates a better decision system, and that this incentive will lead to a much broader set of inputs. Right now we're not sure where OpenAI got all of its material; eventually I think that information will become more and more known, and competing systems will start to say, use my system, because it's built on a more diverse set of training experiences and is better for it, because diversity of source material has been demonstrated again and again to give a better basis for precision.

I really enjoyed your comment about advice and humans taking advice, but one of the main features of the current state of the technology is that some of these models are very good at providing very bad advice that sounds like very sound advice. Despite literacy levels approaching 90 percent, I wonder whether we're equipping the next generations with enough critical thinking skills to differentiate good from bad advice, or a fake image or video from a real one. I wonder what your thoughts are about the dangers, or the limitations, the education system may create.

I have two things I'd like to say about that, and I'm attracted to drawing historical metaphors. When we invented the automobile, for example the Model T, you had to adjust the spark, and the spark adjustment was at the wheel: you'd run to the front of the car and start cranking, and if the engine caught, you'd run back to the wheel. And of course you had to wear goggles, because of all the dirt; the roads were not made for cars. We're at the Model T stage of ChatGPT. Watson thought Toronto was a U.S. city, and we tend to focus on some of the more blatant mistakes these early systems make. I think they're correctable. As for your final point, with which I normally and enthusiastically agree: teaching humans critical thinking skills turns out to be hard, but it might be enhanced by finding ways to demonstrate the benefit of critical thinking with these models. You know what happened when Kasparov lost to Deep Blue? Kasparov came back the next month and said, I want to show you a demonstration that a human and a computer working together will beat the computer. Hats off to Kasparov for doing that; I love that story. My argument is that with appropriately designed resources, humans can be taught, and can find the benefit of, critical thinking skills.

I'm from the plant biology department on campus. In plants and fungi, we know there are more than a million species, and we often struggle to identify unknown species; we often joke that one day AI will make all these identifications correctly. That's what we biologists often think about. Now, as you know, human beings want to live longer, and longevity is determined by the
structure of the chromosome. I'm not a cell biologist, but cell biologists have been working on this. Do you know whether artificial intelligence is being applied in any way to enhance human longevity?

This is well beyond my expertise. I've read the popular reports that AI is particularly good at modeling chromosome folding, and that quantum computing is particularly good at some of the complexities of folding dynamics. I simply am unable to say whether that is clearly connected to the issue of longevity. But we look forward to the magical results of your research, so that longevity becomes a human option rather than a predetermined, and much too soon, result. Are there questions from our guests online?

There are a few comments online, and I'm going to pick one. There are serious questions about governance, and a question about this project being dead on arrival. (Wait a minute, my project is dead?) I'll just read this one: the project of AGI evolution does indeed seem to be the ultimate challenge when it comes to taking good advice. In a sense, expertise is a form of artificial intelligence in relation to the person who doesn't possess it, and we've seen what's happened in recent years to large numbers of people's ability to evaluate and take advice from such expertise. How can we not simply repeat our past failures to properly evaluate and take good advice when it comes to artificial intelligence?

Okay. You'll see that my term evolutionary intelligence blurs the line between human and machine intelligence, so I was expecting the classic question for people in my field: well, what is intelligence? My definition of intelligence is correctly receiving feedback from your environment so that you can correctly adjust your behavior to meet your goals. If you look at the books I've been citing, Tegmark and Bostrom and Stuart Russell all start out with a definition of intelligence pretty close to that. So intelligence is correctly interpreting the signals from your environment, and a machine can help us do that better, if we want it to. I give an example in the book of somebody arguing with a neighbor with whom they ultimately disagree, a neighbor who is politically on the other side; it's probably a red-blue thing, Trump and non-Trump. The advisor advises the liberal: if you give that advice to your Trumpy neighbor, with all the statistics and all the elite opinions, you're not going to have any success in changing that person's mind, and they are going to be angry and frustrated. The advice is given, and the guy says, no, I'm going to do it anyway, and somehow take pleasure in making my neighbor angry, because I'd rather have the conflict. Humans may well choose to ignore good advice because they find perverse or other pleasures in doing so; many of us will find occasions, for entertainment or something like that, when we choose not to follow it. Hopefully each of those occasions becomes a learning experience. And I keep emphasizing co-pilot, not pilot: in my proposition we're not handing over agency to these technologies; we're taking advice from them when we choose to.

One more question, from Helen: the Royal Society recently published a meta-analysis of genetic evolution versus
cultural co-evolution in humans, with the conclusion that the balance has tipped toward cultural co-evolution over pure genetics. What impact will advanced large language models have on our cultural evolution?

My favorite in this tradition is Tomasello's work, a psychologist writing on the evolution of human capacities, who emphasizes exactly that point: cultural change is much more dramatic than genetic, physical change in humans. The short answer to the question, which was framed in terms of generative AI in particular: I see generative technology as just one part of a much broader set of developments. When Kasparov played Deep Blue, and when Watson went on Jeopardy, we talked about it for a year; each was a big, dramatic, single event. When Altman and others at OpenAI are asked, will this be the world-changing event, is 3.5 the event, is ChatGPT the event, the argument they make, and it's the argument I make, is that this is a continuous development. I don't think any single event is going to be a turning point; rather, among a series of technical and cultural developments, we'll get dramatic change in our culture, hopefully on the plus side.

Thanks. Two more questions. First, I saw that you worked with the OSTP; if you've read their ethical AI guidelines, I'd love to hear any feedback you have, specifically or generally, on their producing that. And the second part: you mentioned your main concern is bad actors getting control of this technology, yet it seems that the people most likely to develop it are those bad actors, right? Governments have the most resources; large corporations have the most resources. I'm not saying the equation is simply large government equals bad actors, but it seems that many of the firms or governments with the capability to produce these kinds of rogue AI systems have a lot of incentives to do so. How do you concretely regulate that environment, if you have any thoughts?

I have a couple of quick one-liners; let me share them with you. If somebody uses the telephone to call somebody else to arrange a crime, you can't sue the telephone company for having facilitated the crime, and I have much the same attitude here. We're seeing a possible change to that: Section 230 of the 1996 Telecommunications Act protects online providers and social media from being held responsible for people who say bad things on their platforms, and that's currently under review at the Supreme Court, so you may see a major change in the way conveyances are held responsible for users who put the conveyance to malicious ends. My one-liner is: when an illegal act is conducted with the facilitation of technology, don't sue the technology, don't blame the technology; when a malicious act takes place, a murder or robbery or something of that form, use the existing legal structure and traditions to prosecute the malicious behavior, and don't shoot the messenger. My notion is that defining and regulating AI is akin to trying to nail jelly to the wall; you can't do it. I don't think the use of mathematics in decision processes for understanding our environment is something you can regulate directly. I am fascinated by the EU AI Act in Europe, and hopefully its adoption in Brazil; we'll
see how that works out. The basic notion there is: if you do something wrong, we're going to fine you, and we'll see if that has an appropriate corrective effect. For getting the best net result, I'm betting on competition among multiple players, so that the better systems end up being successful. And there's a hint that I might be right about this: the Chinese, who have reportedly been putting $30 billion into AI research, haven't been very successful so far with transformers and generative technologies. We'll see whether top-down control of research in that kind of authoritarian environment simply doesn't generate research and technology that are as good, in which case there's a natural environment that lets the good guys win. But that comes back to your equation of bad guys and big governments.

Any thoughts on the OSTP AI ethical guidelines? I'm hopeful; they're sending a hopeful message. Of course, in the American context you don't talk about regulation, you talk about government partnerships with private enterprise; I've been there, I know how to say it. My hope is that there will be some of that, and that we'll experiment. If a government-private partnership succeeds because it competes better and has a better system, that's a win; and if a government-managed system screws up, then hopefully the marketplace will still be able to distinguish which technology is working better.

I guess time's almost up, so one final comment, because I can't resist: one of the things about intelligent privacy is that it depends on how intelligent we are, and the research record actually suggests we're not. So that's a concern I share: at least when it comes to privacy, people are not that smart.

All right, so Yong's final word is: go ahead and give all your information to the surveillance capitalists, watch those funny cat videos, and enjoy. That's my version of what Yong just said. Funny cat videos are good things; I'm not being critical.

All right, I think we're out of time. If there are any further questions, please ask Dr. Neuman afterwards. Thank you, everyone, for coming.