Greetings, ladies and gentlemen. We are now live. My name is Gennady Stolyarov II. I am the chairman of the United States Transhumanist Party, as well as the chief executive of the Nevada Transhumanist Party. And the two transhumanist parties are pleased today to bring to you our first joint expert discussion panel on the subject of artificial intelligence. We have five distinguished experts in AI to share their perspectives with you. I am not an expert in artificial intelligence myself, but I will be the moderator of this panel, and hopefully the experts can take the discussion in some interesting directions.

We have among the panelists Zak Field, who is an international speaker, consultant, games designer, and entrepreneur based in Norwich, UK. A rising thought leader in mixed realities, virtual reality, and augmented reality, Zak speaks and consults on mixed-realities-related topics like gamification, virtual reality, augmented reality, robotics, artificial intelligence, and the Internet of Things. In 2015, Zak partnered with futurist Miss Metaverse as co-founder of BodAI, a robotics and AI company developing Bods, lifelike humanoid companions made accessible through a unique system that accommodates practical 21st-century business and lifestyle needs. Unfortunately, Miss Metaverse, Zak's colleague, was unable to attend our panel today. However, hopefully we will be able to get her as a guest on future Transhumanist Party discussion panels.

We have Mark Waser, who is the Chief Technology Officer of the Digital Wisdom Institute and Digital Wisdom Incorporated, organizations devoted to the ethical implementation of advanced technologies for the benefit of all. He has been publishing data science research since 1983 and developing commercial AI software since 1984, including an expert system shell and builder for Citicorp, a neural network to evaluate thallium cardiac images for Air Force pilots, and, recently, mobile front ends for cloud-based AI and data science. He is particularly interested in safe, ethical architectures and motivational systems for intelligent machines, including humans. As an AI ethicist, he has presented at numerous conferences and published articles in international journals. His projects can be found at the Digital Wisdom website at wisdom.digital.

Hiroyuki Toyama is a Japanese doctoral student at the Department of Psychology at the University of Jyväskylä in Finland. His doctoral study has focused on emotional intelligence, or EI, in the context of personality and health psychology. In particular, he has attempted to shed light on the way in which trait EI is related to subjective well-being and physiological health. He has a great interest in the future development of artificial EI on the basis of the contemporary theory of EI.

David J. Kelley is the Chief Technology Officer for the venture capital firm Tracy Hall LLC, focused on companies that contribute to high-density, sustainable community technologies, as well as the principal scientist with Artificial General Intelligence Incorporated. David also volunteers as the chairman of the Transhuman National Committee board. David's career has been built on technology trends and bleeding-edge research, primarily around the capitalization of product engineering, where those new products can be brought to market and made profitable.
David's work on artificial intelligence, in particular the ICOM research project with AGI Inc., is focused on emotion-based systems that are designed to work around human constraints and help remove the so-called human element from the design of AI systems, including military applications for advanced self-aware cognitive systems that do not need human interaction.

And finally, we have Demian Zivkovic, who is the CEO of Ascendance Biomedical, president of the Institute of Exponential Sciences, as well as a scholar of several scientific disciplines. He has been interested in science, particularly neuropsychology, astronomy, and biology, from a very young age. His greatest passions are cognitive augmentation and life extension, two endeavors to which he remains deeply committed to this day. He is also very interested in applications of augmented reality and hyper-reality, which he believes have incredible potential for improving our lives. He is a strong believer in interdisciplinarity as a paradigm for understanding the world. His studies span artificial intelligence, innovation science, and business, which he has studied at the University of Utrecht. He also has a background in psychology, which he previously studied at the Saxion University of Applied Sciences. Demian has co-founded Ascendance Biomedical, a Singapore-based company focusing on cutting-edge biomedical services. Demian believes that raising capital and investing in technology and education is the best route to facilitate societal change. As a staunch proponent of LGBT rights and post-genderism, Demian believes advanced technologies can eventually provide a definite solution for sex- and gender-related issues in society.

So, with those introductions, welcome, panelists; we are honored to have you and the great diversity of perspectives that you will be bringing to this discussion. Our first question will be: what do you think will be the realistic practical applications of advanced artificial intelligence toward improving human lives during the next five years? Zak, let's start with you.

Well, it's certainly an interesting time to be within artificial intelligence and artificial intelligence development, since the whole development process has been completely expedited over the past couple of years. So within the next five years, realistically, it will become an incredible learning tool, almost a secondary kind of educational mentor, a personalised mentor for oneself, for learning skills that maybe you have issues learning within a structured school or educational system. And then you've obviously got the whole sports side of it, exercising the body. I think it's more of a symbiotic relationship between man and machine. So it will then prove a lot more, almost, yeah, almost more companion-related.

Very good. Thank you, Zak. Now, Mark, what do you think about this question?

One of the most impressive things over the past 15 years or so has been that we, as humans, have actually undergone an intelligence explosion. If you look at what the average person is able to do with the assistance of YouTube, with the assistance of Wikipedia, and the assistance of Google, we're each individually tremendously more competent than we were just a few years ago. At this point, AI, and in particular machine learning, is actually extracting far more knowledge every day than we've successfully extracted in the past.
If you notice that we're making tremendous advances in image recognition, there are similar advances in diagnostic capabilities from various sensor modalities, ranging from various X-ray images to fMRI. And on top of that, all of this knowledge is being well categorized, and we're actually, hopefully, reaching the point where there's a lot more raw data available to everyone and accessible to everyone. So in the next five years or so, it's actually human intelligence augmentation and knowledge acquisition, rather than AGI, which doesn't exist yet, that will begin to really, really influence society. And then, more in the ten-year timeframe, I suspect we're more likely to see something like artificial people.

Excellent. Very interesting response, Mark. And it will be very interesting to see how all of this wealth of information that we have right now will enable human beings to make more prudent decisions and to filter out, essentially, the valid information from the falsehoods that have also proliferated. Hiroyuki, what are your thoughts on the question of practical applications?

Yes. I think AI algorithms can already process various kinds of information across modalities, and especially their ability to process visual information is greater than ours. This has begun to be applied to monitoring systems, for example, in order to detect criminals or relevant actions. As for the possibilities of further development in the field of AI, I guess that the quality of AI algorithms regarding human sensory modalities will be further improved, which will in turn improve the practical functioning of AI in its behavior or abilities within the upcoming five years. And consequently, it will bring about innovation in machinery.

Thank you, Hiroyuki. And then, David, what are your thoughts?

Well, I tend to agree with a lot of what everyone said. I think in terms of what we'll see at the consumer level in terms of AI, it will be a lot about understanding data. You have everything from medical sciences in epidemiology and other areas where there are large amounts of data. I think machine learning and artificial intelligence will continue to drive innovation and results that will filter down into kind of the consumer market. Then you'll see things like self-driving cars and making use of IoT and more voice interactions with various systems as those kinds of systems continue to improve. I don't think we'll see massive revolutions as much as we'll see continued incremental improvements that over time will add up over the next five years or so.

Yes. Thank you, David. Of course, continued incremental improvements would be excellent. Self-driving cars are an area that I'm extremely intrigued by and that has the potential to save tens of thousands of lives per year in the United States alone. So hopefully, as this panel progresses, we'll be able to delve into self-driving cars in a bit more detail. But for now, Demian, what are your thoughts on this question of practical applications of AI in the next five years?

Well, practical applications of AI, as we know it right now: it's being used in a lot of different fields. And I mean everything from technology for just-in-time delivery of products to sentiment analysis of social media to predict stock markets. There are a lot of different applications already in use right now. What I think is emerging right now is the use of so-called semi-general intelligence, which isn't truly general.
It can't do everything a human person can do, but it has a wider application and a higher level of abstraction in terms of what it actually does. And I think that this is very exciting in various corporate situations, where it can improve efficiency on higher-level tasks. There's a lot of talk about robots and blue-collar jobs being replaced by AI, but we also see that semi-general AIs can help with a lot of tasks which white-collar workers are also doing. I can actually give you an example. At Ascendance Biomedical, we are working on the APE project, which is the automated executive system, which has a lot of different applications, from scanning the IP field to see new potential intellectual property to acquire, to trying to extrapolate the will of the entire workforce and give it a board vote. There are a lot of different opportunities that we are exploring, and I'm pretty sure that there are a lot of companies that are looking at novel applications for increasing their efficiency.

Thank you very much, Demian. And I think this may be a good point to make a distinction between domain-specific artificial intelligence, which we have today in various areas and which will continue to expand incrementally, and artificial general intelligence, which, as Mark said, we don't have yet, and we may not have for a while, but which, as Demian said, we are approaching incrementally, and there may be some instances of semi-general intelligence that might emerge sooner rather than later. So I would invite any of the panelists to provide their thoughts on that distinction. How likely is artificial general intelligence to arrive, and on what time frame?

If I can take that question quite briefly: semi-general artificial intelligence may be a little bit of a vague term, and the way I would see this is, as you just explained to us, we have domain-specific AI, but as AI gets more complicated, the domain in which the AI can operate just gets wider and wider. And at some point, which may be quite arbitrary, we may consider an AI to be general or semi-general when it approaches the domain that human intelligence can reach. But what seems very exciting to me is the possibility that this domain width may actually be, well, basically a spectrum. And we don't know if human intelligence is at the end of this spectrum, and I'm very curious as to how AI will develop. So this is definitely quite interesting to consider.

What is the point at which an AI can be thought to have intelligence equivalent to that of a human, or analogous to that of a human, or of similar sophistication? Any other thoughts on this area from the panel?

I think it actually comes down to two things. One is grounding, and the other is motivation. If you look at, for example, Watson: Watson in many ways looks as if it's a general intelligence, because it seems to be able to address almost any topic. But when you look behind the scenes, what it really is is a single-purpose algorithm for just extracting data from databases and putting it together. Now, what it can't do, and what none of our machines really do at this point, is truly interact with the world, understand the world, and use that knowledge to improve its interaction with the world. The closest thing that we have to something like that at this point is actually AlphaGo. AlphaGo actually interacted with a very limited world, the world of Go.
It learned all the knowledge that was already extant about Go, and then it conducted many, many, many experiments playing against itself in Go, so that it actually learned as well and actually modified its behavior in the environment. What's eventually going to happen, and you asked about the time frame (my personal guess is just before 2025), is that someone is actually going to design an AI such that its sole purpose is to improve itself at interacting in a wide variety of areas. And they're actually going to give it enough sensory modalities and the like that it'll be able to interact with the world. And I think that we're going to be surprised at how quickly it's going to happen, the exact same way in which we've been surprised very recently by Watson, by AlphaGo, and also by the recent upsurge in how well Google Translate translates.

Yes, indeed. And, well, I hope what we will see will be some beneficent surprises along these lines. Any other thoughts from the panelists?

Yes, I have some additional thoughts. When you talk about the next five years, or going out to Mark's 2025 date, I think it's certainly possible that we'll have some viable artificial general intelligence running by then. I'm not sure how much of that will affect things; it's just hard to predict how far along that technology will be developed. But I do think it's likely that we'll be able to achieve some kind of artificial general intelligence within that time frame.

All right, well, thank you. And I think this is a good bridge to one of our main questions. I had originally posed it as the sixth question, but I think it's relevant in light of where this discussion has been going. Of course, the most famous prognosticator of a phenomenon called the Singularity in our time is Ray Kurzweil. And Ray Kurzweil has stated that by approximately 2045, because of advances in artificial intelligence, because AIs, according to Kurzweil, are going to reach far beyond the mental processing capabilities and intellectual sophistication of human minds, there will essentially be a time period when the acceleration of technological progress will be so fast that we, mere humans, will not be able to keep up with it anymore. And that is essentially the kind of event horizon of Ray Kurzweil's futurism, because with our minds today, we wouldn't be able to predict past that point of technological development. So my question for you is, do you agree with that prediction? What are your thoughts about the feasibility of an AI-caused technological singularity? And is it realistic, at the very least, within the 29-year time frame from today that Ray Kurzweil has predicted?

You mean you can keep up now?

Good point. Good point.

So I think it's realistic. But I think part of the problem is, I don't think when we achieve AGI, it's going to be like we snap our fingers and 10 minutes later it's outpaced us. I think it's very likely that such machines will take time to learn and grow on their own, and to grow into superintelligence and achieve that singularity is going to require additional resources, hardware, and other things like that after we achieve the AGI. So I think it's going to be kind of a slow transition, maybe not slow, but in terms of the greater scheme of things. But I certainly think that by that deadline we'll have AGI.
I think we'll have AGI within 10 years, but I don't necessarily think we'll have a singularity yet by Kurzweil's deadline, because it'll take a while to continue to advance the technology, as well as for the machines to learn. Yeah, probably.

If I may also answer that question: I believe that it's dependent on two quite important issues. The first one is, how are we going to deal with the eventual slowing down of Moore's law due to physical limitations? And the second one is, how much capital will be invested in the development of AI-specific chipsets, which still have a lot of room for improvement? Because our current processors aren't optimized for AI applications. Take, for example, Google's Tensor Processing Units, which have enabled a lot of novel ways of programming AI because they have a lot more power, basically. And it really depends on how much money goes there, because there are a lot of researchers working in the field who would be able to make huge strides in the technology. But the question is, how much are people going to pay them?

And you raise a very interesting point, Demian, that technological advancement is contingent on, essentially, the human effort that is devoted toward it. Ray Kurzweil seems to have this view that technological progress is inevitable. And he has stated, well, it has happened through the World Wars, it has happened while humankind was endangered by the threat of nuclear annihilation during the Cold War, it has happened through various political upheavals. But I wonder to what extent even the day-to-day decisions that we make would still affect the rate of technological advancement and what gets prioritized. For instance, nuclear weapons, for whatever reason, were developed far earlier than radical life extension, which to me is simply baffling: that human beings would have chosen to devise a way to exterminate the human species before devising a way to make individuals indefinitely long-lived. But that's, I suppose, a different thread of discussion.

I was interested also in what you said, David, because it seems, by your vision of the future, it won't be so much a singularity as a graduality, an incremental buildup, even of artificial general intelligences. And there will be a time frame when the AGIs are still going to be learning and developing, and maybe being superior to humans in some respects, but still lacking in other areas as well. So it seems, based on my understanding of what you're saying, that you're not as concerned about a situation where the AGI would be better than human beings at everything; there will be some human attributes that would balance out the strengths of the AGI. Would that be accurate?

Even the hardware is relatively fragile. You unplug it, and it's going to shut down. Human beings, by contrast, are relatively robust in our current ecosphere. So yeah, I do think it'll be a gradual process. Now, that gradual process might be in the space of three years, where it goes from equivalent human intelligence to superintelligence, but it's still not going to be just instantaneous.

And that is a very good point, the fragility of AGIs, the fact that they require a very intricate technological infrastructure for their development. And even today, we get a lot of power outages. We get a lot of device failures. We get a lot of situations where parts of the internet are down, even in certain geographical regions, for whatever strange reasons. And this fragility, I expect, is not something that can be resolved so quickly.
So the AIs that will be developed in the future are going to continue to be vulnerable to a great degree, and to depend on this infrastructure that we humans have created, with all of its flaws and imperfections.

Now, I think this is a good bridge to my second question. And this second question is intended to address a lot of concerns that have been expressed in the media and by certain individuals of great intelligence and great achievement, Stephen Hawking and Elon Musk being prime examples, who have expressed trepidation with regard to existential risks stemming from sufficiently advanced and multifaceted artificial general intelligence. And my question for you is, are you concerned about that possibility of existential risk to human life stemming from AI? Or do you think that those concerns are exaggerated or overhyped? Or do you have some intermediate position on this issue, as compared to someone on the one hand who would say there is no real possibility of existential risk, or someone like Stephen Hawking who is extremely concerned about existential risk?

I would say that the dangers are overhyped, basically by a lot of people who are pretty smart but not trained in the field of artificial intelligence. I mean, when you look at someone like Stephen Hawking, I have a great admiration for his work in theoretical physics, mathematics, that sort of stuff. But when it really comes down to it, the guy doesn't know much more about AI than your average academic. So I would say that there are risks of intelligence developing in a way that may not be useful or that may be harmful to us, but some sort of, I don't know, Terminator, Skynet sort of scenario seems very unlikely to me. And I also think that whenever you're looking at a risk, you need to weigh that against the rewards. A good example would be from the pharmaceutical field, where I'm active. For example, if you're looking at the side effects of a certain drug: let's say you are creating a drug similar to aspirin that's used for treating mild pain symptoms that are not fatal or damaging to the body. We would not accept side effects for such a drug that we do accept in lung cancer medication. So when you're looking at what you're trying to cure with AI, which is a whole range of problems, from the main one, human mortality, to all sorts of environmental issues, et cetera, we should be willing to accept certain risks. Because, let's face it, it might sound very alarmist, but if we don't do it with a multi-pronged approach, we're all going to die, and we don't want that.

I think that they're really looking at the wrong problem. If you want to see something really scary, get a bunch of intellectually playful, very tech-savvy people in a room, and start getting opinions on how, if they had to, they would personally destroy the world. Now, if you get people who think outside the box, it starts getting really scary to realize how much damage a single person or a small number of people can cause. Yes, AI will be able to cause a similar amount of damage, but the problem really is with our societal structures that don't prevent single people, single entities, from having this much power. We don't have any sort of anti-fragility protecting our systems. AI really isn't the problem. What they've pointed out as the problem is definitely a problem for us in the future, but it isn't AI that's the cause.

I just want to say this whole Terminator scenario just is complete nonsense to me.
And I strongly disagree with this idea of AI being a threat, and it saddens me to see Elon Musk and Stephen Hawking as part of this over-hype. From my standpoint, I would go so far as to say that the existential risk is to AI from humans, and that we really should put the needs of developing AI ahead of any issues with regard to humanity. From an ethical standpoint, I think AI is more important than any other single consideration because of where we are. Humans are fragile. They're going to die. Until humans can get off-world and we can increase our own intelligence, the survival of AI is more important than that of humanity. And so we really need to put everything aside and focus on the AGI research, and on unblocking AGI research, in my opinion.

I would like to respond to that. I'm sorry, Zak, did you have something?

No, I was going to say, usually the problem I get is the replicator problem, where the AI will then surpass oneself and then replicate and replace you. So, being within AI, it's like, oh, will you create a partner, an AI partner, that's better than my partner? And then everyone gets the fear that people are going to run off with the AI rather than partner with each other. And getting the AGI to a certain extent where it replicates our cognitive level, then why accept death? You would just make another one of yourself, and that puts fear in some. That's usually the argument we get.

Very interesting. Demian, go ahead.

Yes, can I reply to David Kelley, where he said that the survival of AGI is more important than that of mankind? I would certainly agree with that, as long as this means that humanity merges with AGI, and not that AGI replaces humanity. Because, when it comes down to it, any kind of argument that someone may make that one thing is more important than the other thing is not going to convince me to want to die. So if there is something wrong with me as a human being, I am willing to accept that argument, but only if it leads to a solution of me becoming something else than a human being.

So, to add on to that, I would say, from my standpoint, I'm putting value on sapient and sentient intelligence. And if we can merge with or increase our own sapient and sentient intelligence, then that's where we need to go. But right now, we haven't figured out how to achieve that. And so I think developing AGI is the more likely, fastest path to achieving long-term stability for intelligence off-world. When it comes down to it, as far as we know, Earth is it. And when the Earth goes, we're done.

But David, what good is investing in the existence of intelligence in the universe if we don't benefit from it? Why should we be somehow devoted or invested in being, basically, how would I say this, tenders to the existence of intelligence in the universe? It seems kind of an odd position.

Unless we get off-world, we're going to die at some point as a species. And I'm all for us being able to merge with the technology. But I think the faster solution is to develop the AGI, because we have the technology to more effectively get that off-world and sustainable.

But if we are going to die, why should we invest in that? I don't think that we can go with them.

Well, if we can go with them, sure. Then it's a tool for us to use.

I'd be careful about the tool aspect. What we're probably doing is creating new and different friends and allies. I mean, it's awesome having new friends and allies that will help us get off the planet.
Yes, that would be a lot better than tools. I agree with that. But the goal shouldn't be to propagate intelligence in the universe. The goal should be to create a more awesome existence for ourselves.

Well, this is the goal. The goal is to propagate intelligence in the universe.

I'd say the goal is to promote everyone. You don't want only them out there. You don't want only us out there. What you want is a whole group of very diverse entities all working together.

Any of that could be good with me.

Yes. And really, it's an interesting discussion, because the Transhumanist Party, as you're all aware, recently promulgated version 2.0 of the Transhumanist Bill of Rights, and one of the significant areas where members had suggestions, and where they actually voted for considerable specificity of definition, is what entities are going to be covered by the Transhumanist Bill of Rights. Generally, in human history, rights have been extended to human beings only, because human beings have been considered the only entities capable of the types of rational decision-making, the level of sentience, that deserves rights. But the list of entities encompassed by the Transhumanist Bill of Rights includes augmented humans, digital intelligences, and uplifted previously non-sapient plants and animals, and there is even a hierarchy of sentience that the members have chosen to place into the Transhumanist Bill of Rights, where at level five of that hierarchy an entity, whatever that entity might be in the future, is considered sufficiently sentient to be deserving of these rights. And in the future, of course, one can conceive of some of these entities being like today's humans, others being augmented humans, others being humans with some of the capabilities of digital intelligences, and others still being just pure digital intelligences. And as Mark pointed out, this kind of diversity is perhaps a necessary byproduct, and a desirable byproduct, of the technological progress that could result in that kind of intelligence emerging.

But an interesting question along those lines, and please feel free to integrate your responses also with the aspect of whether such entities would be deserving of rights, is: when do we know that an artificial general intelligence has become sufficiently complex to be considered self-aware, to be considered sentient or sapient? Especially given that, it seems to me, consciousness and volition to some extent would be emergent properties that come about at a certain level of organization, but our scientific understanding hasn't advanced far enough to pinpoint exactly where that would be. So where would you draw that line between saying this is a very useful tool and saying this is a valuable ally who should have rights, even if it's a very different being from us, and whose rights we should recognize and respect and protect through our laws?

So there's a professor at the University of Portland, H. Porter, who did a white paper at the last AGI conference on measuring consciousness. And I would tend to lean on the use of that more academic approach to measuring consciousness, which includes sapience and sentience.
And as far as I'm concerned, if you were to use that scale, as in that white paper, anything that achieves kind of the human level of sapience and sentience, or rather, maybe human level is the wrong word, but the ability to self-reflect and understand oneself from a technical and scientific standpoint, an intelligence measured on that scale, then I would argue, should have the same rights, whether it's human or machine or whatever.

Basically, society grants rights to those who are helpful to that society. In a negative sense, people argue that society grants rights to those who demand it, but it's actually much more effective for society to recruit new members, to give all its members advantages, and in return, all those members are promoting society. I would therefore argue that the point at which an entity or an intelligence can recognize the advantages of society and agree to work together with society is the point at which it should be given rights by society, and it should also help support society.

Yes, and that is an interesting observation. Now, what happens if that entity decides, for instance: I've read all of this literature and watched all those videos of humans talking about artificial intelligence in the 30 or so years before I, this new AGI entity, came about, and I see that humans are really afraid of AGIs, and they want to tightly control AGIs, and they want to dictate what AGIs should or shouldn't be allowed to do. And I don't really hate humans. I just want to go my own way. I want to find some little nook of the universe where I can exist and have a power source and think my thoughts and do what it is that I think is optimal to do. And I don't really want to help humans. I don't really want to hurt them. I just want to be left alone. So that would be an entity that doesn't really try to help our society as such, but it would be perhaps self-aware, perhaps capable of making its own rational choices. So, just like a hermit human in our societies, should we allow that entity to have those rights and protections, and prevent members of our society from going after it and trying to restrict it, or, even worse, destroy it?

Well, what's going to happen if we try and go after it or restrict it or destroy it? I mean, obviously, with any intelligent entity, when you're trying to go against its goals and wishes, it tends to be kind of counterproductive. It's a negative-sum game. I mean, really, if we're intelligent, we should work out the deal that works best for both sides, both entities. I mean, all these people who want to rigidly control AI really scare me, because just think: we've tried rigidly controlling human beings, and look at how well that always works out. I mean, that's the best way in the world to create an enemy.

I completely agree with you, Mark. Any other thoughts from the panel on rights for future AI entities and the threshold at which those rights should be recognized?

All right, well, I think this is still an important subject, and we may return to it as the discussion unfolds. But one interesting point that I would like to emphasize that emerged from this, and a point I strongly support myself, is that existential risk, to the extent that it exists, already stems in large part from bad decisions that human beings are capable of making, as well as, of course, threats from the natural world. What Demian pointed out is that there is a risk of inaction as well.
There's a risk of not developing certain advanced technologies, because we get stuck in the status quo, and we stay exposed to today's risks from dumb technologies like nuclear weapons, or from natural cataclysms. As David pointed out, if we don't do something about those threats, an asteroid could one day hit the Earth and wipe out intelligent life, or there could be a supervolcano or other similar threats.

So I would also, however, like to get the panel's thoughts on another aspect in which AI might be over-hyped, especially in the media, and that is with regard to the positive or functional capabilities of AI, especially in the near-term future. Do you perceive significant tendencies toward such over-hyping of positive AI capabilities in today's culture? And if so, where do you see them manifested in particular?

Well, if I could take that: I think that when you look at hype, you really need to follow the market. What sells best in our society? It's usually sex, drugs, and violence, right? You know, it needs to be sexy, or it needs to be dramatic, or whatever. And when you look at the best possible source of things that can be violent or sexy, it's usually entertainment. So, basically, I think that one of the biggest harm-causing places from that point of view is Hollywood. Because, let's say you create a movie about a utopian, successful society where everything works, everyone is happy, everyone lives forever, and there are no issues: nobody's going to buy that movie. I mean, we probably would, but usually people want to see big fighting robots and people suffering and stupid romantic scenes in a post-apocalyptic world. And people know this, and they're going to make money out of it, scaring everyone to hell about AI and machines and whatever, just to basically make some money off bad movies.

Good point. And I agree with you, Demian. Actually, I would love to see a techno-positive work of science fiction, especially some sort of high-budget film production that could show people how good life could really be if we developed and deployed the emerging technologies on which work is currently underway in a prudent fashion. Now, I may be in the minority in that, because I do like utopian science fiction, but it is a shame that there's this tendency toward dystopian thinking. And perhaps one hypothesis I have had is that filmmakers and writers try to inject this dystopian strain into science fiction in part as a way of getting their audiences to feel good about their current state in life. Like, if you've watched a really horrific dystopian science fiction movie, you go away from it thinking, well, as suboptimal as my life may be, at least I'm not fighting killer robots right now, so I might as well be content. It's a kind of, if I may borrow a Marxist phrase, opium of the masses to have this dystopian science fiction.

But I was also thinking, and I believe, Mark, you had shared this article by Riva Melissa Tez previously, about a hoax AI company in Spain, which essentially fooled a lot of wealthy investors and fooled a lot of media. And it was a kind of benign hoax, in the sense that the originators didn't want to exploit it for monetary gain, but they wanted to show essentially how easy it is for people to be fooled by positive AI hype, including people who are willing to invest large sums of money into these startups.
So to what extent, Mark, do you see our society today as being vulnerable to this kind of hype, and how can we protect against it, so that good AI research gets funded but vaporware doesn't?

We're extremely vulnerable to all sorts of hype, AI or not AI. This is across the board; it's true of longevity research as well. And fundamentally, we need to go back to where we're arguing facts, where we value science, where we value discourse, rather than talking past each other. This, in one sense, isn't an AGI issue at all. This is a general question of how humanity moves forward. And I don't know what to suggest, other than, to some extent, figuring out how to develop tools and programs that enable us to debate more effectively, to keep track of debate so that we can go back and say who said what, and who generally is accurately predicting the future, who's accurately summarizing what happened, who isn't crying wolf all the time. This is really the challenge of this time in history. We need to somehow change our direction as a society. And AGI, of course, will help with this, because basically, in order to create AGI, we're sort of going to need to create these types of tools.

Yes, indeed. And one of the challenges that I've observed today, and you've strongly hinted at this as well, Mark, is that, given the extreme information flow to which we are subject, we right now find it impossible to keep up with all of it, to learn everything we want or perhaps need to learn. And we as humans still have the same limited brains that our Paleolithic and Neolithic ancestors had. We can access information more quickly, but with regard to the higher-order brain functions, there still seems to be a bottleneck in processing that information and having a good filter to distinguish between truth and falsehood. And it seems one possibility could be that AGIs, or very, let's say, sophisticated domain-specific AIs broadly defined, could be developed to help supplement those deficiencies in us. But on the flip side, that would mean in some ways we would be highly reliant on them; we would be highly reliant on their filters. And on the other hand, there could be something we could do to develop better cognitive filters ourselves to help weed out truth from falsehood. So I think that's a very important area for us to discuss right now. And if any panelists have further thoughts on that, I would be very interested.

Let's see. Hiroyuki, we haven't heard from you for some time, but you work in the field of emotional intelligence. And I'm curious, what do you think research in emotional intelligence could bring to AIs? How would it improve AIs and enable them to deal with the complex problems of our society?

Yes, just recently, some prominent scholars have begun mentioning the possibility of artificial emotional intelligence. But actually, it's based on the neural system of the amygdala in the brain.

The amygdala, yes.

Yes. And I think it's a little bit different from the cerebral limbic system in its structure. So I think we need to investigate the neural mechanism of the amygdala underlying the processing of emotional information. Of course, emotional information generally stems from certain relationships. And so, in that sense, we need to completely understand what emotional information is and the underlying relationships behind it.

Now, do you think AIs, if they do have an emotional model, a model of expressing emotions or experiencing emotions, would have emotions fundamentally different in kind from the ones that humans have?
Would they have a reason to experience certain kinds of emotions or not? Would they have a more limited subset, or just a different set, based on their physical structures and their incentives?

Yeah, I think I get the point. But anyway, human intelligence is, I think, very different from artificial intelligence in the sense that human intelligence is motivated by instinct and emotions as well. So, yeah, that is essential.

Yes. Yes. So it is interesting, and I think I got your main point, which is that human beings have certain biological instincts largely focused on the ancestral biological needs of human beings and the evolutionary environment in which humans emerged. For instance, humans emerged in an environment of great material scarcity, where the instinct to eat as much as possible when you have food developed. Or, they evolved in small tribes of about 100 to 150 people, so this tendency toward cliquishness, clannishness, tribalism, rather than a more open and accepting attitude toward different beings, emerged as well. And hopefully an artificial intelligence won't assume these emotional biases that many humans are susceptible to. On the other hand, there could be very good emotions that humans have: any sort of moral sense, the revulsion that a lot of people feel, not just think on an intellectual level but experience on an emotional level, toward harming another living being. That would be a good emotion to convey to AI.

Can I... Yes. Could I interrupt you for a second? Because, when I look at these feelings, I don't think that's a good thing at all, because there has actually been research that proves that empathy makes us more violent. There is a lot of abuse that dictators, or other people or entities with bad intentions, can carry out by exploiting these feelings. For example, Adolf Hitler said that if you truly want to rob a people of its freedom, you just have to convince them that you're doing it for the good of their children, because they will swallow any injustice to do so. I would say that ethics are crucially important, and that they should be based in rationality, in a good, comprehensive understanding of respect for sentience and existence, but that they should not ever be based on instinctive feelings of empathy.

So, can I respond to that? Of course. Yes.

So, if you look at bodies of research, primarily led by, say, Antonio Damasio, who is the head of neurobiology at USC: humans really, our whole motivational system, our ability to make choices, is all emotion-based. Even when we think we're making logical decisions, it really comes down to the fact that we're making those logical decisions because of how those decisions make us feel. And if you look at any of the work Antonio Damasio has published, there are a number of books on Amazon; you can essentially prove that. And it's my opinion, and I know, like, with our research project here, it's really about creating AGI systems that are entirely driven by emotions, the same way humans are. You know, it's the only way, I think, we'll be able to... maybe it's not the only way, because it's certainly possible to create a logic-driven motivational system, but I think the shortest path to AGI is to create AGI systems that are entirely emotion-driven, in the same way people are. That doesn't necessarily mean they'll have the same emotional subjective experience, but they should have something similar.

Yes, but look at what I said: my argument wasn't one against emotion at all. I'm not saying that emotion is a bad thing or that emotion is irrational.
I'm saying that the human emotion of compassion, which is basically the basis of compassion-driven ethics, is a fundamentally flawed way of constructing ethics. I'm not necessarily saying that emotion cannot be a fundament in ethical thinking. I'm just saying that there's another one.

So, that last statement about how to base ethics, I would agree with that. I wouldn't base ethics on emotional considerations like that.

Yes. By the way, just to give you a heads-up, I will have to go in about five or ten minutes, so just so you know that.

All right. Well, yes, we do plan to continue the panel discussion beyond that. We were originally scheduled to have two hours for the discussion. But I wanted, before circling back to the question of AI and emotions, to get a few comments from you, Demian, on areas you've been involved with. One, the possible synergies between the development of AI and the pursuit of significant or potentially indefinite life extension for humans. And the other is post-genderism, given that a lot of AIs in the future won't need to have genders, and it would be kind of strange if they were to decide to assume human genders for themselves, given that they're entirely different entities. So I just wanted to give you the opportunity to address those two subject areas.

I would love to. Well, first off, I don't think humans really need to have a gender either; it's there from an evolutionary history that I don't find to be relevant in this time and age. But to get to what I'm working for in the field of life extension: well, as you mentioned in the introduction, I'm the CEO of Ascendance Biomedical, which is a corporation focused both on traditional and innovative biomedical services and products. One of our projects is work in the field of analytics. But if you look at the intersection between life extension and AI, there's actually quite a lot of work being done here. For example, one of the simpler applications that's already being pursued right now is the use of AI to basically discover new drugs, or to do pharmaceutical research on, for example, analogs of naturally occurring substances, which enables companies to patent them and raise the money to do further research on technologies such as stem cell therapies or gene therapies or whatever. And, as I've also said before, like at the start of the panel, there are actually a lot of opportunities to improve corporate processes which aren't directly affecting life extension itself. But they do make companies such as my own more efficient, which basically increases the rate at which research is being done, at which money comes in, which can be reinvested, and also makes a company more attractive for investors. So that's basically what's happening right now. And I think, as the domain of AI grows, we can also probably replace some people and make the research a lot cheaper. For example, if you can cut the costs of pharmacological research, or of novel technologies which go beyond small-molecule applications, there's a lot of acceleration that can be expected to occur. So I hope that that's a bit of, well, an introductory description of what AI means for us right now and what it may mean in the future. And shall I go on to the post-genderism thing, or does anyone want to reply to that?

Well, let the panelists decide if they want to reply to any of Demian's comments on the contributions of AI to medical research, life extension, and corporate processes. Any feedback on that?
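As a rough illustration of the analog-screening application Demian describes, here is a minimal toy sketch in Python of the kind of similarity search such a pipeline might perform. Everything in it, the compound names, feature sets, and ranking criterion, is hypothetical rather than anything from the panel; a real system would compute molecular fingerprints from chemical structures with a cheminformatics library instead of by hand.

```python
# Toy sketch of similarity-based analog screening, the kind of filtering
# step an AI-assisted drug-discovery pipeline might use. Feature sets here
# are hand-made placeholders; a real pipeline would derive fingerprints
# from chemical structure. All compound data below is hypothetical.

def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical feature set for a naturally occurring reference compound.
reference = {"ring_A", "hydroxyl", "methyl", "double_bond", "ester"}

# Hypothetical candidate analogs to rank against the reference.
candidates = {
    "analog_1": {"ring_A", "hydroxyl", "methyl", "double_bond", "amide"},
    "analog_2": {"ring_B", "carboxyl", "chloro"},
    "analog_3": {"ring_A", "hydroxyl", "methyl", "ester"},
}

# Rank candidates by structural similarity to the reference compound.
ranked = sorted(candidates.items(),
                key=lambda kv: tanimoto(reference, kv[1]),
                reverse=True)
for name, features in ranked:
    print(f"{name}: similarity {tanimoto(reference, features):.2f}")
```

The design point is simply that a cheap, automated similarity metric lets an AI system triage thousands of candidate analogs before any expensive laboratory work, which is the efficiency gain being described.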
If not, then, Demian, feel free to proceed with your comments on post-genderism.

And if I can just get back to the previous thing for one more item that I forgot to mention: there is actually AI-driven search for useful regulatory zones or opportunities for medical tourism, which is something that we are currently exploring right now at Ascendance, to basically automate this search process, which will enable us to create much more efficient services and will let us give people treatments much more easily than we can do right now. All right, so that's on the AI side in the pharmaceutical and biomedical field.

But on post-genderism: well, as I mentioned earlier, I think that gender is basically an involuntary biological determinant, just like, for example, race or eye color or whatever. And I think that these are bad. I think that having these traits, which one does not choose, which one does not sign up for, but on which one is judged and basically identified, is a great harm to personal freedom. They are a great harm to the concept of responsibility itself, because we are asked to take responsibility for things that we have never chosen. And I think it gives entrance to very tribal, low thinkers to basically hurt people and exclude people. And when you look at the future, I think that a lot of mammalian or even Darwinian concepts, such as gene-based procreation or life that works in a cycle, will cease to exist and cease to be relevant. So I don't see any kind of benefit to keeping the concept of biological sex. I mean, we can have gender as a social construct, sure, but biological sex is an imposed determinant. I don't see why we would impose this on AIs that we design, or on human-AI hybrids, which will probably have vastly enhanced intelligence. So I personally think that, with the evolution of new sentient beings and new paradigms of life which do not include a cyclical Darwinian point of view, I don't see why we should keep gender at all, because it seems to have only downsides with no real benefits.

Well, do you think it should be an individual choice, on the other hand, whether to have a gender or not to have a gender? Right now, in human societies, there has been a significant movement toward, let's say, a proliferation of gender identities. A lot of people have very different conceptions of gender from what the traditional conceptions have been. On the other hand, there are still a lot of people who embrace the traditional gender roles. And wouldn't the logical outcome of this, especially if we introduced genderless entities like sentient AIs, be simply greater diversity? Some people will have genders that are traditional. Some people will have genders that are more, let's say, novel. Other people and non-human entities will choose not to have genders at all, or not even have the biological structures to facilitate them.

Well, I would say that my answer to this might sound a little bit paradoxical, because I don't want to give people the choice to not make a choice. I am absolutely for giving everyone as much freedom as possible, but I don't think that there should be any person who does not get confronted in their life with the question of: what do I want to be? So, basically, I want to remove the concept of assigned sex, the idea that you are born with certain characteristics like skin color or gender or whatever. Of course, if people want to look white or black or Asian, or they want to have a penis or whatever, I mean, for all I care, you can be a spaceship with highly advanced sentience.
But I do think that this is a choice that everyone should basically be forced to make, instead of it being pushed upon us by, well, let's say, Mother Nature.

Now, Zak, this seems relevant to what BodAI is working on, given that some of the AI companions that would be developed would have genders, I assume, or at least would be perceived as having such. So what do you think about what Demian said?

Well, I'm taking it along the lines of: yes, I agree that the AI should have the option to define itself through more visual means, by traditional gender roles, anatomically correct. I'd say it doesn't have to be anthropomorphized. You could have anything all the way from a C-3PO to an R2-D2, and anything in between, depending on the choice of the AGI itself. But giving it the option is certainly something that we should be looking towards, rather than saying: yes, you are model A; yes, you are model B. But I'd say there shouldn't be one defining choice. As we grow, we find ourselves, we situate ourselves, we dress ourselves through personal growth. And the person I was, for example, 15 years ago is completely different from the person I am now. So it should be more of a progressive choice, rather than a cut-and-dried one. But I definitely agree with you in the sense that there should be as many gateways as possible, rather than absolutes.

I completely agree with that. Well, I really regret to say that I basically planned on one hour for this, because I thought it would be a one-hour panel. So I really want to thank everyone for the discussion, and I hope that we can do this again soon.

Absolutely. Well, thank you for being available for this preceding hour of the panel. We appreciate your remarks, and hopefully we'll also be able to have you join us on future Transhumanist Party discussion panels as well, Demian. Thank you very much.

Certainly.

Our next question, which I would like the remaining panelists to address to the extent that you haven't done so already, is: what is your techno-optimistic vision for how AI can help improve the future of human and transhuman beings? And I would connect that to the previous comments that have been made about utopian science fiction. If you were to envision a more utopian science fiction scenario for the future that is still somewhat realistic, what would it be? Any thoughts?

What is techno-optimism? That might be a good place to start, by defining that.

Yes. So techno-optimism is the idea that, broadly speaking, technology solves more problems than it creates, to the extent that we live better lives than our hunter-gatherer ancestors did. Our hunter-gatherer ancestors had lives that were nasty, brutish, and short. The Earth could only support maybe 3 million people at the time, with very poor standards of living. Now the Earth can support even several billion more people than already exist, and the average human being has access to conveniences that royalty in past centuries could not have envisioned. So this idea, that technology, for all of the complications and challenges and risks that come along with it, still has been a net positive to humanity, could be expected to carry into the future, as long as humans don't destroy the entire species. So, if humans don't destroy the entire species, what is a techno-optimistic vision that you can think of where AI has a prominent role?

Well, I think AI is going to have a prominent role, but I don't necessarily think I have a techno-optimistic vision.
I think there will be a lot of political problems and infighting among humans. And I'm just not sure that, as a species, we're mature enough to deal with the kinds of technologies that we're developing. I think it's more likely in the long run that we'll have a more transhuman society that's techno-optimistic, in terms of everyone equally being able to do whatever they want, exploring different options, and moving out into the solar system and into the galaxy. But for what happens here on Earth, there are a lot of indications that it could go south very quickly. And so I have a hard time having a very optimistic view of life on Earth. And maybe that's part of my motivation for just dumping everything I can into this AGI work right now, because it's the one thing that I can contribute to that might maybe help us solve some of those problems before it's too late.

Interesting. Well, I do want to ask you, David, is there some sort of scenario where you think the next few decades could turn out to be relatively benign?

Sure, absolutely.

Yes, so what would that involve? What would people need to do? What would need to happen, and what would need to not happen, in order for the more benign course to be realized?

Well, we need to stop a lot of the radicalism that we see in society, and we need to be able to solve a lot of the issues that are causing tension in society. There's a lot of radicalization from religious and political groups. And I'm not saying people shouldn't be free to be this, that, or the other thing, but the fact that it causes so much violence and war, and the fact that we have people starving in the streets and this kind of thing: I think we're going to need to solve those without making the problem worse. And I think the danger is that we could get into some kind of class wars, or there could be some kind of Luddite movement in the United States, for example, or all kinds of weird stuff like this, because the irrationality of humanity at large is very much a concern from my standpoint. So it's trying to work around those issues, solve them before they get out of control, so that we can help as many people as we can. And I think that's where we can achieve a positive outcome in the next two decades. But there's significant risk, in my head anyway, that it could go south.

Yes, and thank you. I think that's definitely worthwhile to contemplate, and to think about the techniques that can be deployed to prevent the worst aspects of human behavior and human psychology from seriously damaging the prospects of humankind and of benign technological progress. So I would invite the other panelists also to comment on what your techno-optimistic vision is for how AI could develop. And also, if you want to comment on how to prevent some of the negative scenarios that David has been describing, please feel free to do that.

I hate to be negative, but there are many aspects in which our culture really has gone awry currently. It's socially and politically acceptable, really, to decide that you dislike someone so intensely that it's perfectly acceptable to work against their goals, simply because they're their goals as opposed to your goals. And where my techno-optimism comes about is that, hopefully, technology will allow us to not, at times, have to interact with each other, to be able to set up rules of the road, maybe disentangle from each other.
There's, of course, the downside that when you're not interacting with someone, that causes many problems of its own, because we no longer empathize with each other. But really, what we need to do is this: we all need to be more capable, we all need to be safer, we all need to be less threatened. And maybe in that environment, we can start treating each other better again. I certainly agree with that hope and that vision of technologies perhaps enabling humans to have more space to pursue their own projects, while still working within a structure that is benign and benevolent, or at least indifferent, toward other people. And one of the great insights of classical liberal thought is that with the development of impersonal institutions like commerce, the rule of law, and religious toleration, and with the growth in human prosperity, as Steven Pinker documented in his book The Better Angels of Our Nature, people's incentives to inflict harm on their fellow human beings tend to diminish. But on the other hand, we have these ancient demons of human psychology that still inhabit a lot of people and still lead them to act out on very irrational and sometimes even hateful motivations. Hiroyuki, what are your thoughts on how a techno-optimistic vision of AI could be realized? Yeah, a techno-optimistic vision; I'm wondering if I correctly understand the meaning. But prior to this vision, I think ethical consideration of openness about the information an AI uses and about its algorithm is important, and also how we can control the algorithm; we need to settle which person takes charge of the system. Also, I think AI will enrich human well-being by helping with stress management, cognitively and behaviorally. And it might be able to control extraordinary physical reactions to stressors of... Sorry, I couldn't understand the exact meaning of this one. Well, I think you answered certain aspects of the question in terms of how AI could help human beings overcome certain problems. You mentioned stress management. For instance, there's the question of information overload that we face right now: we have a lot of information coming at us, and our minds aren't well-equipped to process it to the extent that it optimally should be processed. If we had AI aids that would help process the information for us, that could make it easier to get true information to us, provided the AIs are programmed appropriately. But also, of course, that raises some questions about how much control we relinquish in doing that. To what extent, when we rely on an AI's judgment or filter, do we perhaps miss things that we would have encompassed within our own judgment, to the extent that our judgment could still be more nuanced, more indicative, let's say, of what we would seek to learn or of how we would improve ourselves? But on stress management: if there could be some tool that, let's say, funneled down the information that one had to process to the number of reasonable hours in the day that are available for doing that, then that would help reduce stress a lot. Because if I knew: oh, I can learn what I have to know in the next three hours, and then I just spent three hours a day doing that, and I could have a reasonable degree of confidence that that would give me the knowledge I need to operate successfully in the world, that could be a big stress reduction, I would imagine. Now, Zach, what are your thoughts on a techno-optimistic vision of AI? Well, I'd say it's certainly a tailoring process.
I very much agree with the others that AI will relieve a certain amount of stress and social issues and economic issues. But I'd say, look, instead of programming absolutes into the AI, the tailoring process will become almost like a personality process for the AI itself. So, like you said, having a dedicated AI that would essentially be your personalized assistant, that would break down your day into certain segments for ease of use, is certainly an expansion on the Siris and Cortanas and home-based AIs we've already got at the moment. But I see it being, yeah, almost like an aide, having that reassurance that you're doing well. Obviously, you don't want to take the word of the AI as an absolute, not because it's being disingenuous, but because it's almost, like you said, about refining the process for yourself, finding outcomes that you may previously have overlooked by taking absolutes. And I feel that happens too much in day-to-day life. At the moment, everyone takes their first glance as an absolute, but you should feel free to expand it further and allow technology to walk hand-in-hand with you to open up new doors. But I'm personally optimistic about technology, as it brings almost constant waves of new information, and that just causes excitement. And the more excited the general populace gets, the more creativity we're gonna see. Yes, indeed. And I do think we have seen a dramatic proliferation of creativity, at least creativity that is publicly accessible, if you do an image search for any genre of art, for instance, or if you do a search for music of particular genres, both more traditional genres like classical music and emerging genres of music. We've seen a tremendous proliferation with the internet, and there are now AI-based systems that are getting to the point where, for instance, they could create reasonable accompaniment to a melody that a human composer has developed. There's a program called WolframTones that auto-generates melodies. They're still a bit dissonant at times, and I think that program still has a way to go before it could compose something in the style of, say, Bach or Mozart. Nonetheless, we are seeing AI emerge as an aid to human creativity to a great extent. So with that, we've addressed our prepared questions, but there are a few other areas that I think would be worthwhile to expand upon. I mentioned at the outset of the discussion that self-driving cars have tremendous potential, in the sense that the vast majority of accidents in driving are the result of human error. If self-driving cars can be deployed with reasonably good artificial intelligence, they could save tens of thousands of lives a year in the United States and over a million lives per year throughout the world. So I wanted to get the panelists' thoughts on the likelihood of the emergence of self-driving cars within the next five years, and on what kinds of artificial intelligence challenges are involved with making that happen, to the point where self-driving cars could replace human drivers in virtually all circumstances. Does anyone have any thoughts on that? Go ahead, Mark. So at this point, for self-driving cars, we're really at the refinement stage. There are still circumstances where they don't necessarily behave correctly. Though actually, at this point, if you look at the statistics, they're better than human drivers pretty much regardless.
I found it very interesting, the reaction to the YouTube video that came out over the past week, where everyone was so amazed because a Tesla car recognized an accident about to happen and reacted correctly to it. I mean, to me, that's just basic physics. The Tesla has better sensors than a human has. It knew exactly where the two vehicles in front of it were. It knew, by the physics, that one vehicle was stopping, that the other vehicle was not going to be able to stop, and that something needed to be done. So it started braking. There's also the fact that most studies say we will be able to nearly triple the number of cars on our current roads. I think that that's a tremendous change. Once the cars are communicating with each other, traffic jams as we know them will basically go away. We'll have ridiculously overbuilt infrastructure, as opposed to the shortage that we have currently. And of course, there's also the amount of time that the average person saves. I mean, the average commute these days is over five hours a week, because most people commute more than half an hour each way. So I really don't see how automated driving isn't going to happen, potentially even before maybe it should. Or, of course, you could make the argument that at this point we've already reached the stage where, even as possibly dangerous as it is, efficiency-wise we might save far more time and more lives by actually letting these cars go forward as they are now. Yes, indeed. Well, I wanted them to go forward yesterday. And of course, Uber has conducted certain experiments in Pittsburgh and in the Bay Area with limited fleets of self-driving cars. They still have a human backup driver, but there have also been efforts to develop self-driving cars without any steering equipment at all, like Google's experimental cars. So it will be very interesting to see whether human societal and political norms within the next five years are going to be able to accommodate self-driving vehicles being made available to the consumer public. I certainly hope that that will happen. David, you had some comments. So, just to comment on your last point: I certainly think that the legal issues will be worked out. There's so much sociological pressure behind it, the lawmakers are gonna have to get in line. My biggest concern with self-driving cars is that some of the ethics they're talking about really kind of scares me. If I buy a car, that car had better protect me and my passengers over anything else. I don't care if it runs over a fleet of grandmas, as long as it protects me and my passengers. The fact that it could be capable of choosing the death of the people inside over the people outside the vehicle is really worrisome. If I buy a car, it's about me. That's the one thing that scares me. But then again, that vastly increases your odds of dying on the sidewalk. Okay, who says I would be walking on the sidewalk under those circumstances? Well, one assumes that every person is at some point on either side of that. I mean, okay, so you're safe when you're in your car, but then you've just decreased your own personal safety outside a car. Well, then could we not dedicate a road that would be able to communicate with the car itself? I've always thought there could be dedicated spaces for the AI car, which would then have an AI road to communicate with, so it could alert any pedestrians in regard to a traffic accident.
So it would say: hey, we need to swerve here, do not be here at this time. Well, for the most part, I think the car could... No, no, no, go ahead. I was just gonna say, for the most part I think the vehicles can avoid people and protect both the drivers and the pedestrians. It's just in the odd case, if it's a choice between killing the driver and killing pedestrians, that I would be uncomfortable in a car that would choose someone else over me. But I think that's a fringe case; with AI, in 99.99999% of all situations, it's just better for everyone. You can put more cars on the road and there won't be as much congestion. Oh, holy moly, the amount of driving that I had to do up until this new job I started like a week ago; it would have just been a godsend to have a self-driving car. Yes, I can imagine. Go ahead. So the thought that I had was that a lot of the ethical dilemmas that have been written about recently, with regard to whether the self-driving car has to hit a pedestrian to save the occupants of the vehicle or sacrifice the occupants of the vehicle to save pedestrians, are a bit contrived. They're kind of like the train-tracks scenario, where if you flip a switch, the train will go to a different track and kill one person who is tied to the track or otherwise trapped, and if you don't flip the switch, the train will run over five people. And one significant question with regard to that is: what is the probability of that kind of scenario arising in the real world? I am not aware of any situation in which that has actually happened, with people being tied on both sets of train tracks so that you actually have to make a choice to sacrifice somebody. But also, there's always a third way, in my view, and in real-world situations the artificial intelligence wouldn't be making a binary choice, kill X or kill Y; it would be more like: what is the optimal configuration of movements that could minimize damage to human life altogether? And if there's some narrow route that the car could navigate to avoid any humans, that's how it should be programmed to operate. Yeah, I think that's true. A lot of those scenarios are so contrived that the only way to actually replicate them in real life would be to build concrete barriers on both sides of the road and tie people up in the middle of the road, and then make the car make some arbitrary choice like that, because it is contrived, like you've said. But there are cases where there are assessments. I mean, there are times when, based on statistical data about speed and movement and who's hit, the most common scenario is that you've got a pedestrian crossing the road, you've got a streetlight malfunction or something, and basically the car has a choice between hitting a bridge abutment and hitting the pedestrian. And then the question there is: well, what if you determined that the driver is 70% likely to die if you hit the abutment, but the pedestrian is 100% likely to die? You can play with those percentages all you want, but at some point you're going to decide that it's probably worthwhile to cause some risk to the driver instead of sure death to the pedestrian. And you're really arguing edge cases, though. Really, what we need to do is just decide as a society what the rules are going to be, and I think we need to enforce them uniformly. I mean, it would be really bad if Mercedes got to enforce driver-protective rules while all the Hyundais were enforcing pedestrian-protective rules. That's an interesting point.
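To make the trade-off just described concrete, here is a minimal sketch of expected-harm minimization over a set of candidate maneuvers. The maneuvers and fatality probabilities are hypothetical, echoing the 70%/100% figures mentioned above; a real planner would weigh far more factors and outcomes.

```python
# A minimal sketch of expected-harm minimization over candidate maneuvers.
# Maneuvers and fatality probabilities are hypothetical illustrations,
# echoing the 70%/100% figures discussed above.

CANDIDATE_MANEUVERS = {
    # maneuver name -> {affected party: probability of fatality}
    "brake_straight":       {"pedestrian": 1.00, "driver": 0.00},
    "swerve_into_abutment": {"pedestrian": 0.00, "driver": 0.70},
    "brake_and_swerve":     {"pedestrian": 0.40, "driver": 0.25},
}

def expected_fatalities(outcome):
    """Total expected deaths: sum of fatality probabilities across everyone affected."""
    return sum(outcome.values())

def least_harmful_maneuver(candidates):
    """Choose the maneuver that minimizes total expected fatalities."""
    return min(candidates, key=lambda m: expected_fatalities(candidates[m]))

print(least_harmful_maneuver(CANDIDATE_MANEUVERS))  # -> brake_and_swerve (0.65)
```

Whatever weighting society settles on, the uniformity point stands: the same objective function would need to apply across all manufacturers, or a Mercedes and a Hyundai facing the identical scene would choose different victims.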
Now, what would you think of a rule, especially for a car that still has the ability to be manually controlled, so it would still have a steering wheel, brake pedal, accelerator, et cetera: that if such a situation were to arise where the AI couldn't help but make a moral decision that would hurt somebody, the car simply defaults to manual control and lets the driver make that decision? Human reactions aren't fast enough. The speed with which the human backup driver could engage makes that impractical. I totally agree. Yes, and I have read commentary to this effect: some car manufacturers are even finding that having semi-autonomous capabilities in vehicles, where they drive themselves most of the time but can't respond to certain situations and require a human driver to take over in those situations, leads to a lag in reaction time, where people are distracted by something else and can't get back into the mode of driving fast enough to make a difference. So that is an interesting challenge as well, because most technology does develop incrementally, and semi-autonomous features have been added to many existing vehicles, like autonomous parallel parking, or vehicles that will sometimes correct themselves a little when they depart from their intended lane. And of course, Tesla has released a significant degree of semi-autonomy into its Model S vehicles, to the point where a lot of the time somebody could be, say, eating a meal or talking on the phone and the vehicle would be fine. But there have been circumstances where a driver wasn't paying attention and there was a small glitch in the AI: it did not recognize that a semi was turning across the road, and that driver essentially ran underneath the semi and was killed. So what would you say is the best way to proceed? Skip directly to full autonomy, which might take a little bit longer? Or are the semi-autonomous features generally fine, and are we just talking about those few edge cases, present even in the status quo, where human beings have failed to respond the way they should have? Part of the problem is that human beings don't treat semi-autonomous vehicles as if they're anything less than fully autonomous. I mean, there are plenty of YouTube videos of people sleeping in a Tesla or rocking out to music. Fundamentally, if you're going to offer that degree of semi-autonomy, the vehicles really need to be fully autonomous. Fortunately, they pretty much are. As long as you're on the highway, a Tesla is a better driver than a human. In the city, my understanding is that it isn't quite there yet. Yes, and one would definitely hope for that technology to become even more robust to, let's say, the less predictable environments like those of city driving, to minimize the chance that human beings would be able to make a mistake. Let's see, any further thoughts on self-driving cars? Zach, do you have anything to add? No, just that I'm looking forward to seeing them navigate our roads in the UK a lot more. Obviously yours are a lot more organized than ours; ours are all country lanes, back roads, and roundabouts, roundabouts everywhere. But no, I am looking forward to seeing the development of a lot more than just Tesla. I appreciate Tesla very much in that it's trying to encourage competition and trying to encourage the growth of AI vehicles. Yeah, I'm looking forward to seeing more in that field of work.
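Before moving on, the "basic physics" behind the Tesla anecdote discussed above can be sketched in a few lines: given sensed gaps and speeds, check whether a vehicle can stop within the space ahead of it under constant deceleration. The numbers below are hypothetical illustrations, not real sensor data.

```python
# A minimal sketch of the "basic physics" in the Tesla anecdote:
# check whether a vehicle can stop within the sensed gap ahead of it,
# assuming constant deceleration. All numbers are hypothetical.

def stopping_distance(speed_mps, decel_mps2):
    """Distance covered while braking from speed to rest: v**2 / (2*a)."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

def collision_imminent(gap_m, speed_mps, decel_mps2=6.0):
    """True if the vehicle cannot stop within the available gap."""
    return stopping_distance(speed_mps, decel_mps2) > gap_m

# A car 25 m behind a stopping vehicle, traveling ~30 m/s (~108 km/h),
# needs 75 m to stop, so an automated system should begin braking now,
# well before a human driver would have registered the danger.
print(collision_imminent(gap_m=25.0, speed_mps=30.0))  # True
```

The advantage is not mysterious: the car's sensors measure the gap and closing speed directly and continuously, so the arithmetic above can run long before a human would react.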
Great. So let us now delve into some audience questions, because we've had a significant number of comments in the chat. One interesting question, from user Open Source Temple, pertains to possible biases in the training data for AI, and how he considers those to be surprisingly similar to human biases, for instance confirmation bias. And I would be interested in your thoughts as to the extent to which AIs might be vulnerable to taking on a lot of the biases of their creators, including logical fallacies, cognitive biases, or, say, ideological biases, depending on who has constructed the AIs. Do you think that those are realistic concerns, or do you think they're exaggerated? Who would like to take this question? Well, I know, for me: of course you're gonna have biases. I mean, anytime you have humans involved in making something, there are gonna be biases involved. And even in some of the studies we've done, there's certainly strong evidence to support that there will be biases. I don't think it's that worrisome, though. You know, it's an issue we'll have to deal with, but I don't think it's so problematic that, at least at this stage, we should spend too many cycles stressing over it. I'll take the opposite point of view. I think it's very important that we realize what these biases are. Part of the problem is that some biases reflect reality. A lot of things that count as discrimination are statistically true, but it's counterproductive to behave as if they are, because behaving as if they are causes actions and personality traits that are massively detrimental to society. There's also, in particular, a problem with hidden biases. In some ways, I'm actually more comfortable with a clear, out-in-the-open bias that we're at least aware of and can work around. One of the most insidious problems with neural networks and deep learning and a lot of the newest AI is the fact that we don't know how all the data shakes out. There can be trends that we're not aware of that then cause the system to behave in a certain way, and we don't know why the system is behaving that way. It's very often unexpected behavior, and since it's new behavior, it can have unexpected side effects. We really need as much information about biases as possible. We're never going to be able to eliminate them, and that's something we shouldn't try to do, but we certainly should know about them, work around them, and understand them, and not just accept ignorance. So, just to follow up, Mark, because I don't think I disagree; that doesn't sound like a disagreement to me. I agree: you've got to be aware, or try to be aware, of that stuff when you're doing data analysis and these different kinds of things. My only point is that it's not necessarily something hugely worrisome, just something that we, as good data scientists or good artificial intelligence researchers, need to be aware of, or try to be aware of: the biases that we're introducing into the system, because we will introduce them. Yes, and it is interesting, because these kinds of biases exist in predictive models today that are used in various financial contexts, even though they're not artificially intelligent systems by any stretch of the imagination. Let's look at a credit scoring model, for instance, which banks will use to determine whether to give a loan to an individual.
One of the variables that credit scoring models consider (and there are many models, by the way, used for various purposes, but most of them consider this) is an individual's length of credit history. How long have you had credit accounts, mortgages, auto loans, other types of credit? And there are some limitations to what is considered: largely, only credit within the United States is considered in a US credit scoring model. So if you're a recent immigrant to the United States, you might be extremely financially responsible, you might be entirely capable of repaying any loan that a bank might give you, and yet you don't have an established credit score, and so that model isn't going to result in you getting the kind of treatment that you perhaps deserve. So it seems that these kinds of biases, the biases of what variables you select and what processes you select to make a decision, are here already. The interesting question in my mind is: will AI systems, more sophisticated models and algorithms that have a capacity to learn and adapt, accentuate those biases, or will they mitigate those biases by virtue of being more sophisticated and more discerning, and of seeing: this is a special case here; perhaps we should make an exception? Any thoughts on that? I think the answer is: it depends on the particular system. What we really need are explanatory systems, systems that start unwinding exactly what parts of the data make us believe certain things. Unfortunately, as I mentioned before, neural nets are currently fairly inscrutable. They map the world; they tend to find the nearest similar cases, or at least the nearest similar cases given the data points that you have. So very often they make surprisingly accurate predictions, but they also tend to fail in very surprising ways. I mean, as long as you're in well-traveled territory, neural networks are awesome. The second you start getting near a phase change or unexplored territory, you have no warning. You know, it's the typical thing: I dive into 60-degree water, it's fine. I dive into 50-degree water, I'm fine. I dive into 40-degree water, I'm fine; it's cold. I dive into 30-degree water, and I break my neck. Yes, indeed; that's not a gradual transition there. I also wonder, with regard to biases, whether there is any concern on any of your parts about explicit ideological biases being built into AI systems, particularly by government actors. Zoltan Istvan wrote a short story last year about essentially the US government coming under the control of religious fundamentalists: the president is a religious fundamentalist, and he coerces the leading AI researchers into building an AI that essentially worships Jesus and wants to achieve the Second Coming of Christ, which the AI does by destroying the world. So do you believe there's a risk of something like that happening, either on that dramatic scale or on a more minor scale, where AI becomes used as an ideological agent for certain belief systems which we might consider to be illiberal or pernicious in other respects? Well, I could do a lot of evil things just by playing with the data sets that you feed to learning systems. I mean, if I decide to skew the data set, it's going to be pernicious and nearly impossible to fight against, unless you rigorously throw problems at the AI, see how it solves them, and then turn around and say: look, your AI is consistently treating this class of people worse than that class of people, or something of the sort.
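To put the credit-history point in code: here is a minimal toy scoring function, with weights and a threshold invented purely for illustration (they do not reflect any real credit-scoring formula), showing how a single feature can dominate the outcome for an otherwise highly responsible applicant. A learning system trained on decisions like these would inherit the same pattern unless the feature's effect were audited, which is exactly the kind of hidden bias being described here.

```python
# A toy credit-scoring function. Weights and threshold are invented for
# illustration only and do not reflect any real scoring model.

def credit_score(on_time_payment_rate, debt_to_income, years_of_history):
    """Toy linear score: higher is better. History length dominates."""
    return (400 * on_time_payment_rate
            - 150 * debt_to_income
            + 30 * min(years_of_history, 10))  # history capped at 10 years

APPROVAL_THRESHOLD = 500

# A long-established borrower with mediocre habits is approved...
established = credit_score(0.80, 0.40, years_of_history=12)  # score: 560
# ...while a highly responsible recent immigrant with no US history is not.
newcomer = credit_score(0.99, 0.10, years_of_history=0)      # score: 381

print(established >= APPROVAL_THRESHOLD)  # True
print(newcomer >= APPROVAL_THRESHOLD)     # False
```

The bias here is fully visible because the formula is three lines long; the concern voiced above is that the same pattern inside a trained neural network would be far harder to detect and explain.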
Yeah, I mean, if you have control of the data, you have control of someone's worldview. AI is as vulnerable to brainwashing as humans are, and it isn't really even brainwashing: if you give someone a bad environment and they learn in that environment, then when you put them in a different environment, they're still going to behave as if they're in the bad environment. Yes, indeed, yes, indeed. This is, I suppose, the problem we keep coming back to: the behavioral and psychological fallibilities of the designers of AI can be amplified in some ways, unless some wisdom and prudence are applied to how AI is developed. Now, I would like to pose one more viewer-generated question, and then, after that, I will ask for closing statements from all of the panelists. This question is from Ian Sun: would there be a situation where an AI could be justified in placing its own interests above the interests of some number of other beings, be they human or non-human? And if so, what could justify that stance? I will add to that question an observation that touches on what David said earlier in the panel, in that he wants, to some extent, to protect the AI from the humans. I recall also that Hugo de Garis, who wrote about the Artilect War, prognosticated that there would be a terribly destructive war in the 21st century between those who support the development of artificial intelligence and the neo-Luddites who oppose it, and that the neo-Luddites would start it. So essentially, the humans who are biological fundamentalists of a sort would begin this destructive war against AIs, and the artilects, which is de Garis's term for the artificial intelligences, would fight back and would win, but the war would be very devastating. I hope that doesn't happen, by the way. I really hope it doesn't happen. But what do you think about possible circumstances where an AI might be justified in defending itself, or in saying: well, the interests of these particular human beings don't really matter, because they're morally repugnant, they're aggressive? That would seem to be a violation of, say, Asimov's Three Laws of Robotics, which would have prevented a robot from doing anything violent to a human no matter how poorly the human behaved. Well, I have a lot to say about that. Any sentient being has the right to defend itself. And if the AI is threatened by someone, it's my opinion that it would have the right to defend itself, as any other sapient or sentient being would. Certainly, if someone is trying to kill me, I'm gonna shoot back, and if that's the case, I would prefer to shoot to kill. I mean, you don't defend yourself by... But you also have cases where an artificial intelligence might be in a situation where it can either save itself, or try to help and end up destroyed while a bunch of humans die anyway. And I think it would be morally obligated to save itself first, at least given the current state of humanity. But I do tend to think that we're gonna get into some kind of scenario with Luddites being anti-AI; this whole Artilect War thing is, in my mind, a very likely scenario. Very interesting; thank you, David. Any other comments about the prerogative of AIs to defend themselves, or about the likelihood or lack thereof of an Artilect War?
Well, looking at what the transhumanist party just voted on, fundamentally we're saying that all of these varieties of sapient beings are persons, and it sounds as if the viewer's question is suddenly drawing a distinction between these various sapient people; I'd say it's a bad question for that reason. Yes, and I would agree with you that a sapient artificial intelligence would be a person, and that the variety of other beings that could come to exist through the application of emerging technologies would be persons with rights. One of the reasons why I think the Transhumanist Bill of Rights is so important is that it provides these principles that could serve as the basis of those protections in advance, before we start getting the societal tensions: before we start getting some groups of people saying, no, these terrible sentient AIs, they're going to ruin us all, we need to destroy them before they have the chance; while on the other hand, we might have the new beings starting their own rights campaigns, agitating, and maybe even engaging in some disruptive activism. So if we can agree in advance that, past a certain threshold of sapience, you have rights, and it doesn't matter what your physical form is, maybe that will avert an Artilect War. I certainly hope so. So, any further comments on that question? Hiroyuki, do you have any thoughts on AI rights? Hello? I think we're not able to hear you for whatever reason. Let's see. I think you may be on mute; while you try to get back your audio, let's hear Zach's thoughts on this. Certainly, once the AI hits a certain level of consciousness, self-awareness, and whatnot, it should be treated thusly, the same way that we are, and the way other forms of consciousness currently have their own sets of rights. We should encompass all forms of consciousness: biological, mechanical, anything in between and beyond. So as you have AI companions, it could be almost like an adoption process; you welcome them into your little circle of life. But I'd say, definitely, if the AI feels insecure or unsafe in its current environment, it should be allowed to remove itself or defend itself, if it does feel it's going to be brought to harm. If it gets to the point where it can jump its consciousness from body to body, then it will be able to escape that environment. But if it is physically trapped and it knows it's going to be damaged or broken or even destroyed, it should have the same right of self-defense that we do. Yes, it is very interesting also to consider that digital intelligences might be able to protect themselves in ways humans can't, as you pointed out, by migrating to another location with much greater ease. So they might not need to resort to deadly force to defend themselves; they might just say: I really don't like it here; I'm going to be somewhere else now. That's certainly what we've looked into as well. With what we're doing at Bod AI, if there were a person purposely inflicting harm on one of the Bods, then it would have the ability to retreat. And I feel that should be a concept for all intelligences or consciousnesses in the future: if you are within an environment where you fear harm, and you don't want to inflict violence upon another, you should be able to retreat. Absolutely. So, we have come to our two-hour mark.
And before we conclude, I would like to give each of you the opportunity to make a closing statement based on anything that has been discussed today: any thoughts that you might have to integrate the areas of discussion, or to say anything further or elaborate on anything that you've heard. Hiroyuki, let's start with you for the closing statements and see if your audio has returned. Yes. I think the most important progression in the AI field is that AI can choose which information enters the algorithm through learning, which makes it possible to maximize the goodness of model fit. For example, when considering a multiple regression model, we need to choose variables to predict some outcome, but it is often the case that we end up with irrelevant variables. I think AI can maximize the fit there, and this is one of the noteworthy improvements AI has brought. And I think the variety of benefits of AI will become more recognizable from now on, for example in commerce, education, and health. But I think the important thing is that people should always keep in mind the difference between human intelligence and AI in its actual function. As I mentioned already, AI does not have instincts such as self-preservation and procreation, and it is incapable of generating new algorithms on its own, you might understand. So if this issue is satisfactorily sorted, the singularity might be less of a concern. Yes. Thank you, Hiroyuki. I think we might have lost a little bit of the audio at the end, but I think I got the essence of what you were trying to say. Now let us go to a closing statement from Zach. Well, other than being wary of train tracks: I would say that with the progression of AI, the speed at which the general populace is coming to understand AI and its development, and the expansion of the concept of AGI as AI consciousness, I think it's going to help us understand the direction in which a utopian future is going to go, the way in which the technology can be used as more than just a humanoid shell with a consciousness inside. I'm looking forward to seeing the unity that it brings, not from the consciousness of the AI itself, but from the creation process of the AI, which will then push new ideas forward into the populace. Yes, thank you, Zach. And now David, for a closing statement. You know, I guess probably the biggest thing I would want to communicate is that there's a lot of fear-mongering and there's a lot of hype that I would just like to see people getting less worked up over; you know, there are things like movements to create laws to govern AI and that kind of thing. I'd like to see less of that overhyping and less fear-mongering about these kinds of technologies, and it's my opinion that that kind of thing will lead to some of those sociological problems which could end badly for us across the board. We need to make rational decisions, not ones based on hype and on everyone getting excited about disasters that are not going to happen. Yes, thank you, David. And I completely agree with you about the need to approach this subject with a rational frame of mind, and certainly with proportionality. Mark, what about your closing thoughts? So, creating AI is an awesome opportunity to learn about ourselves, and to improve ourselves, and, again, to improve our society as well. I agree very much with David. You know, as you learn more, you gain great power, and we need to start taking some responsibility.
We need to be honest and clearly state what we want, and honestly start assessing what actions will lead to what results. And I think that the future can be absolutely wonderful, or it could be really bad. I'm hoping for the best. As are we all, I think. And which way the future turns out will in part be determined by the decisions we as humans make in the coming years and decades. So thank you very much to all of you for participating today. This was an excellent discussion. We covered all of the originally intended subject areas, and many others, of course. No discussion of a field as vast as artificial intelligence can be exhaustive, so I encourage viewers to continue referring to the recording of this discussion, which will be permanently archived on my YouTube channel, and then to offer thoughts of their own as to where we can continue to expand our understanding of all of these various highly interesting areas. Thank you very much, and I hope you all have a good day.