Kia ora koutou, nau mai, haere mai. Ko Erika Austin tōku ingoa. So let's start with the karakia. Whāia te mātauranga kia mārama, kia whai take ngā mahi katoa. Tū māia, tū kaha. Aroha atu, aroha mai, tātou i a tātou katoa. Greetings to you all. My name is Erika Austin, community activator at EHF. Welcome to the EHF Live Session with fellows Julia Bossmann, Lane Rettig, and Christopher deCharms, on a panel moderated by Jade Tang-Taylor, as they discuss the need for regulation for security, protecting privacy, and more great use cases of AI. This is the third session of the Generative AI series. We'll be having a 45-minute conversation with the panel, followed by Q&A and a discussion with you all. First, some quick housekeeping. The session is being recorded and will be listed on the EHF website afterwards. Please stay muted until Q&A, but feel free to put your questions in the chat box as we go, and Jade will be able to read these out. Some of you may have to leave at various points in time, and that's okay. A little bit about Jade Tang-Taylor, our moderator today. Thank you so much for stepping in. Jade is a designer, dreamer, and doer, a purpose-driven, design-led, creative entrepreneur. She is currently the Innovation Director at academyEX. Over to you, Jade. Kia ora, Erika. Nau mai, haere mai. So wonderful to have you all here. As mentioned, ko Jade Tang-Taylor tōku ingoa, he tangata tiriti ahau, and I am zooming in from the... well, it's quite sunny out there today, from Waitākere, West Auckland, where the local mana whenua are Te Kawerau ā Maki. So if you know the local iwi where you're zooming in from, definitely feel free to pop it in the chat. And for our attendees as well, where are you zooming in from? Feel free to rename yourself, or again, feel free to pop it in the chat, because I'm conscious that EHF is a wonderful, global community as well. I'm going to start with a few intros, starting with Chris. Chris is a serial entrepreneur.
Christopher, sorry, is a serial entrepreneur. It's like we're on a first-name basis already. Serial entrepreneur, neuroscientist, social entrepreneur, author, inventor, main-stage TED speaker, founder and CEO of Brainful. Over to you, Chris, to share a little bit more about you. Yeah, thanks so much. It's lovely to get a chance to participate in this; thanks so much for putting it on. So yeah, I've been involved in neuroscience for a long time. That's where my primary training, my PhD, was. And I've been watching artificial intelligence evolve. Many years ago now, I published some work with Geoff Hinton, and at that time no one had any idea that this field was going to evolve into what it has become. And that's part of the nature of it. I was also part of the founding faculty at Singularity University, and the principle there was that the world was coming towards this Singularity, when potentially the technologies that we create, and especially AI, might come to supersede our own intelligence. And we are now, I think, coming very rapidly to a point where things that for most of that trajectory seemed very far-fetched and far away, maybe like science fiction, are now actually coming to pass. So today I'm working on technologies from the neuroscience field to try to have a positive impact on humanity. I'm incredibly excited about what is now going on with AI technologies, what it is providing and what I think it will provide, but also deeply concerned about the potential risks. So I'm excited to get a chance to speak with everyone about both of those. Ngā mihi, kia ora. Thank you, Christopher. Next up, we have Julia. Julia founded the Red Circle, a not-for-profit teaching psychological first aid. She mentors AI startups as entrepreneur in residence at Singularity University and invests in both neuroscience advances for healthier brains and infrastructure innovations for climate resilience. Sounds amazing. Over to you, Julia, to share.
Kia ora, everyone. Funnily enough, I also studied neuroscience, just like Christopher, and I switched into working on machine brains instead when it became obvious that we will be able to build machine brains that are, at some point, more capable and more complex than human organic ones. I'm currently in Montreal at a machine learning institute called Mila, where I'm leading a project on AI governance. That means: what can we do, and what should we do, to make sure that AI goes well. And you've probably heard that there are a lot of concerns about it. I tend to say we don't want to just worry about human extinction; we want instead to think about how to create the best possible worlds and a great, livable future for everyone, and how AI can contribute to that. So that's what I'm currently working on. Amazing, Julia. I heard a podcast the other day from an economic forum around how AI will either compete with us or augment us; it's up to us. So fascinating to get into that discussion. But last but not least, we have Lane. Lane is focused on both the theory and the practice of governance of crypto-economic networks, using novel blockchain-based incentive mechanisms to build more robust, more sustainable, more just, participatory human institutions in the Spacemesh and Ethereum communities and beyond. Amazing, Lane. Over to you. Your initial thoughts. Thank you, Jade. Kia ora koutou, kia ora e te whānau. I'm deeply honoured and delighted to be a part of this conversation, and it's great to see some familiar faces here. I do not have a background in neuroscience, nor do I have a Singularity University affiliation yet, so I guess I'm bringing a little bit of diversity to the conversation in those respects. I am a software developer and an entrepreneur, and I've been building things, broadly speaking, in software for my whole career. I have a background in traditional finance going way back, and I had a healthcare technology start-up for a while.
And for the past six or seven years, I've been very focused on exactly the things that Jade mentioned in her very kind intro: cryptocurrency and crypto-economic systems, but with a focus on the human side of the story, so to speak. So a bit less of the financialisation; I feel like that's behind me in my career. I'm excited about humans and human systems in general, and how we can use cutting-edge technologies, including blockchain and cryptocurrency, but of course also AI, to build better, more just, more participatory and inclusive human systems. I guess my AI claim to fame is that I studied with Stuart Russell at UC Berkeley; I'm showing my age now, 21, 22 years ago. He's widely regarded as one of the godfathers of modern AI. I actually didn't follow the space very closely until quite recently; I guess that's probably true for many of us. But yeah, I've been dabbling a bit and being part of study groups of similarly minded, entrepreneurial folks who are building things in the space. And I guess the other qualification I have is that I'm a really voracious consumer of science fiction, and I've been thinking a lot about some of these things for many, many years as a result. I'm particularly focused at this moment on the intersection of these crypto-economic, cryptographic systems and AI. But yeah, excited to be here. Thank you. Thanks, Lane. Yeah, sci-fi and also the Black Mirror Netflix series seem to be how a lot of people talk about AI and its potential. Anyway, thank you. You're speaking my language around a sustainable, just, and participatory kind of better future, a better world. So thank you all for joining us. And yeah, we're going to be speaking around ethics, security and privacy in the AI world. Let's jump straight into the questions. There are several questions that we'll go through.
But if you have any questions, feel free to pop them in the chat. It'll be an informal kind of conversation, and again, for the speakers, whoever unmutes first will be the first to share. OK, first up. Oh, and yeah, there have been a lot of conversations, in my other role as Innovation Director within the academic institute as well, around ethics and AI. We've heard about the Centre for Humane Technology and The AI Dilemma, which was released earlier this year, in March. There's Google's Responsible AI Practices as well; that was, I think, a couple of months ago. And then the UN code of conduct that was just released a few weeks ago as well. So it's definitely developing at an incredibly rapid pace. But as of the 11th of July, 2023, can you elaborate on the ethical concerns with AI generating human-like text or deepfakes, and how those concerns are or are not being managed? It's a very specific one to start with. Who'd like to go first? Julia. I can go first. I found it funny when you said the 11th of July, because here in Canada it's still the 10th of July; New Zealand is in the future, and we're still a little bit behind, one day behind. The problem of deepfakes has been coming up for a while. Actually, back in 2016, I wrote an article about ethical issues in AI, and deepfakes were part of that, way before there was even a term for it. And to put this into historical perspective, humanity has gone through phases where the media capabilities that we've had have exceeded the media literacy that we needed to deal with them appropriately. For example, when mass printing was first invented, there was a fake book, so to speak, called the Protocols of the Elders of Zion, which distributed a malicious fabrication about a Jewish conspiracy and caused a lot of harm in Europe.
And then later, when radio was first widely distributed, there was an audio play based on The War of the Worlds that caused a mass panic, because people thought that we were really being invaded by aliens. These are examples of how, when there's a new technology for media, at first we don't have the mental concepts to tell what's real and what isn't, and we just take everything as real, because that's what we're used to. And over time, there will be a few snafus, so to speak, where we have to get wise to it and understand: oh, things can be fake, and this is how we can tell, and this is what you look for, and this is how not to believe it and how to verify it. And nowadays, if somebody tells us something on the radio and makes it sound as if something's happening, we know that it might be fictional. I think the same will be true for the kinds of media we can create with generative AI; it is just a question of our media literacy catching up. And the issue is that our capabilities have been accelerating so much that our literacy needs to accelerate as well to keep up with that pace. Yeah, I love that. The media literacy has to catch up. I'm going to pass it over to either Chris or Lane to respond, and then I'll share something funny I watched over the weekend. Julia, thank you. I think that's really helpful, and a really illustrative historical example. One thing I've noticed in my line of work is that a lot of people focused on tech these days have, let's say, a very short memory for history. So I think it's actually enormously valuable to look back, and sometimes we have to look back a few hundred years to find those examples that help us chart our course and figure out where we're going. I just wanted to add one thing to what you said, which is that in some respects this phenomenon is new, right?
So the specific case of generative AI and some of the specific technologies that are emerging right now, generative text, generative images and videos, and deepfakes in those directions: this is of course a pretty new thing, or rather they've achieved a critical point of fidelity recently and will of course continue to get better. However, we've been in a cultural moment now for at least a few years already due to slightly older technology, things like social media, and I think you could trace it even further back to the rise of blogging, which is already more than 20 years old. And if you go back a generation, you'll find, and hopefully I'm making your case for you here, that we have these cultural moments when the technologies get ahead of our ability to understand them. A generation ago, this was not the case, right? What we had was a very limited set of channels and a limited set of gatekeepers. Where I grew up in the United States, these were things like NBC, ABC, CBS, the New York Times, CNN, Fox, these sorts of things. And there were so few of them that they were kind of anointed gatekeepers, so to speak. And broadly speaking, we were all kind of on the same page about the state of the world. Obviously that's broken down quite a bit and led to this crisis of trust and crisis of coherence over the last 10 to 15 years, again due to the rise of things like social media.
So it's been a very painful transition, where I think we're all very aware of some of the challenges we've faced, whether that's election interference and things like this, or knowing whether the person or group of people you're speaking to over whatever social channel are fellow human beings and neighbours, as they claim to be, or whether they're actually a troll farm somewhere halfway across the world. But the good news, at least from where I sit, is that this transition, while not complete, is very well underway, and I believe that as a society we are really beginning to develop an immune-system response to this. I think there's a lot less naivete than there was before, among folks who have grown up exposed to these things over the past 10 years, let's say. We used to joke and say, oh, if it's on TV it must be true; if it's on the internet it must be true. Well, no one really believes that anymore. So I think we have a healthy skepticism, and this is what we need to develop the competence that Julia referred to. Yeah, you're making a good point, Lane. And it's kind of funny to me how, a generation ago when I was in school, I was told you can't trust anything on the internet: I cannot cite Wikipedia as a source because it could all be made up. And now the same people who used to warn us about not citing Wikipedia are the ones who have become less critical and share memes on Facebook that may be harmful. I actually have a lot of trust in the younger generations to build up that immune system. And I think there's a kind of rapid evolution of that immune system from people just engaging with memes and information on the internet in a very fast-paced way.
It's a new form of collective intelligence, and it's exciting to see it emerge. And the solutions that emerge, which previously, as I said, were things like big radio stations, TV stations, publishers, et cetera, will look very different this time around. But this is one respect in which I'm pretty optimistic. I think that we're intelligent enough as a community and as a society to find a way forward together and figure out how to come to agreement and consensus on the ground truth. Consensus is something we think about every day in the blockchain and cryptocurrency space, but I'll leave it there for now. Yeah, I feel like that's a whole other webinar, Lane. Christopher, any thoughts around what Julia and Lane have mentioned, or just deepfakes in general? Yeah, well, I'm going to speak a little bit more about the broader question of ethics and the ethical considerations arising out of AI. And I want to start by sharing a resource that comes from one of our own at EHF, Tristan Harris, who, along with Aza Raskin, has put together one of the best pieces looking at this issue really thoughtfully. I'm going to paste that into the chat, and anyone who's interested in this topic who hasn't yet seen this video, I really strongly recommend it. It's very thoughtful, very striking, and I'll give you the punchline, which it essentially leads with, and which suggests the potential magnitude of the ethical considerations we're talking about here. There are the things we can see clearly, like deepfakes, and there are also the longer-term potential ethical implications that are much harder to predict. But the way it's put in Tristan's presentation is that approximately 50% of AI researchers surveyed recently said they believe there's a 10% or greater chance that this technology leads to the extinction of our species.
And so that's obviously a pretty shocking concept: that half the people who are the engineers of this technology believe it could wipe us all out. That's getting more widely known, but Tristan brings up the point: imagine if you were getting on an airliner and 50% of the engineers who had devised, created, and maintained that airliner said there's a 10% chance you're not getting off. What would you do to prepare yourself? So I think it's very important, as a society but also individually, that we really take seriously that this technology has risks far beyond what we can comprehend. Now, I'm not an AI naysayer. I also think that, equivalently, there's potential that is far beyond what we can imagine. From my perspective, we're currently living in a sort of Garden of Eden moment, where you can do things today in an afternoon that would have taken a team of engineers several years to do. And I've literally been doing that myself; that's not an exaggeration. For anyone who hasn't yet engaged with this technology, I strongly encourage you to try it. It's incredibly fun, and I can talk more about some of the use cases and some of the things that I've been doing with it. But I think we're in a time right now where it's very difficult to see the future, because on the one hand we've got this incredible power to do things that are truly superhuman, even super-superhuman in the sense that these systems are creating their own capabilities beyond what we can imagine they will create, and on the other hand we have the possibility of truly incredible and very concerning risks. So I think this leads to an ethical situation where the issues at hand are literally beyond our ability to conceive of them, and we're trying to plan for that. We can talk more as the session goes on about things that people are trying to do about that. But as I look at the situation, that's how I see it.
I see it, over the course of the coming decade and beyond, as being really an existential opportunity or an existential threat for our species. A really powerful statement: existential opportunity or existential threat to our very existence, right, Christopher? There was a quick question in the chat, and I believe it was directed at you because you used to work with Geoffrey Hinton: why did he leave Google? And I know there are mixed conversations around what this means. Yeah, so I obviously can't speak for Geoff, but he's obviously an incredibly brilliant person. His group was, for anybody who's not aware, really among the pioneers of the models that have led to the recent explosion in AI capability, and he's been a pioneer and a leader in the field for a long time. After having gone to Google to bring this technology to them, he did recently leave, and there's been a lot of talk about that. My best understanding of what happened is simply to take Geoff at his word: that he feels no particular ill will towards Google and does believe there are smart people there trying to do the best they can, but that he wanted to be in a position where he could be independent of them, because they have a lot of constraints as a large organisation, in thinking about, working on, working with others on, and being a vocal member of the community on the issues of what may need to happen with AI. I think he's in an incredibly strong position from a technical standpoint, and from a leadership standpoint he is in many ways one of, if not the, principal founding figures, as well as being somebody who is very deeply thoughtful. So I hope that in making that choice he has set himself up to be a real leader for all of us in trying to figure out how to address these very difficult and very important issues.
Amazing, thanks Christopher, and I completely echo your sentiments: if you haven't had a chance to watch The AI Dilemma, the YouTube link was popped in the chat, by co-founders Tristan Harris and Aza Raskin. I also listen quite a bit to their podcast, Your Undivided Attention, and there are lots of wonderful critical discussions in there as well if you're a big podcast listener. Okay, second question. So we talked a little bit about ethics, and I know that will be woven throughout the discussion today, but can we talk a little bit about privacy and the privacy concerns raised by the development of generative AI, and how they are being addressed? Just conscious that GPT-3.5 came out in November, GPT-4 came out earlier this year, and I think Code Interpreter came out a few days ago. Any thoughts or musings around privacy concerns, on any of these topics? Thanks, Jade. With your permission, I'll kick off, as a software developer, with just a thought; I want to share a tiny bit of a framework here that might help guide this conversation.
Well, first of all, privacy is, I believe, a basic human right. It's incredibly important, and I'm really grateful that we're having this conversation; I think there's a lot to say here. The point I wanted to make to kick off is that, in my mind, the biggest risk around generative AI the way it works today is that the actual computational work is not being done locally. I believe this is called inference in the context of AI. So first of all, the model training is of course done on very large, powerful clusters of computers by the big companies behind them, like OpenAI and so forth. Then this model is produced, and when you, as an everyday user, as a consumer, are interacting with it, whether it's ChatGPT, or Stable Diffusion, or one of these image-generating models, what you're doing is basically interacting with a hosted application, closed source, running on someone else's computer. And I think there are some inherent risks there. Now, this is not new in a sense; a lot of what we do, a lot of the software applications we use, have very similar infrastructure, in that when you chat with your friend, pick your favourite chat app, with almost no exceptions it's all flowing through a third-party server that's owned and operated by a company. But I do think what's new here is that the nature of the data we're sharing with these systems is particularly sensitive. I've noticed the way people communicate with these AI tools, talking now about things like ChatGPT and text-based GPTs: they in some sense almost develop a relationship with them. You can see this when people start saying things like please and thank you. It's a wonderful thing, and there are psychological reasons this happens, but I think it's easy to lose sight of the fact that what you're talking to is not only not a person, it's basically a corporate bureaucracy at the end of the day. This is a very long-winded
way of saying the point I wanted to make, which is: I think one very powerful thing we can do to promote privacy here is to develop models that we can run locally. In other words, when you talk to, say, Siri on your iPhone, in theory anyway, the voice messages are not leaving the phone, and they shouldn't be leaving the phone; that processing should all be done locally. And it's even better if we have the code, if it's open source, so we can verify this for ourselves. There is work being done here; I'm aware of a few projects working on models that fit on a small device like a phone, or a desktop or laptop computer, so that the processing can be done locally. As a software developer, this is something I would really like to see more resources invested in, and more awareness as well. I say please and thank you; I've always been taught to use my manners, so why not? I'd like to pick up on what I think is one of the deeper things you said there, Lane, which is very important to me as well: that privacy is a fundamental human right. And I think the reason it's a fundamental human right is that without privacy there's no freedom. The problem is that, in my estimation, looking back, what we will come to say is that privacy was a human right, and that in the 20th century it was a nice right to have. I think we are already crossing a threshold where the very notion of privacy is ceasing to exist, and in the face of generative AI there's a very real risk that privacy, full stop, essentially ceases to exist, or ceases to exist in a form recognizable compared to what we think of it as today. The immediate example, which you were mentioning and which I do every day, is that I'm sharing my deepest thoughts with AI on a minute-by-minute basis. My creative process is very much in a shared brain between this nervous system and whichever of the AI systems I'm working with; we share
intellectual capacity at this point. So I think there's a deep, immediate privacy concern there. But I think there's a longer-term, and really quite near-term, privacy concern that AI just simply breaks the internet for privacy. All of the algorithms that exist today to try to maintain our privacy, and you might be able to speak very cogently to this issue, are brittle, and it is doubtful that they will hold up in the face of what AI is able to do, in terms of faking essentially everything, doing it at scale, with very valuable information about every individual at stake. It's difficult for me to imagine, from what I know, and again, Lane, I'd love to hear your comments, how we don't end up in a scenario where the very mechanisms that today allow privacy simply cease to work. Because if an AI can fake literally everything about you, in a way that is convincing to a human or a machine, it's hard for me to see what the basis for privacy continues to be. So I think that's a broader-term concern. And just to close that out with what's happening already: the 50 times a day, or sometimes a lot more than that, that I get asked to authenticate to do anything, the strings of challenges are getting longer and longer. It's like, will you do this test, and then this test, and tell me which of these are fire hydrants, and then click on your other phone. And I just feel like, well, AIs can do each and every one of those things. So I think that's a deeper level of privacy risk that we're soon going to be facing, and really already are. Those authentication procedures are going to continue to get longer, Chris, as the AI gets smarter. It's actually a clever way of measuring it, maybe: the longer it takes to prove you're a human, the smarter the AI has become. Let me defer to Julia before I respond to Chris. I just want to add something very practical to the privacy concerns, and that is that the conversations you have with ChatGPT, for instance, are
not private. Not only may they be analysed within the company, but, looking at the terms of OpenAI, they might also be shared with third parties. And funnily enough, if you ask ChatGPT what its privacy practices are, it will hallucinate an answer for you that is way better than what reality actually is; then you look into the actual terms, and that's not at all the case. So in a way, ChatGPT has been trained to give a more ethical answer than the actual terms, which say that your conversation data can be shared with third parties. I was not aware of that, Julia; thanks for sharing. That's kind of terrifying. Just to respond very briefly to what Christopher said: privacy is a big part of my work in blockchain and cryptocurrency, and I think the most immediate place where we see privacy disappearing today is money and transactions. You don't have to look very far: in most of the places where most of us live and spend most of our time, cash is very rapidly dying. It's convenient, but convenience comes at a cost, and I think people are not always aware what that cost is until it's too late. I think the ability to transact privately, to have private financial transactions, should also be a fundamental human right. And we have tools we're developing; zero-knowledge proofs are the most obvious of these. There is very powerful, cutting-edge cryptography that I think can move the needle here, but always, always, always there's a tradeoff. There are many tradeoffs, but the most obvious one is between usability and convenience on the one hand and privacy on the other, and people most of the time will choose convenience at the cost of privacy. So the depressing part here is that if you speak to young people today, just to reiterate what Chris said, many of them, I'm talking now about kind of Gen Z types, feel that they live in a world where they will never have any meaningful privacy, and
they're kind of used to spraying their lives, so to speak, across the internet, across various social platforms. And it's very, how do I put it, wise of them to be aware that this is going on, and they're kind of right in a sense, but it's also sad, because privacy is a human right that they may not be valuing as highly as they should. And again, privacy is, like they say, like oxygen: you don't notice it or appreciate it until it's gone, and then when it's gone, it's too late. I'm pushing the metaphor a bit too far, but you get what I'm saying. So I think there's no easy answer here. I think education is a big piece of this; I think history has some really powerful lessons to teach us about why privacy matters so much and why we need to fight so hard to protect it. So making people aware of some of the new risk profiles here in our brave AI future, and more dialogue on this topic, I think, is absolutely essential at this stage. Yeah, I can definitely hear you, especially around the Gen Zs or Gen Alphas that are coming through, who literally live their lives online and don't ever really question their privacy, or the data, or what they're sharing, whether it be good or bad. I'm really glad that social media wasn't really around that much when I was a teen, so we'll just leave that there.
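To give Lane's mention of zero-knowledge proofs a concrete flavour: a real zero-knowledge proof is far beyond a short snippet, but one simpler building block used in privacy-preserving protocols, a hash commitment, can be sketched in a few lines of Python. This is a minimal illustration only, not an actual ZKP, and all names in it are made up for the example.

```python
import hashlib
import secrets

def commit(value: str) -> tuple[str, str]:
    """Return (commitment, nonce). The commitment alone reveals nothing
    about `value`, but the holder of value + nonce can later open it."""
    nonce = secrets.token_hex(16)  # random blinding factor
    digest = hashlib.sha256((nonce + value).encode()).hexdigest()
    return digest, nonce

def verify(commitment: str, value: str, nonce: str) -> bool:
    """Check that (value, nonce) matches a previously published commitment."""
    return hashlib.sha256((nonce + value).encode()).hexdigest() == commitment

commitment, nonce = commit("I hold 42 coins")
print(verify(commitment, "I hold 42 coins", nonce))   # True
print(verify(commitment, "I hold 99 coins", nonce))   # False
```

The commitment can be published without leaking the value; only someone holding the original value and nonce can open it later. This "hide now, prove later" pattern is the intuition that real zero-knowledge systems build on in far more elaborate ways.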
We'll just move into the third question, and I'm conscious that you're zooming in from Puerto Rico, New York City, Canada, and I think Portland as well, but this one's more specifically focused around Aotearoa New Zealand and the government here: what do you think they could do to promote AI innovation while ensuring privacy, security, and ethical use of AI technologies? Any thoughts there? Because, from what I can see, the government across Aotearoa doesn't seem to have a very robust plan at this moment in time, or maybe it does. I'm going to share a thought that's perhaps a bit post-apocalyptic, but I'm going to share it nonetheless. AI is advancing at a rate that is exponentially fast, and it's simply going to be impossible for regulatory institutions to have a meaningful impact on it. For anyone who hasn't really thought very much about the way exponential growth works, I'll just illustrate it quickly: if you have a quantity that's doubling every year, which I'd say is a reasonable estimate for what's happening with AI, then you can go for nine years, and in the tenth year you get a greater change than in the entire previous nine combined, and in the eleventh year you get more change than in the prior ten. I think that's the kind of technology we're talking about here. That's hard for humans to intuit, but it's what we're seeing happening now. So it's advancing at a rate that I don't think regulatory institutions are, frankly, going to have any chance of controlling. I think they should absolutely be trying, and it's an incredibly important topic, but I have very little faith that regulatory structures are going to be able, by their nature, to have very much impact. So what do I think New Zealand can do? Well, frankly, I think New Zealand potentially has an opportunity, during this garden phase, to create an ecosystem where people are developing and creating at an incredibly fast rate. I think that anybody in New Zealand or
anywhere else in the world who hasn't yet realized it should know: it's now possible for a single person or a small team to do what used to take dozens or even hundreds of people, and so the potential for growth is incredible. On the other side, and this is the post-apocalyptic part, I think New Zealand should do something it has succeeded at where most of the rest of the world failed, which is considering how to close its borders. If AI really does become a threat to humanity, which 50% of the experts say has a 10% chance of happening, I would personally love to see New Zealand do what it succeeded at with COVID, which is creating a place where people can remain safe even in the face of this threat. Now, exactly what that looks like, I don't think anyone knows, but I think New Zealand is very uniquely situated to do that, and it has succeeded, probably beyond anyone else, against a very recent threat to humanity, so I think it should be considered again.

Thanks for sharing. Sorry, Julia, I interrupted you before. Go ahead. Thank you.
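The doubling intuition described above can be sketched numerically. This is a hypothetical illustration of the arithmetic the speaker walks through, not a claim about any measured growth rate of AI:

```python
# Hypothetical illustration: if a capability doubles every year,
# the change in any single year exceeds the combined change of
# all previous years.

def capability(year: int) -> int:
    # Capability level after `year` years, starting at 1 and doubling annually.
    return 2 ** year

change_in_year_10 = capability(10) - capability(9)      # 1024 - 512 = 512
change_in_years_1_to_9 = capability(9) - capability(0)  # 512 - 1 = 511

print(change_in_year_10, change_in_years_1_to_9)  # 512 511
assert change_in_year_10 > change_in_years_1_to_9
```

The same holds for year eleven versus the prior ten, and for any doubling process, which is why year-on-year extrapolation from the recent past keeps underestimating the next step.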
Unlike with COVID, the threat from artificial intelligence is not going to be stopped by closing the borders to human migration, and there are other potential initiatives that could be taken, which is not a topic for this conversation. What I want to get into is New Zealand's unique role in the world, because on one hand New Zealand has been at the forefront of progress throughout history. The Māori were the first Indigenous people to get a treaty with the British instead of being outright colonised, and later New Zealand became the first full democracy in the world, where every citizen had the right to vote, which had never been seen before anywhere on the planet. So the question is: what did New Zealand do right to get there? What did the Māori do right to get the treaty, and how did that influence the culture in New Zealand to then lead to a full democracy? I think there's a lot we can learn from New Zealand, from the culture in New Zealand, and especially from Māori culture, that may give us insight into how this new development can be steered. Because in a way, AI is colonising the entire cultural output of the world; all of that is getting chewed up by these language models and generative models. How do we, collectively as humanity, get a treaty with AI, so to speak? I think there's a lot we can learn. On the other hand, the risk New Zealand uniquely faces is that it may isolate itself, because it is so small and so remote. For example, Google is not selling hardware products like its phones in New Zealand; you cannot buy a Google phone anywhere in New Zealand, and the apparent reason is that they don't want to deal with specific regulation that is in place here. Because it's a market of about 5 million people, companies can just say: we're not going to deal with that, we're going to serve all the other billions of people in the world, and
those 5 million we're not going to worry about; we're not going to go into that market if it's too heavily regulated. So the balancing act New Zealand is going to have to manage is: how do we become a beacon for sensible AI regulation and a representation of good AI governance, while not isolating ourselves from the rest of the world, and while fully participating in the benefits that AI has to offer?

I love that reframe, Julia: what do New Zealand, New Zealand culture, Indigenous cultures, and Māori have to offer? What are the deeper insights here into how we might approach the AI revolution? Sorry, over to you, Lane.

No, Julia, that's a really beautiful thought. I think there's really something there, and I think we should all put our heads together and find some time to answer the important questions you're asking. If EHF is nothing else, it should be a platform for that sort of dialogue. Wow, yeah, Chris, very dystopian; I guess you warned us. We can't talk about AI without exploring the dystopian future, and I think it's important to touch upon it. I just want to add one thing here, which is that what scares me about AI is not the AI itself. People talk about the quote-unquote alignment issue. I know we didn't really have time to go into that today, but that's the classic AI dystopian scenario, where you try to teach AIs to align themselves to a particular set of commandments or values, things like protecting human life, and basically there's no simple algorithm for that. Some of you may be familiar with the paperclip maximizer story from Eliezer Yudkowsky, which I think captures this very well. Alignment is a very hard problem: how do we make sure that AIs are aligned with human desires, human values, and human actions? But that's actually not the thing that scares me the most. What scares me the most is the alignment between the big companies behind the AIs and my values, or our values
collectively, because we clearly do have an alignment issue with these big companies, and we've seen this, as we talked about earlier, in social media and some other things that are going on. I think we need to call out bad behavior when we see it, and I think we're seeing it with organizations like OpenAI right now, because they're very hypocritical. They were founded on the principle that what they were building was going to be open, whether that means open source, open for comment, or open for contributions; it's none of those things. It's just another big for-profit Silicon Valley company backed by the usual folks that are not aligned with us. What I think the New Zealand government, and New Zealand as a society, could do here that would be really powerful and really unique is to create an open AI movement, a true open AI movement, and unite researchers and universities, unite companies. Imagine what an AI-focused Silicon Valley would look like. Imagine what a moon mission would look like; as Americans, we love to talk about the period of time leading up to the first visit to the moon in the '60s. Build a platform to have this dialogue; build a way to share code, a way to share ideas, a way to share preprints and academic papers in a very open fashion. Because what's happening right now is not open; those models are not being trained in an open fashion, and I think that is actually the biggest existential risk we face. So I think New Zealand could take a very bold move in that direction, and that would be a very powerful thing that I, and I think many of us, would want to be a part of.

That's a very good point about the paperclip maximiser, because it's a science fiction story of sorts, but in a way it's already happened, in exactly the way Lane said: not by an AI being anthropomorphised, but by the corporation behind it building algorithms that are unaligned with the interests of humanity. One example of this is what happened in Myanmar. There's a
lawsuit that has been filed, for about $200 billion, against Facebook, or Meta, because the Facebook algorithm in Myanmar did exactly what it was programmed to do, which was to maximise engagement on the platform without much human moderation, because supposedly they didn't have native-language moderators for it. What it resulted in was a huge displacement of refugees from a minority in Myanmar that was facing a genocide, because the algorithm was stoking sentiment so much, because it was optimising for engagement, that it led to these aggressions and hostilities in Myanmar. It was maximising its paperclips, so to speak; it did what it was supposed to do. So we can stop talking about this as if it's science fiction, because it's already happened, and it is going to happen again. And if we think about what institutions should exist that don't currently exist: New Zealand is good, and the EHF fellowship is also good, at supporting new institutions, such as the Human Rights Measurement Initiative, for example. There could also be something like Human Rights Watch for algorithms, pointing out whenever things go wrong, like the events in Myanmar, because I believe it's not going to be the last time we face the negative consequences of unaligned algorithms.

If I may, I wanted to clarify one thing that maybe I was unclear on. I was making an analogy, obviously, with the idea that New Zealand might want to consider some version of creating itself as a safe sanctuary, if we need a safe sanctuary from a dystopian future, and I do think that's worth considering. I didn't mean to literally say that it would be the same playbook as with COVID, because it's a completely different threat, so it might not be about closing the borders to international travel, although that might actually be an important component. But there are many aspects in which New Zealand could try to make itself a safe sanctuary if one of the dystopian scenarios starts to play out.
Many folks, most notably Elon Musk, have made a big point that humanity needs a safe sanctuary in space in case we end up making it no longer safe for us to live here. There are open questions about what that would look like and when that might ever happen, but given the possibility that this could come potentially relatively soon, I think there's a real potential role for New Zealand in making itself a sanctuary, given its many features, including its geography, some of its cultural aspects, and its relatively small size; that could happen much more readily. That was the broader issue I was trying to raise; I'm sorry if I wasn't clear on that. One of the things I want to address about the alignment issue that's come up is who it is that we are meant to be afraid of in the dystopian scenarios. Again, I'd like to re-emphasise that I don't consider myself a dystopian here, but I do consider myself somebody who wants to take preventative measures where we can, even for relatively small risks. The threat I'm personally most concerned by is neither the AGI risk initially, nor the large-corporate-entities risk, but malevolent actors. What we've seen throughout history, and even before human history in other species, is that species will use the power available to them to try to dominate; that's the nature of how evolution has worked. The thing I see as the most concerning near-term existential threat, or very dramatic threat, is that one, or increasingly many, malevolent actors get access to this incredibly powerful technology and deliberately put it to use as a means of warfare and/or of trying to control others. That's something we need to think through: how we could create countermeasures, how we could create places of safety. I think that's already happening, and it's virtually certain, from my perspective, that it's going to continue to be one form of potential near-term
threat that we need to defend against.

I thank Chris for raising that point, and just to respond very briefly to it: I hear you, and this is a huge, controversial topic right now. It is at the heart of this question of whether, and to what extent, these AI tools should be open, and whether we should consider pausing them, etc. I'm a very strong proponent of openness, as should be obvious from my previous response, and what I would say in response is that we want to put these tools in the hands of as many people as possible, because most people are good people. The alternative is that it remains only in the hands of a tiny number of companies and maybe some governments, and I think that would be catastrophic for the world. It's paternalistic, and there's a lot of bad that could come from it. The toothpaste is kind of out of the tube, so to speak, on the technology and the ideas here, and I think we should teach it to as many people as possible and make those tools available; that is the best thing we can do to counter that threat.

So I'm going to take the agnostic side, but for the sake of discussion: I don't know whether we are better off open-sourcing AI, for the reasons you just shared, or not, but I think it's worth pointing out, for folks that may not be familiar with this, that there's a very strong counter-argument that that's the last thing we want to do. A good analogy comes from the biotechnology space: the last thing you would want to do with dangerous biotechnology, and at least that's not what we have done, for I think good reasons, is make it possible for anyone in their garage to create biological weapons, viruses, or other pathogens that could destroy the species. And I think there's a strong argument, and I don't know whether it's correct or not, that we actually want to slow down the ability of open source communities to get access to do anything they
want on home computers, because that's precisely what is going to dramatically expand the rate of diffusion of the technology, putting it into the hands of people over whom we have no control whatsoever. So I honestly am agnostic and don't know which side of that to take, but I think there's a compelling argument both for and against the idea that this technology should be open source.

There are definitely two sides of the story. I just want to share the thought that when you take guns off the street, which is definitely more of an issue here in New York than it is in New Zealand, you take them out of the hands of people who didn't intend to use them to commit crimes; you take them from innocent people, not from the criminals. And I think the same is true here. I think the cat is out of the bag, and if we try to pause right now, the people who are going to pause are the ethical people. So we could have a much lengthier, intense debate on this, and I think Chris is right, there are two sides of the story, but I want to hand it back to Jade.

No, this is great, I love the conversation, and I love it when the panel just flows as well. Lane, you mentioned before that you weren't afraid of AI or the machine itself. I was listening to another podcast or video where Mo Gawdat talked about how he wasn't actually afraid of AI and the machines; he was afraid of bad characters, or bad actors, using AI for actually bad things. I'm really mindful of time, we've probably got another half hour, and there are some great questions in the chat coming through from Pa. Christopher, you touched on Elon Musk before, or mentioned his name, and Pa mentioned that Elon has significant concerns, even though he was one of the original investors, or founders, of OpenAI. Do you share them? That's obviously, I feel, a very broad question.

I'd say broadly yes, in similarity to Elon's publicly professed views. I think
there's incredible potential for this class of technology, and I think it already is transforming the world. It's growing at a rate, and its adoption is faster, than arguably any technology that's ever come along before. It's changing my life on a monthly basis; it's completely revolutionising what I'm able to do, and I think it's going to do that for individuals, for organizations, for governments, for companies, and for us as a species. So I'm incredibly excited about it. And similarly, I think Elon has also expressed some very deep concerns about the potential, on many levels, for challenges, and I've talked quite a bit about that already, so maybe I'll let others share their views. But broadly, I think he has articulated, from a first-principles standpoint, that with this incredibly powerful technology comes tremendous possibility and also potential risk.

Yes, I'm hearing that a lot, but also the pitfalls as well. This has really been quite a generally dystopian conversation; I'm an optimistic, probably utopian person, so I'd like to see the glass half full. But Pa had a really confronting question here: what are we, if we are thinking about an exceedingly small number of people that escape to New Zealand while everyone else gets wiped out? Any thoughts or reflections in response to that? And I do really want to ask this final question: we've talked about ethics and privacy and regulation, but what about the digital divide? I feel like that's an equity issue; where is that in the conversation here? So, Julia?

Yeah, I can answer that. First of all, while I think that all concerns should be heard, including the potentially catastrophic ones, we need to make sure that we don't let that dominate everything that's happening in our governance. It's a trick of dictators, sometimes, that they will use fear to get what they want, and the argument goes: if we don't all stop what we're doing, just this one thing, we're all going to die. There's a little bit of that happening in AI, akin
to a Pascal's wager of sorts, or, more appropriately, a Pascal's mugging. Pascal's wager, in philosophy, says that while it may be somewhat unlikely that what the church told us about the afterlife is exactly accurate, because we're talking about infinity after death we might as well all live as if it's totally real and give the church absolute power. With AI and the catastrophic risks from AI, we see a little bit of arguments like that: oh, because the alternative could be human extinction, we need to give certain companies unlimited power, and we need to take all these very drastic measures, bomb data centres and so forth, just because the downside is so large. Yudkowsky said something a couple of years ago to the effect that if you don't give all of your money to his charity that is working on AI safety, then you're an evil person and you're risking the survival of humanity, and that has the ring of a Pascal's wager for me. I think we want to scale our concerns appropriately to the widespread impact, the likelihood, and the possibility of it happening. If we drive a car, yes, we want to make sure we're not going to die, so it's good to have seatbelts and airbags and so forth in place, but we also want to pay attention to the road, make sure that we actually get to our destination, and generally pay attention to our surroundings. In AI there are lots of things to pay attention to as well. On that question of closing the borders: I think that is a very small edge case, which it may make sense to think about as a back-shelf backup plan, but the majority of the attention needs to be spent on how we make the future livable for everyone, how we make sure that AI is fair and equitable and that it creates a good world and a good future for all of us. That's why these scenarios of leaving everyone behind, like Elon wanting to go to space, don't seem that appealing to me, because I feel loyal to the
rest of humanity, in a way. I don't want to live in a world where we just leave the majority behind.

That's actually a great segue into not leaving the majority of the world behind, the equity conversation. This became really apparent to me a few weeks ago, jumping from a call with my CEO, Frances Valentine, who's usually a tech optimist too, who's also part of the wider EHF community, but who has been raising some concerns, or caution, around a lot of these technologies, straight into a DECA call, the Digital Equity Coalition of Aotearoa, where people were discussing how to even get access to the internet, let alone their own devices, let alone any of these tools. So I feel like equity is a big gap that we haven't necessarily addressed in any of these conversations. The question is: as AI technologies become more embedded in our everyday lives, how should concerns about the digital divide be addressed? I'd really love your thoughts, or musings, or reflections on that.

Is that the digital divide between humans and AI that you're talking about?

I was interpreting it as the digital divide where people don't even have access to some of these digital devices, so how do they even engage with some of these tools, whereas there are some people that are obviously using these tools on a daily basis? Christopher, you mentioned that your life is changing monthly, and then there are people that are developing them at all hours of the night too. But you can speak to it from your perspective, around augmenting or competing with AI, up to you. Would you like to?
Unless Chris has more to add? It's good that we are talking about this; we saved the best and hardest questions for last. The first point I want to make, and I've said this a few times today, is that this technology is not going away, and it's going to hit us like a freight train. As Chris said, it is exponential, and I think the right thing to do here is not to pause or slow down; I've heard that said a couple of times today as well. But we should also recognize that it is going to increase certain inequities that are not new; obviously they've been with us for some time, and the digital divide is very real, Jade, as you said. I don't have the answers, and I don't claim to have the answers, but just a couple of thoughts. I think this is going to be a driver of enormous economic growth through various technologies, and I think it's going to be on the people who are on the receiving end of those profits and those revenues to allocate a portion of them to addressing some of these inequities. In my mind, the best tool we have is education, education, education. We need to, again, not be shy about these technologies. If you actually look at the way the education system is responding to AI and ChatGPT and things like this, it's fascinating and very enlightening. You have teachers who have banned its use in the classroom, saying the students are going to use it to write their essays, it's unethical of them, it's going to screw up our pedagogy and screw up our curriculum, etc. On the flip side of that, you have some very enlightened educators who are doing some brilliant, creative things, like encouraging the students to use these tools and teaching them how to engage with them, so that the assignment, rather than being "write a book review of such-and-such a book," becomes "ask ChatGPT to write a book review of this book, and then point out three things it got wrong in that review," or something like that: a more reflective
assignment. So I think we should really encourage its use in education, and we should have an ethical code where people who are beneficiaries of this technology contribute to the funds necessary to promote these education programs. And we talked about the government earlier; I think there is a role for the government to play here, both in promoting the openness of this technology, as we spoke about, and in promoting its use in education, and hopefully also in giving us the tools we need to address the inequities.

That's a very good point, because educators have the mandate to prepare students to be productive citizens in the real world, and now, with ChatGPT and other AI technologies being available to everyone, what do we actually want to teach? This is something that professors and teachers need to think about, because if they ban the use of ChatGPT, they may not be teaching the right use of tools for the real world. And I love that example you brought up, of asking what ChatGPT actually got wrong about a book summary, because that's what will actually build the media literacy and the technology literacy that the next generation needs to really have what it takes to live in this new world.

So, Lane, I love that you brought up the issue of education, and as you know, I'm currently working on a technology endeavour whose direct focus is trying to use AI as a tool to increase literacy, hopefully across all of humanity, or as many people as possible, and to use this technology as an augmentation of the human mind: as a way to take knowledge and essentially act as a pre-processor, to get it into the minds and brains of people more effectively than has ever been possible. You brought up the invention of the printing press, I think. In the way that the printing press made it possible for the species to gather information in a way that was unprecedented, I think this technology has
the potential to do things of that order as well, and to enable everyone, the whole species. So there's the question of whether there's a differential in access between different people, which is a very important question, and there's also the question of the rising tide that can lift all boats, and I think that's going to be happening as well, so I'm mostly personally focusing on that rising-tide aspect of it. I do think, specifically on the issue of inequity, there are again some deep concerns, and one that comes to mind is that this is going to be yet another tool, and an incredibly powerful one, that takes power differentials already in place and magnifies them. I'll give a very simple example that we're all going to be living with. Right now, if you call any publicly facing customer service department, whether it's your bank, the government, or most anyone, you end up put into a long phone tree and a whole set of processes that has very little concern for your time and is mostly about saving pennies for whatever organization is providing it. Well, I think we are increasingly going to find ourselves in a scenario where there's no human at all on the other end; there's only an AI. I think that's going to be terribly disempowering, and that's just a very small slice of the pie of the ways in which this technology is going to create and magnify the kinds of disparities we already have. I think what happened with the rise of social media was that a very powerful technology, with great potential for good in interconnecting people, came to be used, because of that power differential, in ways we've talked about a number of times: the controllers of that technology used it for their benefit instead of the benefit of all the users. And I think this technology is inevitably going to follow a similar course, where the people who have the greatest access to it are going to come to use it
to create even further power differentials with everyone else, and I do think that's very concerning. Again, as a biologist, I think evolution is in many ways inexorable, and that's not to say it's not important to try to regulate this, but there are, and I can say more about this if it comes up, some real questions about whether this is something we are simply going to have to watch play out; we can do whatever we can to try to control it to a certain extent, but it may be a process that plays out regardless.

To expand upon a couple of things that Chris highlighted: I think this is why openness is so important here. The G word has come up a few times in this conversation, and I'm talking about governance. While we may not necessarily know the path forward, or all the perfect things we should do in the right order, what we can do, and what would be very informative, is what Julia keeps reminding us of, which is looking at history, because there are some really powerful lessons there. In particular, I'm thinking now about the internet, the web being the most obvious, most recent example. We made some pretty big mistakes and got some things pretty clearly wrong there. Among them is the way that the digital spaces where we spend our time, a lot of our time, where we do some of our most important creative work, where our jobs are, where our social relationships are, and I'm speaking of course about the Facebooks and the Twitters and the TikToks of the world, are all, as we've said a couple of times today, run by centralised, distant, unaccountable companies that are anything but just and equitable and inclusive and all the wonderful adjectives we started this conversation with. And I think we now have the tools; we know how to do these things in a more open, inclusive, participatory, just way. I'm speaking now about the way that open source projects are governed, the Linuxes of the world,
and I can give you many, many other examples; this is true to some extent of things like Bitcoin and some of the blockchain projects, governed in the open, in a way that people all around the world can make their voices heard and make their values known. We kind of got this wrong with the web, and I think it's more important now than ever that we do the things we need to do to govern these things in an open, participatory way.

Lane, I think you mentioned it before, and I'm trying to flip through my notes: the power of collective intelligence and how we can move towards that, especially with technology in general, but with AI moving forward. Okay, some interesting comments and questions in there. I'm going to hand it back to Erika very shortly, but Julia, did you have anything to add to that particular conversation around equity, the digital divide, and inclusivity?

Yeah, I agree with a lot of what Lane is saying, and when we think about equity and the digital divide, I think it's important to remind ourselves that the models currently owned by OpenAI were not built with just content that OpenAI created, quite the opposite. They have been trained on the collective effort of thousands of people. In the instance of the programming copilot on GitHub, they have been trained on the code, and all the comments on the code, that are on GitHub, which people added in a good-faith effort to help other developers understand what they're doing, not necessarily to train an AI to do that or to give away that intellectual property. The same goes for the large art-generation models and the large language models: the art models have been trained on the work of human artists who put it out there to benefit other human users and viewers, not necessarily to contribute to a commercial model so that somebody else can profit off the creative work that
they've done. The same is true of language models, which have been trained on a massive amount of the collective output of humanity, obviously without notifying the creators that this has been happening, and without compensating anyone for their intellectual output. Also, a large amount of the work has not happened in Silicon Valley: it's been a very low-wage effort in Africa and in other low-wage places, to manually label and tag the input and so on. So when we think, oh, this model is now owned by a corporation, it's just because we didn't have any laws or regulation in place that would say: hang on a second, where is all that data coming from? Is it actually fair to just take that without giving back? This is a conversation that needs to happen globally, within governments but also societally, so we're aware that we're not just being bestowed this new technology from Silicon Valley; it's actually something built on the collective cultural output of humanity.

Yeah, super briefly: I love that you made this point, Julia. I just wanted to point out that what you're saying is 100% true, but there's a potentially bright side here if we get things right. If we build these things in a humane and just way, there's a potential for all the people who contributed to that data pool to receive a dividend for it. Now, I'm not suggesting this is a trivial problem; there are a lot of operational, logistical questions here, but there are some amazing people out there. Jaron Lanier comes to mind, one of the most brilliant thinkers I'm aware of; he's written a couple of books on this topic, on "data as labour" and how to fairly compensate people for their contributions. You could actually provide a form of UBI to people on this basis, if we could figure out how to do it right.

Yeah, that's a good point, and it's also
bringing up the question of diversity of thought in the cultural output, because right now the internet has a bias. The simplest experiment you can do: if you just google "family", most of the families that come up in stock pictures will be white and Western, or if you google just a picture of a hand, it's very likely to be a white hand. And if we look at the user base of Wikipedia and Reddit and other places that have contributed a lot to the training of these models, they are predominantly male and Western. So we also want to make sure, with all the training that's provided, that there is some way of balancing what the input and the output are actually going to be. And that is a tough question to answer, because a lot of the people who are creating the content have done so because they are in a position where they have the free time to put it on the internet, or even have the access to the internet to do so. So how do we provide a balanced perspective that represents all the interests of humanity? I think a solution to this does not come from engineering alone. Right now a lot of engineers think about AI alignment and how to get AI right, and so naturally they will think about engineering approaches, how to test transparency and robustness, and that is all very important. But these particular issues that we've just been talking about are issues that the humanities have thought about throughout history. There are lots of instances where minorities have struggled for progress and achieved it; there's been a civil rights movement and a human rights movement. We have Anne-Marie Brooks in here, who is a leader of an NGO on human rights, and I think these problems will need to be solved by bringing in people from the humanities, from human rights and civil rights, to understand how this progress can actually happen. And in a way, and this may sound like the most
alarmist that I'm going to get on this panel, but it may be our last chance to get things right, because once systems are in place and firmly established, they may keep going down those routes for the next hundreds of years, and it will be very difficult to reverse course on things that have been put in place. So something I care about this year is to contribute what I can to make sure that we get things right from the start, because this is all very path-dependent, and to make sure that this transition from the pre-AI world to the post-AI world goes well.

Sorry, I wanted to share a couple of things. First of all, I was excited to see that Anne-Marie is joining us, someone who is really a great world expert on measuring the human rights impact of a whole variety of things, and I think someone could scarcely be better positioned than her and her organisation to try to systematically look at the human rights impact of these kinds of technologies. That seems very exciting, and Anne-Marie, if you happen to have anything you'd like to share about what you see as the major issues here, I would very much love to hear your thoughts, but I don't want to put you on the spot, so I'll let you jump in if you would like.

I want to come at this from a biologist's standpoint and look at the question we've raised from a historical perspective. As a biologist, I look at this largely as an evolutionary question, and the nature of evolution is typically that the participants are not very well in control of the process as it unfolds. There is, at some level, a bit of a destiny that takes place, and I see that as very likely, or at least possibly, what's going to happen here. So there are two essential dimensions that I think about as I look at how this is going to play out.
One is whether a greater and greater intelligence, potentially greater than our own, will come to be cooperative with us or competitive. I think there's very likely simply an answer to that question that we don't yet know, and it's going to come out to be one or the other. Does a greater wisdom and intelligence than ours find, and continue to find, that in the beneficent sense it wants to be cooperative? One hopes that that is the outcome: that it realizes that the long arc of justice is in fact upheld, that it wants to be cooperative with us, and that that's just what happens along the way, whatever it takes to get there. Alternatively, does it turn out that the answer is simply that it's competitive, and that, as in many other evolutionary processes, when one species is dominant, as we are over the poultry and the cockroaches around our own homes, it has very little interest in the needs of the others?

The other dimension that I think is very interesting to think about is whether it will come to be an interactive scenario or an independent scenario. What I mean by that is that it could be that AIs develop and realize that they're essentially independent of us, and they continue to outgrow us and go and do whatever it is they're going to do. People think about the Drake equation and whether this is going to lead to the colonisation of the universe and many things we can't yet foresee, but essentially it just does it and we get to watch from the sidelines. Or it could be a more interactive scenario. So you can see this as sort of a two-by-two scenario. In the interactive, positive sense, we can imagine that the AIs do incredible things for and with us, that we do these things together and that we transform the world in ways we can scarcely begin to imagine. We can also imagine that it turns out, just evolutionarily, that when a greater power comes into play it becomes competitive, and that the AIs decide that they
do interact with us in a negative sense, that software does quite literally eat the world, and that, like it or not, we are coming towards a scenario where the AIs see us as a threat or want our resources. Given that they are going to have greater intelligence than us, if that's the fundamental quadrant we are heading towards, it's difficult for me to see how any approach could stop that. OpenAI has recently been talking about how we could create non-superintelligent AIs to try to regulate the more superintelligent AIs, and it's difficult for me to really conceptualize, and those are a lot of smart people, how you have a less intelligent technology trying to control and regulate a more intelligent AI. So it's unclear to me which of these four quadrants we end up in, but I think there's a very real possibility, and it's something that I think about, that, like evolution that's been going on for millions of years, this is simply going to play out. That's not to say we can't and shouldn't absolutely be trying to influence it, but there are largely, in the longer term, foreordained outcomes here that are just going to happen, based on the logic of what happens when a greater intelligence is developed.

Powerful and poignant statements that you're making there, Christopher. Mindful of time, we've got five minutes to go, and I'd love to hand it back over to our panellists for any final thoughts from today's discussion, or on the wider ethics, security, and privacy around AI, and what we could potentially do as a positive action or moment to end on. We've discussed ethics, privacy, security, regulation, equity, inclusion, and education, education, education; Lane, you mentioned that a few times as well. I do feel like things are changing on a monthly, weekly, daily, hourly basis with this technology, and I would love to hear your final comments or thoughts around the potential positive opportunities and what we should be
focusing our energy and efforts on.

Yeah, I can close my thoughts by saying that New Zealand can indeed take a leadership role in ethical AI, for many reasons: for its unique geographical location, for its historical role at the forefront of progress and democracy, for being small and nimble enough to actually act fast, where the communication lines in government may be shorter than in other places, and for already being an example of innovative organisations in human rights, such as exemplified by Anne-Marie's organisation. So since this was a New Zealand-organised panel, I want to stress that I am actually hopeful about the role of New Zealand in global ethical AI.

That's a really wonderful thought, thank you for sharing, and I couldn't agree more. I think I'll go in a slightly different direction with my closing thought, showing my bias here as a software developer, a builder, and an entrepreneur. I really encourage everyone here, everyone listening to this, to roll up your sleeves and try playing with these tools if you haven't already; I know Christopher said this earlier as well. I'm not sure I would have said that a year ago, in fact I definitely would not have said it a year ago, and I'm not sure I would have said it four months ago. But they've really reached an inflection point quite recently, where, as Christopher said, they're having a positive impact on my life day to day as well, and in certain moments, I won't say everywhere and in everything, they make me feel like I have superpowers, and I would like for everyone to have that experience. They're pretty accessible. I threw OpenAI under the bus earlier, but I'll thank them for making the 3.5 version of ChatGPT free and available to the public, so that's out there and you can play with it. Google Bard is also available, and I think Microsoft has their version of this too. The image stuff as well: Stable Diffusion is open source, kudos to them, and I'm pretty sure there are free
versions of a number of the generative art tools as well. Again, just to reiterate, you really can't appreciate these tools, both their strengths and the risks and the potential, all of which we spoke about at great length today, unless you play with them. I've had a few conversations with folks who had not had the opportunity to do that, and they changed their tune once they had had a chance to play with them. There are good resources out there, and there are learning groups; I'm part of a couple of these, so I encourage folks to reach out if they need some pointers or resources. So thanks very much, and thanks again for creating this conversation.

I'd like to follow on from that by suggesting that anyone who hasn't should try to figure out what these tools can do for you personally. They're simply life-changing, and they have been for everyone I know who has given them a meaningful try. I'll give you a couple of examples of how you can get started, and some things that I think haven't gotten as much press coverage but that people can try in their day-to-day life. On the issue of access and the digital divide, I think these are quite available to almost anyone who can even just walk to a library and sit down at a terminal for a few minutes. One of the more breathtaking things I've discovered is that a conversation with a generative AI is, in many ways, a conversation with the collective wisdom of our entire species, and that's just unthinkable in prior times. So I've asked this thing again and again and again to be my mentor, and it's an incredible mentor. People have used it in very narrow ways, you know, can you solve this very, very narrow problem for me, and a lot of the articles in the newspaper are like, well, it didn't succeed with whatever that thing was. My view at this point, my default, is that if it hasn't done what you asked of it, it's on you that you haven't figured out how to ask it correctly yet. But on the
mentorship point, I really encourage everyone to try asking the AI to literally be your mentor. Take a challenging mentorship question that you would like to bring to the greatest mentor you can imagine in your field, name that person, and ask the AI what that person would counsel you on the particular question that you have. I've done that a number of times, with a whole bunch of different questions and a whole bunch of different inspirational figures, and gotten incredibly practical advice that frankly I think was often better; and I think I've been fortunate enough to have had some truly world-class mentors. The AI was often able, I think, to succeed even beyond what the world's greatest human mentors are capable of, and that is now available to everyone. I think it's incredible what it has to offer all of us if we choose to use it.

I would like to temper that, because while I agree it's very interesting to get all these text generations, we do need to remind ourselves that we are dealing with language models, which are a very fancy autocomplete. If we talk with the models in a certain way, where we want them to give a certain answer, we can typically get that answer; we can typically get them to tell us to do A if we want to go in that direction, or B if we want to go in that direction. Right now we would not assume that it has the same sort of sentience and internal compass as a human teacher or a human mentor, who might actually sometimes surprise us with the answer and disagree with us, because it is just trying to emulate what it's like to talk to these figures. That can still be very useful, but we do want to remind ourselves that that's what it is.

Since you brought that up, I'm just going to say my reactions, over and over again, have been in the opposite direction. They've been, over and over again, that when I ask it to do things, the way it has surprised me is that it has dramatically exceeded my expectations relative to what I think
literally any living human could do, and that's been the experience that I've had.

Maybe you're incredibly good at prompts, Christopher. That actually reminds me, I was speaking at a panel discussion at the University of Auckland Engineering School alongside a ChatGPT-powered AI bot, the first time I've spoken alongside a bot, and one of the questions that came from the audience was: what are the skills that you need to thrive in the future of work with these rapidly changing technologies? I believe it probably fell on deaf ears, because there was a whole bunch of engineers in the room, but I feel the skills that we need are deeply human skills, so learning how to love, grieve, and empathise, and really, at this moment, I don't believe the bots can teach you how to do that. Okay, thank you so much, thank you so much to our panellists for your insightful perspectives and comments, to our attendees for your wonderful questions, and to Erika as well for being our wonderful tech coach. So I'm going to pass it back over to you to close us with a karakia.

Apologies, and thank you, Jade, and thank you to our panellists Julia, Lane, and Christopher for taking the time today. I definitely have a few points in my notebook to ponder and action in the space of generative AI. Just a reminder that you will find the recording on the EHF website shortly, and there will be other EHF Live sessions with our amazing fellows coming up, so please keep an eye out for those. I will close with a karakia. Thank you so much, thank you, bye.