So good afternoon, everybody. Welcome to the 28th Military Writers Symposium. My name is Dr. Travis Morris, and I have the privilege and honor of being the executive director for the symposium and also the director of the Peace and War Center. It's great to see everybody. This is our third session for the day, our opening keynote. We had two speakers earlier. One was the Israeli ambassador, who spoke to us about the intersection of AI and security. The other was John Abele, who's a Vermont native and a co-founder of Boston Scientific. And if you don't know what Boston Scientific is, I suggest that you Google it. It's one of the most powerful medical companies shaping the way in which we look at medicine at the intersection of artificial intelligence. So we've been doing the Military Writers Symposium for 28 years. It's the only event of its kind, and one of the unique features and charges of the Military Writers Symposium is to focus on themes and subjects that are challenges for the 21st century. Last year, we focused on the Arctic. Before that, we focused on the weaponization of water. And this year, we focus on the intersection, or the nexus, if you will, between artificial intelligence and robots. Officially, the theme is Robots Rising: Arming Artificial Intelligence. So for you students, or faculty or staff, who think that maybe this is a subject outside your area of study, something you may not be interested in, I would counter that by saying: if you own a cell phone, you are in this age. Artificial intelligence is something that impacts your life daily. And for the students in the room, particularly those who are going to commission, those who will be involved in policy, in politics, maybe in the security realm, you will be leading us on this new front. That's a very serious charge.
Some of the people we have on campus today are some of the world's leading experts, and we are thrilled to have them. They're here on campus to help us think through what artificial intelligence means now and for the future. So I would encourage you, maybe you're here because of your class, and that's a good thing, but I would encourage you to attend some of the other sessions, because what we're talking about is not just for computer scientists, or those doing digital forensics, or those who can write Python or code. It's for all of you. This is the world in which you live. So we certainly appreciate your being here today, and we've got two days of programming. Sometimes when we talk about artificial intelligence or something cyber-related, we think it's a domestic conversation. In other words, that all conversations about AI or cyber are happening in English, and that's not the case. This morning we had one of our students read a short passage in Ukrainian about what AI means and what's going on in his country right now. But something else that's incredibly important is that this conversation is happening right now across the globe: in Swahili, in Korean, in Arabic, in Mandarin, in multiple different languages. We just happen to be having that conversation today in English. One of the things we do at Norwich is we always want our students involved. So something we're doing this year is having students open our sessions with a reflection piece. Today's reflection piece is from Makayla, who's going to talk about her experience in Kyrgyzstan. And I just want to encourage you: during your academic career, it doesn't matter what your major is, it doesn't matter what you're going to be doing, even if it's something regarding mathematics or just writing code.
We live in a world that's globalized, and the likelihood of your working with people across multiple cultures in real time during your lifetime is only going to increase. So I would challenge you: if there are two new subjects for you today, one is AI and robotics, and the other is intercultural or cultural intelligence. I would suggest that you embrace both while you're here at Norwich University. Study abroad, or even better, study abroad and work with cyber overseas, and you get two things at one time. So Makayla, why don't you come on up, and the floor's yours.

Thank you, Professor Morris. Good afternoon, everyone. Again, my name is Makayla Hart. I'm a senior here in the Corps. I'm an Olmsted fellow via the Peace and War Center here at Norwich and the Olmsted Foundation, and I'm reflecting on the trip to Kyrgyzstan that we took this past May. There are no words, pictures, or stories that can truly describe the experience as a whole. An unfamiliar country and my first exposure to an overseas environment, Kyrgyzstan is a place that I hold dear to my heart, in a way unexplainable unless experienced firsthand. Being in this sort of environment, with this sort of study and mission, was something unique to our team and to the expansion of Norwich University's presence in Central Asia. We think we understand and we think we know what to expect, but time and time again, the more complex each situation got, the less we seemed to know. It was tremendously eye-opening from all angles. Being the operations officer for the trip gave me the perspective to understand the trip inside and out, to a point where I did not expect to learn or experience as much as I did. The United States is essentially a bubble, and most of the college-age demographic misses the many wonders of the world, especially places like Kyrgyzstan.
The opportunity to understand their culture through the lens of peace and conflict is one of the most important and overlooked aspects of leadership. I worked with our team to ensure success before, during, and after our trip. I needed to share an understanding of every aspect in order to communicate our priorities and keep our trip on track. Working closely with our team leader allowed me to develop my own leadership style through a multi-dimensional process. At the same time, I became an expert and the primary point of contact for all things related to the United States Embassy in country. I made connections with the Kyrgyzstan Embassy in Washington, D.C., as well as many others, including Norwich alumni in the U.S. Embassy in Bishkek, with the help of our administrative officer, Ryan Keranston. Our trip was encouraged and safely supported by both entities. This truly tested my communication skills and my professionalism, two skills that are crucial and necessary to master before becoming an Army officer. Leaders need to recognize this in order to lead in foreign environments. I developed an appreciation for logistical tasks and for interacting with higher-level leaders. I'm filled with gratitude for this experience, and I value these polished skills as I progress in my career. My advice to my audience: please consider traveling the world. See the many wonders and attractions our earth has to offer, but consider traveling to an uncommon country to experience its history and culture as the main attraction. Open your mind to new ideas, and experience on your own that sometimes the American way isn't the most accepted method of problem solving. Allow yourself to be influenced by the morals and ethics of different cultures, and use the world around you as an experiential guide to refine your leadership and develop your own values. Thank you.

Thanks, Makayla.
So as far as being exposed to new ideas and thoughts, you don't have to go to Kyrgyzstan or Germany. You can just sit here in Mack Hall, and if artificial intelligence and robotics is something new to you, we'd suggest that you embrace new ideas here, and think about how that applies to your current situation, but also to the future. The Military Writers Symposium is not done alone; it's done with a team. And by a team, I mean we have been privileged over 28 years to invite some of the world's leading scholars and thinkers to campus to help us advance our thinking on topics, and this year, robotics and artificial intelligence. So for our esteemed guests here in the audience, we thank you, and you're going to hear that numerous times today, and I don't apologize for that. Our gratitude is one of our mandates, and we just want to let you know we really appreciate your being here. Another way in which we approach the topic is that we want to look at any subject from a different lens, and the way we do that is we invite experts who have different backgrounds in different areas. Today we are very privileged to have our distinguished guest, Martha Wells, with us. Our moderator to my left, Dr. Brett Cox, professor of English, will introduce her and moderate the session. I would also like to encourage you students: he will open up the floor for questions and answers, so please take the opportunity to do so. And this is one time when it's okay to use your cell phone: if you're not familiar with who Martha is, I suggest that you Google her. This is a great opportunity to engage her. So, Dr. Cox, over to you.

Okay, thank you, Professor Morris, and welcome, everybody. First things first: can you hear me? Is this audible? And also, first among equals, Martha, can you hear me? Yes, I can. Okay. So, well, welcome, everybody.
I am very happy to be talking today with one of the most acclaimed of contemporary science fiction writers. Martha Wells has been publishing prolifically since her first novel in 1993, and she has garnered particular acclaim for her series of novellas, short novels, and a full-length novel collectively known as the Murderbot Diaries. She started the series with the novella All Systems Red in 2017 and has since published four more novellas in the series, most recently Fugitive Telemetry in 2021, and in 2020 she published the novel in the series, Network Effect, as well as a couple of accompanying short stories along the way. This series as a whole, and I think I've got this count right for the various stories and the novel within the series, has won two Nebula Awards, given by the Science Fiction and Fantasy Writers Association; three Locus Awards, which are given annually by Locus magazine based on their readers' poll (Locus is a Publishers Weekly-type magazine for the science fiction field); and four, count them, four Hugo Awards, given annually by the World Science Fiction Convention, most recently last year in the category of Best Series. And I also want to note that where I think I first noticed Martha's work was actually at the 2017 World Fantasy Convention, where she was the toastmaster and gave the speech at the awards banquet called Unburied the Future, which was a powerful commentary on how the achievements of women, both within science fiction and fantasy literature and in the world at large, are all too often quite routinely suppressed, and it remains one of the more powerful talks I've ever heard at a conference. So thank you for that, Martha. Thank you for all of the stories, and welcome to Norwich, however virtually.

Thank you for having me. I really wish I could be there in person.
I apologize for not being able to make it, but I got a stomach virus on Sunday, and I'm recovering now, but it would not have been a good idea to try to come.

Well, we're happy to have you here, and just for your own orientation, you are on a really big screen in front of an enthusiastic audience in our main auditorium on campus. I'd like to begin with kind of a basic question, because in terms of the theme of our conference, artificial intelligence, of course we want to talk about your work with the Murderbot Diaries, but for the audience, could you give just a brief overview of what these stories are about and what the setup is for the series of tales?

Well, they're far-future science fiction about a being who is partially machine, partially machine intelligence, and partially organic. Excuse me, sorry. This being is called a construct. It has no name; it calls itself Murderbot. It's been enslaved, basically, as a security unit in this region, which is completely controlled by different corporations. The company it's owned by is basically a bond company, an insurance company that guarantees equipment and security and so forth for groups that are going to do surveys on alien planets. These constructs are supposed to have a governor module that controls everything they do and that will punish them or kill them if they disobey an order. Murderbot has managed to deactivate its governor module without anyone knowing. It had a choice at that point: in the popular culture of this world, it's believed that any construct that was able to go rogue, basically, and get out of control would immediately kill all the humans it could find. Murderbot had a choice to do that, or it could access the free entertainment channels that were available on their version of the internet, and it chose to do that instead.
It's subject to a lot of anxiety and depression because of its situation, and it ends up staying in its job, basically concealing the fact that it's free, sort of stuck there, not really being able to make a decision about what to do with its life. It ends up assigned to a group of scientists that it actually begins to like, and in the first story, All Systems Red, it faces the choice of revealing that it's free in order to save their lives, or basically letting them die, and it's making that choice that kind of propels it into being able to accept the freedom that it has and take over its own life.

So they are wonderfully entertaining stories. I believe they're available over at our university bookstore; I do commend them for your attention. I had a couple more questions specific to your writing of the Murderbot series, and then I wanted to move on to a couple of topics connecting this to our conference theme. I guess maybe this first question already starts that. Now, as I said at the beginning, you published prolifically before publishing the first of the Murderbot series. What drew you to telling this particular story? I mean, what drew you to speculating about the future of constructed beings, of artificial intelligence, as opposed to any other science fiction topic?

Well, before this I wrote mostly fantasy, but when I got this idea, it was so obviously a science fiction idea. It was so obviously the idea of basically an enslaved person who is treated as a tool, who ends up with the choice, the moral choice, between saving people that it's actually started to feel some emotion for, or doing nothing. That really had to be a science fiction story, and AI was the best way to tell it. I had also recently read Ann Leckie's Ancillary Justice trilogy, which I highly recommend.
There's also Autonomous by Annalee Newitz. These are books that really get into the idea of not only what AI might be like, but the human perception of AI, and the difference between what a real AI might want, what its agenda might be, and what it might think of itself, versus what a human assumes it would want. There were also a lot of novels coming out around that time that were really putting human assumptions into AI: the assumption that every AI would immediately want vengeance for being held captive. Actually, a couple of authors, Ann Leckie among them, have referred to that as basically a slave narrative, where the being that is being enslaved has to be demonized and shown to be violent and uncontrollable so that, excuse me again, so that its captors are justified in treating it badly and repressing it, or killing it when it tries to escape. And there's also the idea that an AI being would want to be human. Ann Leckie, I think, particularly deals with that in Ancillary Justice, where the AI in question starts out as a starship with multiple perspectives and all these multiple different bodies, which it considers peripherals, and then is squished down into basically one human body, and the difference in that perspective. So I kind of wanted to deal with that, to really think about what an AI might want for itself rather than assume, and that an AI would not want to be a human, that that would basically be kind of a comedown for a being that had that much control over itself and that many multiple perspectives.

Yeah, in my reading of the Murderbot series, I kept thinking of a classic science fiction story by Isaac Asimov, well known for his robot stories, called "The Bicentennial Man," which is exactly what you're describing: this idea of the robot who wants nothing more than to be human.
Or kind of the Data model from Star Trek: The Next Generation. And one of the many things I enjoy about the Murderbot stories is the degree to which Murderbot's like, oh, I don't think so, I'm not really sure about this. And it kind of leads me to another question here, getting to the idea of our attitudes toward AI; you talked about this sort of monstrous representation. One of my favorite lines from the stories occurs in the novella Rogue Protocol, where Murderbot is commenting on a plot element of a show that he's watching, and I do want to get back to Murderbot's media consumption, which is just tremendous. He knows that a plot element of one of the shows he's watching is unrealistic, wouldn't really happen that way, but then he says, quote, there's the right kind of unrealistic and the wrong kind of unrealistic. And it sounds like you're offering this model of the aggressive, revenge-driven, or potentially out-of-control, killing-all-the-humans AI as the wrong kind of unrealistic, and that's so ingrained in our consciousness. Could you speak a little more about that, about what attitudes you think we should or should not be bringing to the very concept of AI?

Yeah, I think that's a big part of it. It's really thinking about the unrealistic that transports you to places with your imagination, that's liberating, versus the unrealistic that's telling you lies, and the idea of demonizing a machine intelligence is definitely a lie. The thing I was thinking of, or one of the inspirations for Murderbot in this section in particular, is a very old movie, well, not very old. It's called WarGames. It's about a supercomputer that a young kid is able to accidentally contact through the network, and he thinks he's playing games with another kid, but it's actually basically a computer that's designed to wage nuclear war. I think it came out during the early 80s, and that was a big theme back then.
And this movie, I think, was in itself an answer to something like Colossus: The Forbin Project, which was part of the genre of "humans create a supercomputer, which then takes us over, because why shouldn't it?" There's never really a good motivation for that; the computer usually comes up with something, but it's pretty thin. It reminds me a lot of the recent episodes of the Star Trek series Lower Decks, where they basically have a little computer jail for all the sentient computers that become power-mad and try to take over human civilizations, and there are just hundreds of them in there.

Yeah. I'm sorry, please go ahead.

I was just going to say Murderbot's very much a reaction to that. But in WarGames, it was such a different movie for the time period, because the computer makes a choice at the end. It decides that war doesn't make any sense, because in a nuclear war there's no winner, and so it basically refuses; it makes a choice not to do it. And Murderbot was kind of based a little bit on that, and also a little bit on one of the documentaries on the making of The Lord of the Rings. They used algorithms to place all the wide shots of armies clashing on the plains and so forth; they directed all the little figures with algorithms, because to have individuals try to animate all that would be impossible. But at one point they had to make the algorithm more aggressive, because in the early images, all the little figures would run toward each other and then all run away, because they had been programmed with too much of a sense of survival. Which kind of gets you thinking: why would an AI want to fight for us? If it's a sentient AI, why would it make that choice? So, yeah.

Yeah, I mean, it's amazing the degree to which we project our own fears onto this canvas.
Now, I have to get back to this: one of the many distinctive elements of the Murderbot stories is your protagonist's obsession with media, and the degree to which Murderbot is more comfortable watching dramatized stories about humans than, in many ways, interacting with actual humans. Another favorite moment of mine is when he's trying to calm himself down at one point, and, I just made a terrible mistake and I need to own up to this: I have reflexively referred to Murderbot as "he," and the author takes a great deal of care to make sure that the character of Murderbot is not gendered at all, and maybe we'll have time to come back to that. But Murderbot goes through a list of new shows that Murderbot could watch, and instead winds up rewatching all two hundred and something episodes of one of his favorite shows that he's already seen, again, there I did it again, that Murderbot has already seen 26 times, and I just find that so familiar. So what led you to that aspect of the character and the story, the media obsession?

Thinking a lot about how media can teach us how to interact with other people. There had been specific cases I'd read about of kids who were on the autism spectrum to the point where they could not communicate, and who actually learned to communicate with their parents through Disney movies. That actually provided a start, a way to learn how to contextualize their emotions by what the characters were doing and saying and feeling, and that gave them a starting point for building their whole ability to communicate, and later they ended up being able to go to college and live their lives, you know.
And so, thinking about that, and just thinking how that was really helpful to me when I was growing up, being able to learn about the world and be comforted by it, and also kind of learn how to understand what you're feeling and really be given context, and a lot of different things like that. I also think it was something of a reaction to, I guess particularly in the 90s and early 2000s, this idea, and I'm sure it's probably still around, that if you're a writer you must hate TV and movies, which I always think is really strange, because without writing, TV and movies don't really exist. But there's always been this kind of thing, and it's completely a sort of corporate power issue of trying to diminish the idea that writers are ultimately responsible for the content of what shows up on the screen. Even though it's a collaboration among a lot of people working very hard, the writing is basically at the core of what's going on. So I think when novelists buy into this concept that book writing is somehow this sacred, separate thing, that TV writing is not really the same thing at all, you're kind of buying into and helping that corporate idea that diminishes the power of writers in visual media.

Yeah, I think so, and more so all the time, in the sense that I have many writer friends, and I know you do too, who are increasingly active in writing for other media. Well, since you mentioned writers and writing, I wanted to ask you a little bit about the background that you bring to your work.
There is this stereotype, and I don't know if it applies so much anymore, that writers come out of English programs and science fiction writers come out of, well, science programs, but you have a degree in anthropology and workplace experience as a programmer. In fact, in one interview I located, you said that in writing these stories you were not drawing so much on specific research into AI as on your own practical experience as a programmer. So maybe you could talk a little bit about how your specific academic and workplace experience informed your approach to dealing with these issues in the stories.

Yes, I started out as a system operator for two Wang mainframes, which I don't think are used anywhere anymore, but they were big mainframes designed for business uses, and I also later worked on a PC network that was attached to them. I started as a backup operator and became a system operator, and then I did programming in COBOL, which tells you how long ago that was, and built databases. So you'll notice a lot of Murderbot's ability to figure things out. In Fugitive Telemetry, which is particularly, well, a lot of the stories deal with mysteries, but it's particularly a whodunit mystery story, the solution Murderbot proposes at one point is to use a database to figure out who the murderer is. That was kind of my model for figuring out how it might think, because that was basically my introduction to computers: trying to work on the logic of this, and also trying to stop people from putting stupid things in the database and breaking it. In my experience, you can't create a form that someone can't mess up, basically, no matter what you do. And of course it's something we don't say out loud enough: you can have any kind of background and be a writer.
Murderbot is labeled a security unit, and there is an ongoing concern within the series, as you referenced earlier, about the degree to which these autonomous intelligences either are or are not under human control. There's also the pervasiveness of AI; I'm thinking of ART, which is another wonderful character in the stories. Going back to what we were talking about earlier, how much of a risk do you see in the idea of autonomous AI? Is it something that, even if we don't think it's going to rise up and slay us, we still need to keep a tight grip on, or is it just inevitable that eventually created intelligences will attain some degree of autonomy?

I don't think I know enough about AI to speculate, because I know that a lot of people think, oh, the singularity is coming, and things like that, but I think if that does happen, it's quite a long way off. I think the thing we need to worry about now is the people who are designing the algorithms. I think we've probably all heard about the Tesla cars, the self-driving cars, and I'm still trying to get my brain to work after the past few days, you know, running into people, or, you know, hitting a bicycle or something because they forgot to program it not to, and they said, well, we forgot to tell it not to hit bicycles. I'm obviously making this sound dumber than it even was, but why did you tell it to hit anything? It's a car; they're not supposed to hit things. It's just the fact that you have to worry about the human element, the humans who are programming this, and what their agenda is, and how competent they are. I think that's our biggest worry now with AI that's autonomous. There's a little film on Twitter of a delivery robot, which is probably being directed by someone.
It's really hard to tell whether that's the case, or if it was just programmed on a route, but it just pushes through a crime scene out on the street, and just things like that. So I don't think that what we need to do, in our position now, is worry too much about what a created intelligence might do in the future. I think we need to worry about the humans who are programming the algorithms, because there's a lot of indication already that that's going to be a huge problem. Also, though, on the idea of a future created intelligence, there's a story called, what is it, I think it's "Fandom for Robots." There are two stories that are really similar, and I keep getting them mixed up, but this one is by Vina Jie-Min Prasad, and it's about the first sentient robot, or the first robot that achieves sentience, who's now an outmoded model and has been kind of stuck in a museum in Tokyo. It doesn't get any interaction, really, except that it has to come out for 20 minutes per day and speak to the visitors, kind of do a spiel and answer a couple of questions, and then that's it; the rest of the time it's just sitting around this museum with nothing to do. It really comments on the idea that, yeah, you created this intelligence; what are you going to do with it now? It's not like a Neopet, where you can just pretend it's not there anymore and let it die a software death or whatever. But it's a really good story, and it actually has a happy ending, as the robot gets on the internet, becomes a fan of a TV show, meets other kids that are fans of it, and ends up writing fan fiction. But yeah, I think that is something we have to think about, but I think the concern now is more what humans are going to do with autonomous AI.
Yeah. Before we leave the issue of writers altogether, not that we ever really can, I did want to ask if you could speak to the role of the fiction writer in presenting these issues and dealing with these issues. I had a lunchtime conversation with one of our other guests about the issue of stories and narrative. So what do you see as the role of the fiction writer, as opposed to the journalist, as opposed to the science writer, as opposed to the historian? What's the role of your kind of storytelling, do you think, in considering these issues?

Well, I think it's always been to make people think about these possibilities that are in the future, and think about them now. Like Vina's story: what do you do with the first sentient robot? You've created a person, basically; do you treat it like a person, or do you treat it like a thing? And to get people thinking about those things as early as possible. It's going to be the people that thought about those things when they were reading these stories as college students or young professionals, and then go into positions where they can make decisions, who are going to have the mechanism already built in their minds to be able to deal with this. And it's just kind of like any kind of storytelling. At one event a few years ago, someone came up and asked me, we were speaking about science fiction and fantasy in general, and he said he would buy books for his cousins and nieces and nephews, and they wanted these really dark, dystopian, sort of post-apocalyptic books, and it was worrying him a little bit. And I said, that's how kids learn to deal with these ideas. Especially people my age: when we were growing up, we were kind of bombarded with the idea that we're going to die in a nuclear war, we're going to die in the Bermuda Triangle, in quicksand, whatever, we're going to die in all these ways, and the stories really help you deal with that. They're cathartic in some ways, but they're also
just kind of building these mechanisms so that you can think about these things, and think about them in different ways, and think past them. So I think it's just part of that, and I think that's why we've always had fiction in our cultures, and it's always been very powerful.

Yeah. We're about to the point where we need to open it up for questions from the audience, but I did have one more question here. In an earlier session today, the point was made that it is pretty much inevitable that our students who are going into military careers are going to, sooner or later, be dealing with, even working alongside, some level of AI. So, going back to what you were just saying about the power of stories, what would you want students who are going into the military and inevitably working alongside AI to take into that experience? What would you want them to think about going into that?

Probably two things, kind of related to what I said earlier. One is to try to know the agenda of the people who programmed it, and think about that, and whether that matches your own moral compass, particularly if it's someone involved with the Tesla corporation. And also to think about, if this did become a sentient AI, are you going to be the person who treated the robot nicely and gets to survive later, or are you going to be the person that kicked it around and doesn't? That's mostly a joke.

So I do want to make sure that we have time for questions from the audience. If you have a question for Ms. Wells, there are microphones on either side of the front here, so please come on down, and I'm happy to hear your questions. Okay, immediately two intrepid souls, one student, one faculty: perfect balance at all times. Okay, we'll start over here.

All right, hi, Ms. Wells. My name is Jordan Jean, and the question that I have today is kind of multi-part, but it goes back to what Dr.
Cox was talking about. When it comes to writing stories about AI, we're very quick to jump on the "AI is bad, we're all going to die" bandwagon, instead of the greatness that AI and robotics can bring to society. The way I see it, this could be another technological revolution, like what cell phones, the internet, and personal computers brought us. But when you're a writer, doom and misery sell better than optimism, unfortunately. I've seen films, mostly films, about how AI will destroy us all, instead of the mixture of the good and the bad it can bring. There's a 2013 movie called Her, I'm not sure if you've ever heard of it or watched it, but it's about how an AI becomes sentient and a lot of people end up forming romantic relationships with it, since just going out and conversing with people is way too hard. Why don't we see more of that, more of the mixture of the good and bad that AI and robotics can bring? And not just in one form of storytelling. Movies can help, but I mean all forms, whether it's a novel, a movie, a video game.

I think there are a lot of examples of relationships with AI, and not just the sex-robot thing from Her, in books, and there have been some recently in movies. I don't know if you've heard of Becky Chambers; she's got a series that starts out with an AI in a spaceship that's basically part of the crew, and it ends up having to be put into a human body, and then absolutely does not want to be in a human body. It really examines the relationships between the AI and the humans, and the mistakes the humans make in those relationships. Autonomous, by Annalee Newitz, gets into it in a much more serious way. It's basically about a human who falls in love with a robot, and a lot of the story is from the robot's perspective, and the robot just can't process these emotions the same way the human does, so the relationship is seen very differently from both sides. So yeah, I'm not sure. I'm trying to think of what's coming out, and there's not a lot, actually. You'd think there would be more stories now about AI being in services and things. What did you say?

I said I feel like there would be more stories really telling the good of what AI could do, especially nowadays, as a lot of companies are trying to kick-start an AI and robotics revolution because of what it can do. There's a lot AI can do, but I feel like we're quick to hop on the bandwagon of what we don't want it to do. We already know what we don't want from AI, like wiping out humanity, so why don't we keep that in mind while also taking a more positive view when discussing artificial intelligence and robotics?

Well, I think that's part of the challenge of writing technology in science fiction. It's really difficult, a real challenge to your imagination, to imagine what future technology is going to be like, particularly at this point in time. It was easier earlier, when technology was progressing at a much slower pace. But even then, look back at the 70s. I don't know if you're familiar with cyberpunk literature.

I definitely am.

Well, there are no cell phones in a lot of the early cyberpunk, because at the time people didn't think cell phones would ever catch on; they couldn't be secure, and writers were told this technology would never catch on, so they believed it. So there are no cell phones in those stories, when obviously nowadays we all have our little computer that we carry around in our pocket and communicate with. So in some ways it's just a failure of imagination. I also think you're right, to a certain extent, that we are bombarded by negative images. There's always the "robot dog will come and kill you" thing; you see them on Twitter, and everyone says, oh, this will kill us now. But there are good things too, and if it kills us, it's going to be a human who programmed it to kill us, so it's really the human that's to blame. So yeah, I think you're right about that.

All right, great, thank you so much. Next question.

Hi, thanks for being here, and thanks for mentioning all the fabulous women science fiction writers who are writing about these topics today. Check out everybody she mentioned; they're all great, especially Becky Chambers. That actually touches on something I wanted to ask you as well. What stands out to me about Murderbot, who I love, by the way, is that the stories are told with such a sense of humor. Humor is integral to these stories in a way that it often isn't in other AI stories I've read; we usually get that very negative, scary tone the previous questioner was talking about. So I was hoping you could talk about why you chose to approach the story with such a sense of humor, and how the humor is important to the story of Murderbot.

I think it's just the voice of the character. When I start writing something, I don't do a lot of planning in advance. There are different methods of writing, and the two diametrically opposed views are basically the planner, the outliner, the person who plans out almost every step in advance, kind of like a storyboard for a movie, and the people who just wing it. I've always been
one of the people who wing it, and usually I can't start working on anything until I get the main character, the main viewpoint character, and their voice. Before Murderbot, I didn't usually write in first person; I think I'd done one short story in first person since I first started publishing in 1993. But when I started working on this character, it was obviously a first-person story, and it was obvious to me that this character was going to have a lot of bitter humor. A lot of that comes from me; my response to trauma in my life is to have a very bitter sense of humor about it. So a lot of Murderbot's sense of humor comes from me, and to me that was integral to the character's voice.

Yeah, we are just approaching the end of our time. There was a student question over there. Sean, why don't you go ahead, and then we'll have to see how much time we have left.

Hello, good afternoon, everyone, and thank you to the speaker, and to Professor Cox for calling me out and bringing me back. My question is about the topic of AI. I think the fact that we're having this discussion in the United States reflects a certain privilege, because in countries like Afghanistan, where I come from, these questions are not even remotely part of the priorities; there are other issues going on. But at the same time, these are the kinds of decisions that will be made in America and yet will affect the world, in most cases. So how do we think about the people who will be affected but do not necessarily play a key role here? How are their perspectives and their needs being met, and how are they even part of this conversation?

Well, to a large extent, they aren't. That's really one of the problems of the world right now: we want this global connection, but we're not taking into account what it's going to do to, again, the people you described, the people who really don't have a voice in this right now. That's probably the biggest problem of our age, actually. So I wish I had a better answer for you, but you're absolutely right.

Okay, thank you so much. We have two questioners left. I think we can just get them each in, so quickly over here, and then quickly over there.

I'll be quick. I'm a big fan of the Murderbot series, and I've read science fiction all my life, so I'd say you're up against a lot of tough competition over fifty years of reading, and it's right up at the top, so thank you. And a big plug for people who haven't read it yet: they should go out and buy at least the first book. You mentioned that AI is probably not what we need to be worried about; it's the people who are designing and programming and setting it up. Could you expand on that, or on another area of concern you think our students should be aware of, a place they should be thinking outside the box with regard to AI as they go forward?

Well, just things like the algorithms. This came up with Google, where, because of the input they were feeding into the algorithm, it was starting to say racist things, because it had access to those kinds of conversations. So I think people just need to be more aware of all the pitfalls. There was actually a discussion, again on Twitter, about failures in products put forward by Google, like Google Glass. Someone listed all these different products and said the failure was built into them from the very beginning, and the engineers knew about it, but everyone ignored it, because the push to get this fancy product out was more imperative than actually building something that was useful and good and that people are going to want
it. And I think that's part of the problem with the algorithms: they're rushed, and there's not adequate thought put into what is actually going into the engine that drives them. For example, the Midjourney-style AIs: what they do is scrape art off the internet and put it together into different shapes, which is kind of a cool game for people, basically. But then people try to use them in place of human-created art and say, well, this is my original AI-created art. And it's not original; it's pieces of other people's copyrighted art put together. There's a lot of discussion, and real anger, among artists right now who make their living selling their art online, that these things they've worked so hard on are just going in as fodder to this algorithm without their permission. So there are so many different considerations, and I don't think we've even come to the end of the number of ways that we as humans can royally screw this up in this beginning era of all this.

Thank you. Yeah, thanks. And our last question.

Well, I was curious. You talked about using your experience as a programmer to frame Murderbot's mindset. I'm curious how you even begin to approach the topic of an AI, an intelligence different from ours, when there are so many different human intelligences and ways that humans think. How do you start getting into that, like neurodivergence? Where does that end and where does AI start? I know it's kind of a thorny and difficult question.

Well, I am neurodivergent, so I basically started with how I think. Here's how I found out I was neurodivergent: I was born in 1964, and when I grew up in the 70s, they very seldom diagnosed people with anything, ADHD or anything like that, and they certainly didn't diagnose women. If you were female, you were acting up; you didn't have a condition that you needed help with, you were just acting up. So I didn't know I was neurodivergent until I started writing and realized, through talking about the characters I created, how differently my mind worked from other people's. Murderbot is kind of the culmination of that, of a lot of the things I found out about the way my own brain works. And it's been continually astonishing to me the way people talk about, well, this is the way an AI would think. And it's like, is it really? It's as big a surprise to me as it was to anybody else, actually.

And we are at the end of our time. Martha, thank you so much. This was absolutely fascinating. Let's hope the world will right itself, and next time we'll do this in person. Thank you again.

Thank you, thank you very much. And again, I wish I could be there in person, and I'm so sad I had to miss it, but thank you, thank you all.