OK, we can get started now. Good evening, good afternoon, good morning. Welcome to this panel discussion on AI 2041; the subtitle is Ten Visions for Our Future. My name is Xiaoning Lu. I'm a Reader in Modern Chinese Culture and Language at SOAS University of London. I'm delighted to welcome our panelists today, and I'm going to introduce them one by one. Our first guest is Chen Qiufan, also known as Stanley Chan. Stanley is an award-winning Chinese speculative fiction writer, translator, creative producer, and curator. He is honorary president of the Chinese Science Fiction Writers Association and serves on the XPRIZE Foundation's Science Fiction Advisory Council. Stanley has written many works of Chinese science fiction, including Waste Tide (Huang Chao in Chinese). Today we are going to talk about his latest work of science fiction, the result of his collaborative project with the AI expert and former president of Google China, Kai-Fu Lee. Our second speaker is an early-career researcher, Mia Chen Ma. She is a PhD candidate in the Department of East Asian Languages and Cultures here at SOAS University of London. She also co-directs the London Science Fiction Research Community. Her PhD project, funded by the Universities' China Committee in London, investigates how science fiction functions as a way of thinking about other fields, such as ecology, urbanism, and politics, in the contemporary Chinese context. Our third panelist is Dr. Virginia L. Conn. She is a lecturer in the humanities at Stevens Institute of Technology in the United States. Her research sits at the intersection of comparative literature, science fiction studies, and science and technology studies. Virginia is also a managing editor of the SFRA Review. For those of you who are not familiar with this journal, its full title is the Science Fiction Research Association Review; it is an open-access journal.
Last but not least, we have Dr. Paola Iovene with us. She is an associate professor of modern Chinese literature in the Department of East Asian Languages and Civilizations at the University of Chicago in the United States. She is the author of Tales of Futures Past: Anticipation and the Ends of Literature in Contemporary China, published by Stanford University Press in 2014. And this year her co-edited volume, Sound Alignments: Popular Music in Asia's Cold Wars, just came out from Duke University Press. Dr. Iovene's current research interests include 1970s-1980s Chinese cinema, realism and inequality, and contemporary speculative fiction.

Before we start, I'd like to say a few words about the genesis of this panel discussion. When AI 2041 came out, the book aroused much controversy, because this was the first time a sci-fi writer had collaborated with an AI expert and businessman: Mr. Kai-Fu Lee is currently the CEO of Sinovation Ventures, and he formerly worked at Google, and before that at Microsoft, SGI, and Apple. If you look at the promotional materials for this book, you'll often encounter lines such as: this is a groundbreaking blend of imaginative storytelling and scientific forecasting; it is a combination of scientific explanation and science fiction stories; and so forth. Virginia Conn recently wrote a critique of this book, entitled "The Tyranny of Neutrality in AI 2041," which was published in the Los Angeles Review of Books on the 30th of October 2021. We therefore felt it was necessary to bring the science fiction writer, an early-career researcher, and the academics together, and we would like to offer this forum as a platform for them to have a dialogue.

Now a few words about our format today. I'm going to kick off our discussion by asking each panelist a question. Afterwards, they will have a free discussion among themselves.
For the last 30 or 40 minutes, I will open the floor to our audience. Please feel free to type your questions in the Q&A box, and please introduce yourself so that we know where the questions are coming from. Without further ado, let me start by inviting Stanley to introduce this new book. What is this book about? Why did you agree to do this collaborative project? The floor is yours.

Thank you, Professor Lu, and thanks to Mia and SOAS for having me here and for this precious opportunity. And thanks also to Virginia and Paola, because I read through the article very carefully, and I kind of agree to all this disagreement. I think this is one of the most precious opportunities to open up and have a pretty much inclusive discussion and conversation about the book. Let's go back to two years ago, before the pandemic. As you might know, Dr. Kai-Fu Lee and I crossed paths back in the day when we both worked for Google. He came up with the idea of collaborating on a book, using science fiction together with analysis to portray a positive imagination of the future of AI and its impact on society and individuals across different domains. First of all, I have to admit I hesitated for a little bit, because no one would love to work with their ex-boss on anything, I think. But after the conversation, well, as you can all see, Dr. Kai-Fu Lee is a very successful businessman, investor, AI scientist, and opinion leader internationally. But I have to say a lot of his ideas were pretty basic and straightforward on how technology might influence people, whether individually or collectively. So I thought maybe I could help, because as a science fiction writer, what I do is create futuristic storytelling, building up plausible scenarios embedded with the actions and reactions of the characters.
Then the readers are able to perceive the outcome, the complex interactions between humans and technology, and culture and society and history, et cetera. So I started this kind of painful journey of almost two years, because we ran this project in a highly controlled way: we had to finish everything strictly according to the schedule, because we were going to launch it in 2021. I made up the name AI 2041 because the two letters "AI" look pretty similar to "41"; if you look at the font design of the cover art, that's basically the gameplay. So that meant we had to finish ten stories, and meanwhile the translation and editing and all the tech analysis written by Kai-Fu Lee. It was a pretty tight schedule. I have to say that for the first half year we struggled a lot, because both of us had very strong opinions about what we insisted on. He kind of insisted that we had to describe the future in a very positive way, but I said there was no way we could come up with compelling and dramatic storytelling that way. So afterward, we decided to set all ten stories against different cultural backgrounds, because that's where we can see how this technology adapts to different contexts, which is what I think is highly likely to happen in the future. People from different societies might have totally different opinions and attitudes toward AI as a technology. That meant a lot of study and research work for me, across all kinds of countries, geopolitics, histories, cultures, mythologies, you name it. So I think we were kind of like dancers, like swing dancing, back and forth: I push the limit forward a little bit, and Dr. Kai-Fu Lee pushes back a little bit toward what he wants. I think we got to a kind of balance, but still compromising, I have to admit.
But I'm very happy with it, because from my point of view there are several ways to take on this book. First, it's not only the science fiction itself or the tech analysis; you can read it from different perspectives. One is to read only the stories. Another is to read all the knowledge-base essays. And third, you can read the divergence between the two, and see how two different forces struggle against each other. I think that's what makes the book even more interesting, because you can see the different purposes of the two authors: we take different approaches toward the future, but we come together to agree on all this kind of disagreement. So there's something that makes me think this book is actually about going beyond binaries. It's about how people, how individuals in the future, need to think and act beyond the binary of fear or favor between human and machine, et cetera. I think this is something we always need to bear in mind: each one of us has our own limitations and presumptions, for sure. And of course there's a lot of simplification in the scenarios in the book, in the stories, for sure. But that's how I take it, because this is only the beginning of the conversation. Our story, our narrative, is not limited to the book itself, but is also created by all the reviews, critiques, discussions, and debates around the book. And it reveals even more value over time, because the change is happening simultaneously. I'm pretty surprised, because during this process I can see what has changed after two years in Dr. Kai-Fu Lee himself. I would say the change goes two ways. Now I think he's much more open to all these humanity issues or cultural issues: for example, that we have to shift the focus from AI as a technology to being human-centric.
And, for example, UBI is not enough for job displacement in the future if AI takes over a lot of jobs. We came to agree that people need more than salary, more than basic income: dignity, self-actualization, and meaning in life from work. So I think we made a lot of progress through this collaborative project, which makes me happy, because there are very few opportunities where you can change your boss, not to mention this very precious chance to influence a lot of policymakers and engineers and scientists and business leaders across the world. So I'm totally happy for this book to come out and receive all these responses, because I think this is also part of my mission being accomplished. I'll make my point here, and I would love to hear what Mia, Virginia, and Paola have to say about the book, and we can unfold the discussion later. Thank you.

OK, great. Thank you very much. My next question is for Mia. As we know, when scholars study Chen Qiufan's science fiction, they often invoke the concept of science fiction realism. In fact, in the introduction to AI 2041 there is a passage I can quote here, written by Stanley himself. The passage reads: to me, science fiction is fascinating because it not only generates an imaginative space for escapists to leave behind their mundane lives, play the role of superheroes, and freely explore galaxies far, far away, but it also provides a precious opportunity for them to temporarily remove themselves from everyday reality and critically reflect upon it. So science fiction realism is critical here. I wonder whether you can say a few words about this concept and how it is manifested in Stanley's early works. And can we apply the same concept to his new book?

Yeah, thank you very much, Xiaoning. And thank you, Stan, for giving us a bit more information about the collaboration behind this whole project.
This is truly an interesting project, and I definitely agree with Stan about how he tried to address a variety of cultures across continents, and how he tries, in his stories, to involve different groups of people with very diverse social backgrounds. That's the part of this whole project I'm quite impressed by. In the meantime, I'm so glad to hear him actually talking about how, through this whole project, he found a way to change an AI expert like Kai-Fu Lee, which is not an easy job, and how he tries to overcome binaries in his stories in multiple ways. I have to say I enjoyed reading the majority of the stories. As pointed out by Xiaoning, I want to talk about science fiction realism for my part, but more importantly, I want to talk about how such a collaboration works the other way around. We know Stan changed Kai-Fu Lee a bit in some ways, but such a collaboration also changes science fiction realism within the stories of AI 2041 in many ways, and particularly it leads to a possible shift from science fiction realism to capitalist realism, as reflected in some of the stories. As pointed out by Xiaoning, in the introduction to this book Stan re-emphasized the concept of science fiction realism. For those in the audience who have not read much Chinese sci-fi or Stan's previous works, science fiction realism is a concept first proposed by him in the context of Chinese science fiction. It can be said that this concept has guided his sci-fi writing, and it has been repeatedly mentioned by him on many occasions. In this introduction, he reaffirms the importance of science fiction realism. So, based on studies of his early stories like Waste Tide, and also his own statements in the introduction, I have tried to summarize the following characteristics of the style of science fiction realism that he promotes.
First, the stories are usually set in the near future and very closely related to real issues in contemporary life, such as environmental pollution. Waste Tide, for instance, talks about electronic waste pollution in a Chinese town called Guiyu, a real town in a real-life context, and also addresses issues relating to medical ethics controversies, the individual struggles of migrant workers in Chinese cities, and so on. So his work is often directly related to ongoing social, political, and cultural issues. The stories are created to reveal the many layers of contemporary Chinese society, but they also go beyond China, reflecting the relationship between China and the rest of the world. More importantly, I think Stan usually leaves his stories open-ended enough to allow further critical reflection on the topics he has depicted. This is the core of his science fiction realism, in my understanding. His stories usually conclude on a very ambiguous note, aspiring to encourage readers to ponder a variety of solutions and alternatives, as he wrote in the introduction to AI 2041, to quote, "step in, make change and actively play a role in shaping reality," end of quote. However, I think this new book, AI 2041, also presents some very different characteristics in terms of realism. Given the collaboration with another author, an influential AI expert like Kai-Fu Lee, such differences are almost predictable and understandable. But I want to talk about how these stories, on the one hand, still keep some of the key elements of science fiction realism, but on the other hand unwittingly, possibly unwittingly, reinforce what Mark Fisher defines as capitalist realism.
I find two stories from AI 2041, "The Golden Elephant" and "Twin Sparrows," particularly demonstrate how capitalist realism can transform the previous construction of sci-fi realism, and how the incorporation of the ideas of capitalist realism can significantly diminish the critical power that sci-fi realism usually possesses. The story "The Golden Elephant" revolves around how AI can be used by future insurance companies. Basically, they can adjust an insurance plan according to a family's needs, but as a trade-off, in exchange, people have to share the data of every family member with the AI insurance company. The parents can even make data decisions for their children. The story itself, particularly the first half of it, vividly describes how easily we are lured into the traps set by data exploitation. I think Stan tries really hard to address these kinds of problems arising from consumer culture: how the operation of this AI-driven, data-driven insurance company can undermine privacy, particularly data privacy, and how such a data-driven system also reinforces class hierarchy, in this story the Indian caste system. All these parts, I feel, are still very much the familiar formula of sci-fi realism. However, toward the end of the story, the plot seems to take a very dramatic turn. The ending seems to offer a very clear solution to all the issues that have been raised in the story, very unlike his previous stories: the characters actually claim they need to become better AI engineers to tackle all the downsides the story has raised. It almost feels as if there is no way people can free themselves from the constraints of a data-driven life. Similarly, another story, "Twin Sparrows," also concludes on a note of what I would call AI optimism.
The story depicts a future in which every small child can have an AI companion with a very personalized package for their individual development. I have a confession to make: when I read this story, I was quite amazed by this kind of AI companion, because I'm the mother of a four-year-old, and I thought, wow, this kind of AI companion could solve a lot of issues in our daily life. It completely transforms the conventional way of teaching and learning. Children don't need to memorize or recite lots of information they don't understand, and AI can help them connect things very easily and identify their strengths and weaknesses much more accurately, and probably much earlier, than human teachers. So basically you can see the story tries to address the issue of how children's futures can be predicted and interfered with by algorithms. Meanwhile, the story also touches upon how such a radical reform of education systems through AI risks destroying self-motivation and creativity in some ways. However, again, toward the last few pages, I feel the story takes another direction. The twins, who have experienced the emotional pain brought by such an AI-driven education system, then decide to join their AI companions together and create a new AI to start the game again, as implied by the story. So here the story concludes on the note that technology and the capitalist mode of production are the only solution to everything. Even if you want to initiate changes, you still need first to accept the system and become part of it, then possibly make some modifications, rather than initiate fundamental change. So, based on a preliminary reading of these two stories as examples, we can see how science fiction realism in AI 2041 has turned into an expression of what Mark Fisher described as capitalist realism.
What Fisher really wants to reveal through this term is a widespread sense that even though capitalism is not a perfect system, it is probably the only system that can operate. In other words, even if we admit all the problems brought by capitalism, capitalist realism still highlights the power of capital and reinforces the idea that innate human desire is only compatible with capitalism. I think under such a powerful discourse, even in the alternative worlds envisioned by sci-fi writers, the driving force that keeps everything running is still capital. With the participation of AI and technology, you can see this driving force further developed into something called techno-capitalism, as termed by Luis Suarez-Villa. This term refers to changes in capitalism associated with the emergence of new technology sectors, the power of corporations, and new forms of organization. The conceptualization of techno-capitalism underlines how the advance of science and technology in the current global context fundamentally caters to the further evolution of capitalism, and it is becoming, or has possibly already become, a pervasive atmosphere that affects areas of literary and cultural production. I guess this is why collaborations between science fiction writers, or creative writers in general, and very large and influential tech corporations can somehow cause worries, because, as we can tell from at least two stories in this book, they do demonstrate the risk of producing invisible barriers constraining thought and action. From the different treatment of the endings of the stories and of the solutions raised, as we can see from these two examples, and particularly if we put them in comparison with Stan's previous stories, we can see how his pursuit of sci-fi realism, under such a collaboration, has transformed into something like acquiescence to capitalist realism and techno-capitalism.
Sometimes what is more worrying is that even with all these negotiations, as Stan just mentioned, the author can still unwittingly become an essential part of this whole process. I feel that awareness, or the unfolding, of such a shift from sci-fi realism to capitalist realism can also help explain why these stories actually relate dystopian, dark scenarios yet are fundamentally still in agreement with techno-optimism, because capitalist realism is inherently anti-utopian. It holds the view that no matter the flaws, there is no way out; there is no alternative, and this is the only possible means of operation. I think this explains why, even though these stories address the issue of data exploitation or other dystopian scenarios, they lack the power to criticize present reality. Back to the introduction Stan wrote for this book: he explains how his stories attempt to challenge the usual dystopian AI narratives, and how he wants to create some different stories, stories that may not be entirely utopian or dystopian. However, the outcome is that the stories of AI 2041, in my understanding, appear to be more a reflection of symptoms only, while somehow avoiding challenging them or probing into the fundamental cause, probably because of this collaboration. So we can see the discrepancy here: even though the stories themselves cannot be considered entirely techno-optimistic, and they are probably not entirely techno-pessimistic either, they can be utilized to reinforce the status quo and prevent fundamental change from happening. So I guess this is my preliminary reading of this book, and I look forward to hearing the other panelists' viewpoints. Thank you.

Thank you very much, Mia, for sharing your reading with us. I've also read Virginia's reaction to the book, a very provocative piece in the Los Angeles Review of Books.
I was struck by the very first line of your review, which already presents the dialectical relationship between science fiction and technological development. So I wonder whether you can elaborate on your criticism in this review.

Great, thank you, everyone, so much for having me. Thank you to the SOAS China Institute and to Mia for setting up this panel in the first place. And thank you, Stan, for writing this in the first place and giving us an opportunity to have this discussion today. I think my role here really is to address the criticisms I originally had toward this book, which prompted the panel. I have to say, as many of us here have read Stan's work in the past and enjoyed it very much, the criticism is not at all leveled at his skill as an author or his imagination for the future. But as Mia also pointed out, a lot of his previous work has been deeply suspicious of technology; it has been very open to ambiguity and to possibilities for development. Many of the stories here in AI 2041, however, really represent more of this capitalist realism that several of the panelists have already touched on: the idea that we have to accept the system as it is, an idea that reinforces the notion that human desires and actions are only possible under the system of technological capitalism as presented in AI 2041. So maybe let me give a very, very brief overview of the project again, for those of you who haven't had an opportunity to read it yet. AI 2041, my copy here, is a collection of ten short stories written by Stan that are science fictional in aspect and take place either in or before 2041. The title is very literal, as he already pointed out in his own introduction.
Kai-Fu Lee writes that all of the technologies presented here have, he thinks, an 80 percent possibility of taking place in the next 20 years, and the science fiction stories each depict some aspect of AI, whether that's machine learning, automated driving systems, smart cities, smart homes, or the Internet of Things; AI covers quite a few individual topics. Kai-Fu Lee then wrote a follow-up explanation to each short story, explaining the state of the field, where the technology is right now, and providing his ideas about how it will continue to develop over the next 20 years. What really makes a project like this possible at all, and what I think is really at the heart of all this controversy, is the role that both of these authors play. Kai-Fu Lee, as Professor Lu already pointed out, is one of the foremost proponents of AI technology in the world today; he occupies a very prestigious place in the business world of artificial intelligence products. Stan, meanwhile, is routinely recognized in the Chinese and international press as a prophet of AI. He himself talks about how he writes scientific fiction, not necessarily science fiction. And this idea that Stan has a unique insight into the realities of the world and the hard material development it is going to undergo is really central and critically important to the framing of AI 2041 as a text that accurately presents and predicts future developments in AI. This is less speculation and more an idea that this is how things will be: this is how they are now, and this is how they will continue to develop over the next 20 years. As a result of the authority these two figures imbue the book with, I came away from the text feeling I had three major objections to it as a project as a whole. So I want to lay these out really quickly, and then I'll go back and touch on them a little bit more.
Those of you who have already read my review will be familiar with these ideas, but I'll explain them a bit. The first is that the text really frames human behavior as a problem that can be, and should be, solved with technology. The second is that it treats artificial intelligence products as if they are value-neutral, as if they are only good or bad depending on the people who use them; so again, human behavior is the problem. And the final one, which I think is my most stringent problem with the book, is that the text naturalizes the products currently on the market and forecloses other possibilities. Going back to the first issue I wanted to raise: this idea of human behavior as the problem, and artificial intelligence as the solution to the problem of human behavior. Mia already discussed this idea of technological optimism and technological solutionism. "Technological solutionism" is a term originally coined by the tech writer Evgeny Morozov to describe the idea that complex social phenomena, things like politics, education, healthcare, and relationships, can really be understood as measurable problems with definite and computable (that's the important part) solutions: that we can optimize these social phenomena if we have the correct algorithms, and that if we can turn them into data points, then we can solve for inefficiency in some capacity. This technological solutionism shifts our entire understanding of the world and redefines aspects of human behavior, like inefficiency or relationship problems, as problems with technological solutions. When we understand the world in this way, technology can really only ever be a tool without any value of its own; again, it's the behavior that's good or bad. We see this shift toward a neutral technological solutionism in quite a few of these stories.
But one of the ones that Mia brought up several times, and which also really stuck out to me, was "Twin Sparrows." For those of you who haven't read it, "Twin Sparrows" is about two twins, Silver Sparrow and Golden Sparrow, who get AI companions. One of the central issues in the story is that Silver Sparrow is autistic, and his behavior poses a problem for his brother and the people around him; the story presents an AI companion as the appropriate solution to this. That makes sense in a text that is all about how artificial intelligence can help us, but it doesn't take into account the idea that this behavior is only a problem in a system that views it as a problem. Rather than reassessing our kinship structures, rather than reassessing the way society itself responds to what it considers non-normative behavior, we can throw a technological solution at it, and then we don't have to deal with it anymore. So this promise of AI technology as a solution not only addresses the issue, now redefined as a problem in itself, but also redefines all of human behavior as a problem that can be solved through the application of technology. This leads into my next point, which is the idea that AI itself is value-neutral. This is something the authors mention explicitly in the text; it isn't something you have to infer by reading it. Kai-Fu Lee several times describes AI as an objective technology, one that only acquires ethical value through its use by humans. This is a very popular idea in technological innovation spheres, and it is simply a wrong idea. Technologies, especially AI technologies, are developed by private corporations; their IP holdings are heavily guarded, and it is often impossible to discover how a particular artificial intelligence system or algorithm was designed in the first place, what data set or corpus it draws its information from, or how it works at all.
One of the stories that has also been mentioned several times is "The Golden Elephant," which addresses this explicitly. I think even Kai-Fu Lee, in his explanation, talks about how, by being trained on this data set from the outset, the artificial intelligence insurance algorithm is already using biased data, because it can only draw on historical connections. When the predicted outcomes, or patterns that replicate human biases, are part of the original data, the algorithm itself recreates systemic and repeatable errors that perpetuate the discrimination. You could argue, as I think Lee would, that if the algorithm were somehow trained on objective data, data that didn't correspond to the real world, then the technology itself would also somehow be neutral or objective. But that, once again, reframes all problems as problems inherent to humans, not problems with the technology. So once again there's this shift toward humans being problems to be managed with the application of neutral or objective technologies. What ends up happening over the course of this book is that Stan and Lee posit that if there is a problem in society, then some form of AI will solve it. And if another problem comes up in the process, well, that's the fault of the human developers, or the users, or the data it was trained on, not the technology itself. If AI does something good, it's because AI is good; but if AI does something bad, no, it didn't, the user did, or the human data it was based on did. So AI becomes a kind of abstract object in this sense: it can only do good because it's neutral, and because it's neutral, it can't do bad; it's the data it's trained on that is bad.

Yeah, thank you so much, Virginia. I think you summarized your argument, the tyranny of neutrality, very well. All the debates surrounding AI 2041 really show that this is a literary event, not a single literary text.
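[Editor's note: the dynamic Virginia describes, in which a system trained on biased historical decisions faithfully reproduces that bias, can be illustrated with a minimal Python sketch. The data, group labels, and threshold below are invented purely for illustration; they are not drawn from the book or from any real insurance system.]

```python
from collections import defaultdict

# Hypothetical historical insurance decisions as (group, approved) pairs.
# Group "B" was approved far less often in the past, for reasons not
# recorded anywhere in the data itself.
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

def train(records):
    """'Learn' each group's historical approval rate (the simplest possible model)."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.5):
    """Approve a new applicant whenever their group's past rate clears the threshold."""
    return model[group] >= threshold

model = train(history)
# The model now denies every group-B applicant regardless of individual
# merit: it has learned the historical pattern, not an "objective" rule,
# so the bias in the data becomes the rule.
```

Nothing in the training step is malicious; the discrimination enters entirely through the historical data, which is precisely the point that the "neutral technology" framing obscures.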
So I'm going to turn over to Paola, because as literature scholars we very often pay attention to social-historical context; we look at literary institutions, editorial practices, et cetera. For this specific project, what is your view? How should we build the context of AI 2041?

Yeah, thank you so much. Thank you, Qiufan, for writing this book and for sharing your thoughts about it. Thank you to Mia for organizing this panel, and Xiaoning and Virginia, thank you so much. I've been struggling with this book for some time now. I learned about it kind of randomly: I wrote about science fiction in my book, but then my focus went elsewhere, and this summer I thought, oh, I should start rereading — quite belatedly — some contemporary science fiction. I started from Qiufan's early work and then naturally got into this more recent work, and I've been struggling with it because of all the issues we've been discussing; I think Mia and Virginia put very clearly the kinds of criticism I also have in mind. So I don't mean to repeat what they said, but one general question. Xiaoning asked me about context; well, I was thinking about authorship in particular, right? You have been talking about this collaboration, and one question literary scholars have asked for a long time is: does what we know about the authors affect our reading, or should it? We know a lot about Chen Qiufan, but I want to say a little more about Kai-Fu Lee. We have already talked about him: Dr. Kai-Fu Lee, AI expert, former president of Google China, author of another book about AI, AI Superpowers, and CEO of the company Sinovation Ventures. So I would just briefly like to ask: what exactly is Sinovation Ventures? The website tells us that it is a leading Chinese technology venture capital firm, started in 2009, with a presence in Beijing, Shanghai, Nanjing, Guangzhou, and Shenzhen.
The website goes on to say that they currently manage US$2.7 billion in assets under management — AUM, a term I was not familiar with, being totally ignorant of finance — across ten US-dollar and renminbi funds in total, invested in over 400 portfolio companies across the technology spectrum in China. Elsewhere, I've seen the company described as a full-service venture capital firm that actively invests in the Chinese technology market, particularly in the areas of healthcare technologies, artificial intelligence, robotics, automation, and digital lifestyle. We can see here that the areas in which Sinovation Ventures invests partly correspond with the topics covered in AI 2041. As I'm sure you know better than I do, venture capital is a kind of financing that investors provide to small businesses and startup companies believed to have long-term growth potential. So, somewhat oversimplifying, the goal of Sinovation Ventures is to convince you that AI is the next big thing in which to invest, and at the same time to create the conditions for the AI industry to continue to expand, so that investors like you can profit from it — or like me, if I had the money. Telling stories of how future markets will evolve seems to be a crucial tool for realizing the goal of Sinovation Ventures. Stories like the ones in AI 2041, then, are not just meant to entertain or educate us. They are designed to directly shape the market so that certain investments are successful. These stories can become, if they work as intended by the authors, self-fulfilling prophecies. AI 2041 presents the expansion of machine learning, or artificial intelligence in general, as an inevitable fact. As a literary scholar, and somewhat repeating what Mia said, I'm very tempted to relate this rhetoric of inevitability to the rhetoric of socialist realism.
Socialist realism was the preferred aesthetic mode in socialist countries, and it consisted of representing reality not as it actually was but as it was becoming — as it was allegedly progressing toward socialism. The task of the socialist writer was to narrate the movement of history toward socialism, which would supposedly bring with it freedom, equality, and happiness for the great majority of humanity. As Mia mentioned, the late Mark Fisher coined the term capitalist realism, and we could perhaps call AI 2041 venture-capitalist realism, for it seemingly describes reality not as it actually is but as it is allegedly becoming, or will soon be; moreover, it claims that this brave new world lying ahead will bring enormous benefit to the whole of humanity. That is the socialist pitch in capitalist realism. As literary scholars we tend to worry about the complicities of literature with systems of power, but I cannot think of any other book that wants to be read as literature and at the same time is so directly instrumental to the expansion of a particular form of capitalism — the form that Shoshana Zuboff has called surveillance capitalism, which she defines as the new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales. AI 2041 presents us with various scenarios, and again, what is not in question here is the storytelling talent of Chen Qiufan, which really transpires from these stories; but the book tells us very, very little about AI's environmental and labor costs and about the consequences of all kinds of extraction. So, I beg your pardon, but I think this book has been very valuable for me, because it really made me aware of how we are caught in the system of surveillance capitalism, and of the need to read more and understand more about it.
One book that I'm still reading now is this one, which I would like to recommend to everyone: Kate Crawford's Atlas of AI. Let me read just one brief passage — sorry, I lost my place. Okay: "In this book, I argue that AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes for, AI systems are ultimately designed to serve existing dominant interests." In this sense, artificial intelligence is a registry of power, and it is these inequalities of power that we need to learn about. Even if AI 2041 doesn't talk about them, it is a stepping stone to reading more broadly around these issues. So thank you so much, and I'm really looking forward to the discussion with the people who are here in attendance.

Thank you so much, Paola, for your wonderful, stimulating remarks; they really bring different strands of critical investigation together. Time flies very fast, and we're now hitting three o'clock, so I'd like to open the floor to our audience. If you can type your questions in the Q&A box, I would really appreciate it. While we are waiting, I already have a comment from Emily Jin. Emily Jin is currently a PhD candidate at Yale University and is also the translator of AI 2041. It's quite a long comment, but it's really, really interesting.
For example, she says that at many Chinese science fiction events, one of the biggest questions that writers and translators get asked is: what makes Chinese science fiction Chinese? But AI 2041 provides a new response. According to Emily, science fiction — or rather, scientific fiction in this case — can be written by Chinese writers without having to be labeled as Chinese. Emily also points out the multiple boundary crossings, especially calling attention to the role of the translator: we not only have collaboration between the writer and the AI expert, but also collaboration between them and Emily. For instance, she elaborates that in the process of producing the translation there was a lot of back and forth between the writer, the translator, Kai-Fu Lee — who is perfectly bilingual — and the book's English editors. Most of those conversations moved across multiple registers: the Chinese language and the English language, tech jargon, rhetorical devices, and everyday speech, to make the concepts accessible to the layman. The production of this book itself merits close analysis, because it breaks down traditional roles in creative publishing, and the boundaries between writer, translator, and editor are blurred. There is no longer a single authority. I think this quite nicely echoes Paola's remarks on authorship. So who are the authors here? Do any panelists want to address this question, or have any comments or responses? Or if you have any other questions you want to raise to each other, please go ahead.

I just want to say that the issue of translation — the question of languages — is very remarkable in this case, because I think Kai-Fu Lee wrote directly in English, right? Whereas Chen Qiufan was writing in Chinese and being translated at the same time. And so I apologize.
I feel that we have to apologize to the translator for not mentioning that, because the translator's work is often the invisible labor that gets neglected. So my sincere apologies for that.

And indeed, the translator's name doesn't appear on the book cover. I was quite surprised, to be honest, because I also have a copy.

Yeah. So that's really bad, and I feel really bad about this erasure of the labor of the translator. At the same time, though, I want to be cautious about all this vocabulary that we constantly use, like "overcoming boundaries." In the end we also have to ask ourselves: what's good about overcoming boundaries? What's the ultimate goal? I still strongly believe that the ultimate goal of this work is advertisement — propagating and naturalizing a way of thinking that is already dominant. It's a narrative of the winners. In that sense, I want to be careful that the rhetoric we like to use in the humanities, of overcoming binaries and boundaries, doesn't keep us from asking ourselves: why is that good?

Well, if I could jump in there really quick — hi, Emily. I think that in many ways the fact that the work of the translators is hidden from this book, that it's not on the cover — and there were multiple translators for it; Emily, Andy, I don't remember everyone — the fact that this labor brought the product together and is indispensable to the final result is analogous in many ways to how AI itself functions to make human labor invisible. We see the final product at the end, which we attribute to Stan himself and Kai-Fu Lee, but there was this back-and-forth among the editors, the translators, and everyone. In the end, though, it's not really about the quality of the writing, which is spectacular as always.
I think no one here has any problems with the writing, the translation, the descriptions. It's what this book is trying to do, and how the positions of the two authors shape this incredible project. Because I really do think that so much of this is due to the fact that Stan is a prophet of scientific and social development, and Kai-Fu Lee is an expert in AI. I feel that if anyone else had written this, we would not be reacting to it the same way. This project is presented as fact — a scientific fact of development — and that's what so much of this controversy revolves around.

Yeah, I echo what Paola and Virginia just said; the translators should definitely be better acknowledged, Emily and all the others. But I also want to bring awareness to another kind of translation that maybe we haven't paid much attention to: the translation in Kai-Fu Lee's notes and analysis, which I find quite interesting. Stan mentioned that we can read the stories by themselves — we don't need to read the analysis or notes; we can separate the stories from the analysis. But when you actually separate the stories and afterwards go back and read the analysis, I find there is a very clear intention of translating some very serious issues, like the data exploitation addressed in the story, into something much less serious in the analysis — for example, "AI can be better trained," or the question becomes how we should monitor the use of technology rather than confront its abuses. This process of translating very serious, real dangers into something less serious reminds me of what the German sociologist Ulrich Beck calls "organized irresponsibility." It also echoes Virginia's comments on the neutrality of AI — how there is supposedly no wrongdoing by AI — and Paola's comments on the context, on a more complete context.
I think organized irresponsibility reveals two things that we often overlook. First, policymakers, large corporations, scientists, and other groups of influential people can form a very powerful alliance to translate some very serious issues into something much less serious. These issues were actually created by them in the first place, but the creation of this kind of risk discourse helps them abdicate their responsibility for creating the problems; they then claim these consequences are inevitable in the development of human society. I feel the involvement of creative writers and translators is somehow a bit unfair, because they commit a lot of their intelligence and work but are often unwittingly drawn into this process of organized irresponsibility. The second key message of the term is that this very same group of influential individuals and companies — basically the people in society who hold power — then claim they are able to offer solutions to these issues, to monitor and minimize all these risks. In the name of helping us tackle these issues, they stand in an even more powerful position to define what is right or wrong, what is amendable and what is inevitable. I think this comes back to the question of authorship. It's a very complicated process involving authors, translators, and many other people, and they can all be unwittingly drawn into this formation of organized irresponsibility — something we should keep in mind not only for this project but also for future collaborations between creative writers, translators, and everyone else. But I do feel it's very good that Chinese sci-fi writers write not only about China but also about the rest of the world; there is just some context we need to discuss further.
Okay, thank you, Mia. I think we have a question coming in, so I'm going to read it out. It's from a commercial writer, Ian Oe — oh, sorry if I pronounce your name incorrectly. "As a commercial writer, or a writer trying to survive mostly on writing myself, I would like to raise the issue of the marketability of literature, which determines a writer's career path. As a researcher as well, I'm curious what the panel thinks about the job of a writer who has to write palatable and saleable fiction in order to maintain a healthy career — which this book or project clearly offers an opportunity to do." Any of you?

Right, I can take that briefly, and maybe Mia, Virginia, and Paola can jump in later. First of all, I think I need to apologize that the translators' names didn't appear properly, because I tried quite a few times to have them recognized in an appropriate position — for example, on the cover, in the table of contents, et cetera. But obviously I don't hold the biggest fucking power in this project, so I have to admit that. Because I wrote in Chinese and was translated into English, and Kai-Fu wrote in English and was translated into Chinese, later on in different editions, both of us took advantage of all this labor from the translators, and I have to apologize for that. As for Yen's question: you make a very good point that it is indeed very difficult to sell books, especially literary books, not to mention translations in the English-language market, because, as everyone knows, it's a market totally dominated by English-language writers and books. Before The Three-Body Problem, nobody wanted to translate, read, or buy books from China in the genre of science fiction. So this is something we have to bear in mind, and I have to admit this project is pretty much commercial.
And you can see — I think Virginia and Paola made their points clearly and powerfully enough to convince me; as I mentioned earlier, maybe the change came too late. I was also changed during this process of collaboration, because all the issues you mentioned — in the storytelling, and between the storytelling and the tech analysis — do exist, for sure. So I think this book is a pretty representative symptom of the schizophrenia of capitalism, as Deleuze and Guattari put it. That said, this book helped me get much more exposure in the market, so from that perspective it helps me a lot to write whatever I would love to write in the future. And for sure, I totally agree with you on the non-neutrality of technologies, and that AI is not the ultimate solution for all the issues we're tapping into. But for this book, from the very beginning, its positioning was like this, right? Everyone had to accept it. I did what I could to keep the balance and play this sophisticated game around all these different human, social, and cultural issues. But there's still more to be explored, for sure. Actually, everything you mentioned is perfect material and a starting point for my next book, which will be a sequel to Waste Tide — there will be two more books, and I'm currently working on them. They actually tap into all the questions you just mentioned, like the decline of capitalism and the possible outcomes — but definitely not through AI. Maybe they'll tap into the view of spirituality and Gaia. It's a parallel project I've been doing: in the past two years I did a lot of field studies on minority peoples and shamanism. That's something not fully revealed in this book, AI 2041, but I'm keeping all this thinking and research for my next few books, because those belong to myself — they're all solo.
So I'm pretty happy you pointed this out; it's a good sign, keeping me aware, as a writer — as an independent writer — of how to stay sober and not be kidnapped by venture capital or any other capital. This is something every author may face, because it's so difficult as a writer, especially a science fiction writer, to make a living from writing. I think Yan might agree with me: we all need to find a way to balance what we really would love to write against what the market would love to read and buy. It's something everyone struggles with, but I'm pretty happy that today we've opened up to all this discussion and conversation. Thank you.

Thank you for your very candid response. It reminds me of some Chinese filmmakers: they might make one commercial film, earn enough money, and then realize their dream and make their art film. So I see the parallels there. Does anybody else want to respond, or have any questions for each other? Oh, we have another one coming in — great. This one is from Varsha Gupta: "Hi, this discussion today has been amazing. I'm currently a master's student studying international politics, and I had a question, since it's close to my dissertation topic. Could you talk more about how feminism will be explored in the sequels?" Chen Qiufan — Stanley — are you going to explore feminism in your sequels?

Yeah, for sure. I think I didn't do it well enough in the first one, because it was written ten years ago, actually. In the sequels there will still be a female protagonist, and I think girl power will be the dominant narrative in the forthcoming books. We're also going to talk about how gender issues appear in cyberspace, because right now we're talking about the metaverse, the crypto world, et cetera.
The problem is not solved there, virtually — it doesn't disappear naturally; maybe it will just transform into another appearance, one that is much more invisible, where the exploitation is invisible as well. Also, technology as a way of committing violence, especially slow violence, is very much a topic I want to discuss, because in the previous book we actually rewrote part of a violent scene, introducing technology as the medium, and it made me think: does that soften the violence, or strengthen it? That's a question we have to think about very cautiously, because right now we're experiencing this kind of extreme discourse and behavior across social media, and I think for everyone who has experienced those traumatizing attacks, they can be much more violent than physical harm. So that's something I would love to discuss in the next two books as well. Thank you.

I have a comment and then a question. One is — yeah, I really appreciated the question from the writer who asked about the market. I don't have any answer, but I just want to say I appreciate the struggle, and also I want to reflect on the fact that we are caught, somehow. I mean, we are communicating thanks to some aspects of artificial intelligence, and we're caught in the system; in a way, we complain that our author is not being critical enough, and then he says, don't worry, I'll be critical in the next one. I just want to register a kind of awkwardness in this, because we have a desire for critique — a desire for the critique of the system in which we live — and that desire is part of the system itself, and in the end the critique will be part of the system itself. So I just want to register that we are always circulating within a certain system, and it seems that the way to flourish in it is to alternate.
The way to flourish in this system is to be a little bit with the system and a little bit against the system, and then we are the most successful in it. So again, there's no way out; I just want to express my sense of a little bit of despair. But finally, I have one very practical question. If I'm not mistaken, the book AI 2041 was launched more or less simultaneously, or maybe first in English and then in Chinese. Is it true that it was published in Chinese only in non-simplified characters? Can you tell us a little about the publishing release process in Chinese, and whether there is a version in simplified characters? Obviously, that is part of the question.

Right — the traditional-Chinese edition is coming out in July and the English edition in both the US and UK markets in September, but there is no concrete plan for a simplified-Chinese edition yet. I can't go into much detail, but it touches on things that everyone is supposed to understand, especially those who have publication experience in mainland China. So the problem is not on my side. But we'll see, because there's nothing sensitive in the book, if you have read the content. It's pretty interesting: if ultimately the rest of the world can read the book except the simplified-Chinese market, that would be another piece of performance art, I have to say — making it more interesting, and I think there will be more study around it. I don't think that's going to happen, but we'll see, because next year there will be a lot of big events happening in China. But yeah, I agree, as Paola mentioned, that each author should stick to what they believe in and not compromise, whether in collaborations or in solo writing. Still, one thing I've learned from the Dao De Jing, from thousands of years ago, is that you have to be like water.
Water doesn't directly go against what its enemy stands for; it gradually reshapes whatever it flows around, changing the position in a much more silent way. That's what I believe in. Maybe you can call it a kind of compromise, but it's something I've learned from my previous experience in society. So, yeah — but still, agree to disagree. And I hope, next time, it can be something more powerful and more critical about what I really believe in.

Great, thank you so much for your answer. I'd like to follow up on Paola's observation — actually, I'm quite inspired by Virginia's review. Virginia mentions that maybe someday we wouldn't even need to trouble Stanley Chen himself: there will be an AI version of Stanley Chen, trained on his writing style, et cetera. Maybe that AI writer could present a social critique, and that would be even more despairing for us. Yeah. I think we have time for one more question, either from our audience members or from our panelists. One more question — the last one.

If no one is asking a question for now, maybe I'll raise one for Stan. I think we can all agree that our criticism and discussion today are not directed at you, because we all agree the stories are beautifully written and you've done your part in negotiating with a very powerful system. But in the meantime, I want to bring up the context — as Paola said, it's very complicated for contemporary Chinese writers. There are also the state's expectations: the Chinese government is trying to position Chinese sci-fi as a new cultural export. So you are also dealing with all these people from the state and from different kinds of state-owned agencies, and in mainland China you also have to deal with a wide variety of groups of people.
And I want to know how you think about this complex context, not only for yourself but for other Chinese sci-fi writers. How do you position yourself, and how do you deal with this intricacy? Thank you.

Thank you. So, I'm not that kind of pure writer — I'm more of a complex. I used to work for tech giants, and I have multiple identities. What I've learned from maintaining all these different identities is that you have to understand what people want to listen to, so you should adapt to the language they are used to accepting. That's the most effective way to have the conversation and to achieve whatever goal you want to achieve. All the different parties here — the government, the private sector, the universities, the mass audience — have different expectations of science fiction, I have to say, and each of their expectations falls somewhere in the super-broad spectrum of how science fiction can be defined and can function as a genre. So it definitely requires a lot of effort and wisdom to keep the balance, because as a writer you always have something to say from the very bottom of your heart, something you totally believe in, but meanwhile you can't one hundred percent represent what you truly want to write, right? For example, in the States there is so much political correctness that you can't write as freely as you want. This is something that exists pretty much all around the world. So we have to learn this kind of grassroots wisdom from street life: you have to be adaptive, and you have to move forward step by step, very cautiously. But you have to keep that roadmap in mind, because you know there is an ultimate goal: to deliver the message you want to say. That's something you don't want to forget.
To me it's like being a missionary: you have to tolerate and sacrifice a lot, even being yourself. That's something pretty different from those who say, "I just write what I want to say; I'm a true writer." I'm not the kind of person who claims to be a true writer — I think I'm not. But maybe I can stand longer, because I can witness and observe what's going on.

Okay, thank you so much. Paola, do you want to make a quick response?

Well, yeah. I find it difficult to imagine us as completely separate from the context — this notion of me and the world — I think in very different terms from what you've just expressed. But I totally respect what you said, and let me ask you a last question, if you want to answer it; we only have two minutes. Most fundamentally, what do you hope, at the bottom of your heart, to achieve through science fiction?

I think it's about how we need to coexist with others — not only other cultures and other nations, but other species. For that, we have to get rid of our ego. I think that's the only exit for human beings, if we as a civilization are to maintain our existence on this planet. That's the ultimate message I would love to deliver.

Wonderful, thank you so much. We're right on time. Thank you again to all the panelists for your wonderful participation, and thank you to all our audience members for being here on a Friday afternoon and giving us your precious ninety minutes. Thank you all. Okay, bye.