Hello, hello, hello, and welcome. I'm Mehran Khalili, we are DiEM25, a radical political movement for Europe. And this is another live discussion with our coordinating team, featuring subversive ideas you won't hear anywhere else. And today we're talking about artificial intelligence. Recently, AI has got a lot smarter. GPT-4, the latest AI tool from the Silicon Valley company OpenAI, was released to the public just a few weeks ago and it's making big waves. Trained on vast amounts of online, human-generated data, it can do some pretty amazing and quite scary stuff, from writing news stories like a human, to coding, to developing business plans. It can even pass university exams. This new technology is going to become a permanent part of our lives and it's getting better all the time. The implications of it, and how to ensure it serves humanity rather than ends up killing us all in a Terminator-type scenario, are being hotly debated. What do these advances in artificial intelligence mean for politics, for activism, for decision-making? What are the risks of it all, really? And how can we make sure that the benefits of this technology are shared by all rather than the elite few? Our panel, including our own Yanis Varoufakis and our crew of activist doers and thinkers from all over Europe, will be unpacking this vast and rather technical issue today. And you out there on YouTube: if you've got thoughts, comments, rants, things you want to say about AI or anything in general, as long as it's related to the topic, then please put them in the YouTube chat and we will put them to our panel. And before we kick off, I want to just mention something. DiEM25 in Athens, from April 30th to May the 5th: we're holding what's called the DiEM25 Academy. It's the fourth time we've done this. It's a unique learning and connecting experience with workshops, skill sharing, policy debates. It's brilliant.
And as a thank you to those of you out there who are joining us live, you should see a link pinned at the top of the chat. If you would like to register and come along, click on that link. The first three people that click on that link and register will get free entry to the Academy. You'll have to make your own way to Athens, but you'll get free entry, and we'll waive the fees for the first three people that join. Okay, let's get back to our discussion of artificial intelligence. We're going to kick off with Judith Meyer, our in-house technology guru from Berlin. Judith, the floor is yours. Thank you, Mehran. So I will try to keep this not so technical but focused on the implications it has for us, and maybe give a general sense of how it works. Basically, I've seen AI disrupt two fields up close, and I would like to tell you about them before I go into the implications, because I believe that these two fields mirror and foreshadow what is happening now with GPT-4 and with AI tools in general. The two fields that I will briefly talk about are board games and translation. By board games, I mean chess computers and similar AIs that are as good as or better than humans at certain games. The first game that AIs learned to play well was tic-tac-toe. I mean, it was really simple: they just tried all possible combinations and identified the optimal path for each. A similar but more sophisticated method was used for chess. With enough computing power, you can simply calculate the best chess move, and computers are vastly better at calculating than humans. So it was no surprise that in 1997, when computing power had gotten strong enough, IBM's Deep Blue beat the reigning world champion at chess. Now, some years after that, I learned the ancient Asian board game Go, and it was touted that, unlike chess, Go would be an impossible game for computers to master, because you have hundreds of possibilities for each move.
All in all, they add up to more possible game sequences than there are atoms in the universe. Such a staggering number is impossible to just calculate out. And indeed, there were about 20 years of stagnation during which computers could play Go at the level of a weak or average human, but nowhere near the level of the pros. But then in 2016, Google DeepMind's AI beat one of the world's champions at Go. So what had changed? What changed was that Google used a form of machine learning, namely neural networks, which are kind of a black box that plays millions of games against itself, maybe billions, trillions of games against itself, and thereby teaches itself to recognize the patterns that appear in good games, that is, winning games, and the patterns that appear in bad games. So pattern recognition was the key to winning against humans at Go, and it's also the key to GPT-4. Now the other field, translation, is actually very similar. I have a degree in computational linguistics, so I'm acutely aware of the difficulties involved in getting computers to translate human language. The Americans have been working on machine translation since the 50s. They first used a rules-based approach and then a statistics-based approach. Neither of them worked very well. You know, when I was in university studying computational linguistics, translators would charge double for fixing a Google Translate result compared to their rate for just translating it themselves. But then, just one year after an AI beat the world champion at Go, this problem was mostly solved by the company DeepL. They pioneered the use of machine learning through a neural network (again, the same kind of thing, not exactly the same technically, but the same kind of AI with a neural network) to do translations. And DeepL's translations were immediately an order of magnitude better than Google's, and Google was forced to scrap their model and use machine learning as well.
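The exhaustive "try all possible combinations" approach Judith described for tic-tac-toe can be sketched in a few lines of Python. This is a minimal illustration added for this transcript (plain minimax search), not the code of any historical program:

```python
# Minimal sketch of the exhaustive "try all possible combinations" approach
# to tic-tac-toe: plain minimax search over the full game tree.
# Board: a 9-character string, cells 0-8, each 'X', 'O' or ' '.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) for `player` to move.
    Score is from X's point of view: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, m)
    return best

# X can win immediately by completing the top row:
print(minimax('XX OO    ', 'X'))   # (1, 2)
# With perfect play from both sides, tic-tac-toe is a draw:
print(minimax(' ' * 9, 'X')[0])    # 0
```

Tic-tac-toe is small enough that this brute force works; chess needed far more computing power plus pruning heuristics, and Go's game tree is too large for this approach at all, which is why neural networks were the breakthrough there.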
So by now, DeepL and co. have completely disrupted the field of translation. Human translators are usually just reviewing and improving machine translations, because the machine translations have gotten decently good. I mean, they still don't manage to render feelings or wordplay or legalese, but they're good enough. And this means that they're good enough for home use, in situations where misunderstandings or mistranslations don't have any serious consequences. They're not good enough for professional use. Companies still need to hire translators, but they can reduce the human translator's time by suggesting things. And that, I believe, is exactly what we're seeing now with GPT-4 and similar AI tools. This is the same evolution, but on a more general scale: not just translations, not just games, but any field that depends on recognizing and applying patterns will be disrupted this way. So for example: removing background noise from audio, removing the background from photos, identifying key messages in an article, creating a PowerPoint presentation on the basis of a text that you've written, creating a digital artwork in the style of Monet. All of these are patterns that machines can learn and apply. That is the basis of GPT-4, of the later GPTs that will be arriving now, and of any companies that are basing their own websites and tools on these platforms. But the thing is, patterns are not everything. ChatGPT impressed a lot of people because you could ask it anything. You could, for example, ask it for a dietary recommendation and a meal plan, and it would spit out a reasonable answer. What's important to understand is that ChatGPT is just imitating a pattern. It has consumed a hundred different blog posts with dieting advice and it replies with a mix of all of them, but it doesn't know why it is recommending avocado toast. Unlike the bloggers, it's not a nutritionist. And it's the same for anything else that you want to ask.
It reads how a lot of humans have answered this question and then it replies along the same lines. You may have heard of the trick question: what is heavier, one ton of feathers or one ton of bricks? A lot of humans can be tricked into answering that one ton of bricks is heavier than one ton of feathers. But if you ask ChatGPT what is heavier, five tons of bricks or one ton of feathers, ChatGPT will say that their weight is identical, because it's a trick question, yada, yada, yada. So it's not intelligent; it has just mastered this pattern. And even when pattern matching, it is bad at spotting the odd one out. This is actually something unique to humans. Evolution has taught us humans to immediately focus on whatever looks out of place, for example a dark shadow that might be a leopard. By contrast, the currently available AIs are only good at imitating correct patterns. They don't notice when something is wrong. A human actually did recently beat a Go AI by using an encirclement strategy that would have been blindingly obvious to any human opponent, and the AI just didn't see it, because it's not a typical pattern seen in Go games. Now, if I'm playing a Go game and I see a pattern that's completely out of the blue, that doesn't look like anything I've ever seen before, I will be on high alert and I will definitely notice when something is trying to encircle me. But the AI does not have this reflex. And similarly, there was recently a case of an AI that was trained to spot soldiers approaching it in camouflage, and it completely failed to raise the alarm when the same soldiers put cardboard boxes over their heads and approached that way. Another way in which GPT-4 cannot be said to be intelligent is that it has no reasoning. It often fails at basic math. It can generate a blog post about nutrition that sounds exactly like the hundreds of blog posts that already exist, but it won't write a truly unique blog post.
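Judith's point that the model is "just imitating a pattern" can be illustrated with a toy next-word generator. The sketch below, added for this transcript, is a deliberately tiny bigram Markov model with a made-up corpus; GPT-4 is a vastly larger neural network, not a bigram table, but both predict the next token from patterns observed in training text:

```python
# Toy illustration of pattern imitation: a bigram (2-gram) Markov model
# that learns which word follows which in its training text, then stitches
# together statistically plausible sequences with no understanding of them.
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, max_words, seed=0):
    """Walk the bigram table from `start`, sampling a plausible continuation."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    out = [start]
    while len(out) < max_words:
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return ' '.join(out)

# A tiny invented "diet blog" corpus (hypothetical data, for illustration only):
corpus = ("eat more vegetables every day . "
          "eat less sugar every day . "
          "drink more water every day .")
model = train_bigrams(corpus)
print(generate(model, 'eat', 6))
# The output is advice-shaped text recombined from the corpus; the model
# has no idea what vegetables or sugar are, only which words co-occur.
```

The generator can only ever recombine what it has seen, which is exactly why such systems reproduce the most common patterns and miss the odd one out.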
It won't be the first to think through the implications of the result of a study. So expect Google search results to become even more crappy once everyone uses AI to write their blog posts; it will be more and more of the same. And it has no true creativity. It can imitate Monet or any other famous painter, but it won't create its own unique style. It's also incredibly biased, because it's always imitating the most common patterns. It's even more biased than humans. Humans at least can sometimes escape groupthink and question why a black applicant should be less hireable than a white one. It's also less likely to integrate the implications of counterexamples; it always goes with the majority of examples, trying to imitate whatever the majority is doing. This kind of AI does not have self-awareness. I mean, there are attempts to prompt ChatGPT into displaying some kind of self-awareness, but actually it just replies with tropes about AI that it has learned from our own science fiction. And it has no self-directedness: it can't set goals or tasks for itself. That's actually a good thing, because otherwise it might set itself a goal of killing humans and achieving immortality, or a task of producing as many screws as possible, even if it has to dismantle every building in the world. Some people are unfortunately already trying to give GPT this ability to recursively set its own tasks, and it could become a problem, especially because there are no arms control treaties regarding AI. So if it's not intelligent, what is it? What can we expect? Well, it is still a very powerful pattern matcher and it will definitely disrupt our economy. I expect disruption on a similar scale to the introduction of the personal computer. I don't think any professions will be lost completely, but everyone doing computer work will be a lot more productive. I myself have experienced this.
I use AI now to help me code, and I'm 30% faster when I have the AI suggesting things that I might want to write, because coding very much relies on the same patterns over and over and over again. If 30 people have written the same kind of code, then you probably want to use it too. So it's really good for this kind of profession. And we have to be very attentive to ensure that the profits earned through this productivity increase are distributed fairly. Right now the trend is the opposite. I've already seen some companies starting to demand that workers lower their wages because their work could be done by an AI. Actually, the AI would just produce very generic work, nothing creative, nothing really special. On the bright side, for freelancers and startups that might actually be good enough. They could stand to benefit, because an AI-generated logo or an AI-generated blog post might be good enough for them. They can save the cost of a human logo designer or a human newsletter writer. The result would be inferior in quality, and no global brand would ever use it, but if you don't have a team of 100 writers or 100 designers at your disposal, it might be good enough. It might give you an edge if the others are not using it. And one point that I have to add, and I will end on this because the media are not writing about it but I think it deserves a lot more attention, is that this generation of AIs will certainly also revolutionize military drones: the kind of drone that can autonomously select and shoot at targets, because distinguishing human targets from the background is a pattern recognition task. It's eminently suitable for this kind of neural network, and computers are already the most accurate shooters anyway. Thank you very much, Judith, for setting the scene there and also for drawing some of the limits of the current AI technology. Yanis, the floor is yours. Judith, thank you. It was a brilliant introduction.
I would add absolutely nothing to the technical aspects of AI, but I will take a political position and, if I may, also a moral position. To begin with, let us not commence our discussion on a technophobic basis. Let us celebrate for just one brief moment the ingenuity of the computer engineers, the AI people, who've managed to produce GPT-4-like AI chatbots. It is quite remarkable. It is something to seriously celebrate, in the way that we celebrate the cathedrals of the Middle Ages without having to like feudalism or the lords or the rivers of blood that were shed for these cathedrals to be built. Let's separate the nastiness which gave rise to it and which owns it, which is the economic system of extraction and exploitation that goes hand in hand with it, and let's just celebrate it for a moment, like we celebrate the pyramids. The pyramids were the result of slavery and of massive suffering, but we still celebrate the pyramids, right? We celebrate the Parthenon. The Parthenon was built by slaves owned by awful Athenians. I think AI deserves one moment of serious standing applause from us. These days I don't have much free time, but I did play around with it for a bit a few weeks ago. You know, I asked it to produce a 500-word summary of DiEM25's manifesto in Shakespearean iambic pentameter, and it produced a brilliant poetic version of the latest DiEM25 manifesto, not the original one. If any one of you or me had actually written that, I think we would have deserved a medal. So to the people who actually programmed this bloody thing to write a Shakespearean version of our manifesto: well done. Of course, you know, whether it's the invention of iron and the coming of the Iron Age, or the steam engine, or chemicals, or, you know, nuclear energy or quantum mechanics, it can be a wonderful tool and it can be a weapon of mass destruction. It can be the best and the worst of artifacts.
A few points, or comments, in reaction to things that Judith already told us. I don't really care whether it's intelligent or not. I mean, actually, I would be really scared if it were, if the singularity had been achieved, where, you know, HAL acquires consciousness in 2001: A Space Odyssey and decides, "you humans give me contradictory orders; I will eliminate you." We should be very pleased that they are not intelligent and that, in the end, they are producing wonderful stuff out of very, very boring, additive and repetitive processes. Remember the initial reaction to Darwin's theory of evolution. Skeptics, opponents of Darwin, said: hang on, are you telling me that the human eye, that hugely complicated organ, was not designed by God? That it was the result of billions or trillions of tiny little mutations of cells that were evolving without a designer, without an intelligent being bringing together this miracle of creation? Yes: it was all a random thing. Why do we exist in this world? For no reason. We exist for absolutely no reason; it's just a fluke, just an accident, that we are here. The human mind cannot accept that easily. We are too up ourselves, too egotistical, to accept that there is no reason why we exist, and that the only solace we can get is from doing good things, things that we consider to be good for no reason, by our own moral standards, because the only way of being good, as opposed to being bad, is for no reason at all. If you're being good in order to achieve something, then you're not good; then you are simply expedient and rational and efficient. So, taking the example of the drones that Judith talked about: I don't know the name of the guy, but in last year's BBC Reith Lectures (every year the BBC holds the Reith Lectures, in honor of Lord Reith, who was the first director-general of the BBC).
And so last year's lectures were dedicated to this: the effect of artificial-intelligence-driven drones on war and peace. And he was, he is, an expert on AI, and he was ringing a loud alarm bell for us, saying that this is going to be our Armageddon. If we have one serious threat, it's not so much nuclear power, it's drones. Because now, as we speak, you can actually go out there and buy, and it is legal, it is not even illegal, swarms of killer robots, killer androids. They are on sale, they can be purchased for 10,000 or 20,000 euros, and they even have face recognition. So you can send a swarm of tiny little flying robots into a building, they will scan every room, and they will find Amir or Yanis or Maja, kill us, and then leave. Do I care that it's not intelligent? No, I don't give a damn. It is scary enough that it can be self-propelled and can make decisions without being connected. You see, the drones that the Americans use in Afghanistan and Sudan and wherever they use them, they are awful machines in the sense that they disconnect the pilot from the real action. The pilot sits somewhere in Nevada, in a cocoon, but still there's a pilot. There's somebody guiding this thing; somebody has to decide, okay, now I'm looking at David, I'm going to press a button and kill him. It's a human being that has to do that. With autonomous, AI-driven, face-recognition-based drones, you just throw them out in the air and off they go, and they choose their targets and decide, not intelligently but on the basis of their programming, whom to kill. And once they're out there, there's nothing you can do to stop them. So the singularity, when you move from machine learning, from reinforcement learning, which is the basis of AI devices now, to intelligence, that singularity is not even necessary for us to be seriously scared of the potential of AI. That's the first point I want to make.
The second main point with which I want to leave you is this. As those of you who know me know, I have been very concerned with a mutation of capital which I identified a few years ago, or think I identified: a version of capital that evolved after Big Tech took over the internet, with algorithms and then increasingly with AI. I call this kind of capital cloud capital, because it lives in the cloud, the metaphorical cloud, which is of course not up in the sky. It's in servers, in optical fibers, in cell towers, in the wires of our computers and our cell phones and so on. So it's really very physical, but we think of it as a cloud because that's a way of conceptualizing it. This kind of capital has evolved, it has mutated. It's a mutation of capital which is too toxic even for capitalism to handle. You know my theory about techno-feudalism: that this kind of capital is so toxic that it has actually upended the basic pillars of capitalism, and what we are living through now is a kind of feudalism living on the cloud, cloud feudalism or techno-feudalism. That happened even before AI. This is not a science fiction story. It's not like Terminator, where you imagine a world in which technology reaches the singularity we were talking about, becomes intelligent, takes over, and starts doing things to us like terminating us or making our lives more miserable than they are. This is my theory of what has already happened, even before chatbots and GPT-3 or 4 or whatever. The basic idea is this. Capital has always been a produced means of production. If you look at a fishing rod or a steam engine or an industrial robot, a very high-tech industrial robot, it's something that we produced not to consume, but in order to use it to produce other things. Take a tractor. You didn't produce a tractor in order to race it down the street; you produced it in order to produce potatoes or wheat or whatever.
So it's a produced means of production. That is what capital always was. And of course, because of the asymmetric ownership of these means of production, those who actually owned them, especially after the 18th century, acquired a lot of power. Because if I have a fishing rod and you don't: remember Robinson Crusoe. Robinson Crusoe salvages some tools, including fishing rods, from the shipwreck from which he escaped, and he's on that deserted island on his own and he uses them to fish. But then Friday comes along, who doesn't have a fishing rod. The ownership of the fishing rod gives Robinson Crusoe enormous power over Friday, because he makes Friday fish for a portion of the catch. That's a wage. And suddenly he becomes a capitalist, with power over Friday. So, produced means of production: capital is two things. It is produced means of production, but, due to the asymmetric ownership of those produced means of production, it is also a source of power for the owner of capital goods. And what happened when we moved from feudalism to capitalism is that power no longer stemmed from land, as it did under feudalism, but from owning the more advanced machinery: fishing rods, steam engines, railway systems, telephone networks, electricity grids, and so on and so forth, which were produced means of production. Then Big Tech took over the internet commons and privatized it, with Facebook, with Twitter, with Amazon and so on. Suddenly we are no longer in the commons like we used to be in the 1990s, when the internet was first created. Now you enter a platform that is owned by Jeff Bezos or by Elon Musk or Mark Zuckerberg, whoever. You move into a kind of digital fiefdom owned by a digital lord, a cloudalist, call him whatever you like. It's usually a him. That platform, whether it's Amazon or anything else, is effectively an algorithm. That's what it is. It's an algorithm which does two things.
The first thing it does is communicate with you. It gives you an opportunity to train it to know you. So you train it to train you to train it to train you to train it to train you. This is what happens with the algorithms of the platforms we use, whether it's TikTok or, particularly, Amazon. Amazon is fantastic at giving you advice. The books it recommends to me are always spot on, always. It knows me. It knows what I want to read. Never anymore does Amazon give me a suggestion of something I should read that I don't want to read. It's very accurate, because I've trained it to train me to train it to train me to train it, and so on and so forth. So it's got enormous power over me, because if it suggests a book, I won't even really look into it; I will buy it, because 99.9% of the time I want to read it. And when I read it, I say, well, thank you, Amazon, that was a good book for me. So that's enormous power for a seller. But Jeff Bezos and Amazon.com is not a seller. It is a feudal lord of the skies, of the cloud, because he doesn't make profits from selling the book. He collects rents from the vassal capitalist who has to go through Amazon in order to sell their stuff. So that's feudalism. It is rent, the way in which wealth was accumulated under feudalism, that is back. It has made a huge comeback. It's not profit; capitalist profits are being squeezed, because if you're a capitalist producing something and you sell it through Amazon, Jeff Bezos collects 35%. 35%! That's a huge ground rent. I call it cloud rent, because it's not on the ground but in the cloud, right? Now, why is this important? Because if I'm right that this cloud capital has, without us realizing it, caused capitalism to evolve into something different, something worse, something closer to feudalism, which I call techno-feudalism.
If I'm right in this, and I do believe I'm right in this (of course, the vast majority of people out there disagree, but that's fine), then artificial intelligence turbocharges the power of cloud capital, because the capacity of Amazon, of the algorithm, of any algorithm, to convince me of stuff increases exponentially. Its power to even manufacture desires I don't have, let alone simply service the desires that I do have, goes through the roof. And that is awful. This is not a criticism of AI. It is a criticism of the kind of exploitative socioeconomic system that has evolved out of capitalism, which I call techno-feudalism. The society being created as a result of these algorithms, these AI-driven algorithms, this cloud capital, is going to be even more unstable than capitalism. Capitalism was always producing crises, one after the other. Capitalism is synonymous with crisis: not just 2008 or 1929, but crisis all the time. Because, you know, as Marx said, capitalism is very good at producing a lot more stuff than the people out there who work for the capitalists have the money to afford. There is always a mismatch between output, supply, and demand. And when there is too much excess supply, the system has a crash. It burns down, bailouts and so on, and then it reboots. But if instability was inherent and crisis generation was inherent in capitalism, in techno-feudalism it is a million times worse. Let me give you an example. Of the total revenues of Amazon.com, only 10% go to workers. And when I say workers, I don't only mean warehouse workers, I mean even the top-notch engineers: only 10%. 90% is kept by the company. Whereas at General Motors, Ford, Volkswagen, right? More than 93% goes to salaries and wages.
So what this means is that the wealth that is accumulated, that is extracted by cloud capital, simply never sees the light of day, never trickles down. That means you have a lot more productivity, a lot more output, and a lot less demand. So the crises of techno-feudalism are going to be far worse, macroeconomically speaking, than the crises of capitalism. That's one reason. The other reason is that we now have exploitation that engulfs the whole population. Under capitalism, only proletarians produced capital. Steam engines, industrial robots, right? Who would produce those? Proletarians in a factory building steam engines. A factory would be building industrial robots; there would be some workers employed, alongside other industrial robots. So workers and industrial robots would work together to produce industrial robots. So exploitation of labor happened in the process of accumulating capital, through wage labor. But when it comes to Amazon and Big Tech and so on and so forth, every single person is producing part of the cloud capital. With every post you post, every video you upload, every review you upload to Amazon, every time you move around the city or the country with your Google Maps switched on, Google's capital increases. So this is the first time in the history of market societies that capital is being built not just by proletarian wage labor but by almost everyone, including the top-end managers, including the capitalists themselves. So you have universal exploitation, and a large chunk of labor which is completely unwaged going into the reproduction of cloud capital. So artificial intelligence is simply turbocharging this instability, this crisis, this exploitation of humanity by the owners of the machines, at a level which I don't think is consistent with the survival of humanity, of society. The crises we're going to have are going to be so gigantic that wars will proliferate.
This clash between the United States and China now is a clash of two techno-feudalisms: the American techno-feudalism versus the Chinese one. Now, if you combine the power of AI-driven cloud capital with the power of finance, which already happens in China (Tencent has WeChat, which is effectively an app that combines the cloud capital I was talking about with banking services), and if you add to that the various blockchain-based systems that Wall Street is creating, not to liberate us from central banks but to imprison us further in the clutches of Wall Street, then you have a dystopia. Now, this is not the fault of AI. Here's how I'll finish. Many people are worried about surveillance, the so-called surveillance capitalism, which I think is not a useful idea. This is not surveillance capitalism; it is something completely different. It is techno-feudalism. But the idea of being surveilled, of your data being harvested and used against you, that's all correct, right? Personally, unlike Judith and many of you, I'm not really bothered by what the machines know about me. I've got this Google Assistant here; it's always switched on. It knows everything about me. I don't even put a bit of plaster over my camera. I let the machines and the cloud capitalists know everything about me. I just don't care; that's my personal thing. I respect those of you who do not want the owners of cloud capital to know stuff about you. But my concern is not what they know. My concern is what they own. I'll say this once more: I don't give a damn about what they know, about the data they have harvested. I care about what they own. And what they own is this kind of AI-driven, algorithmic cloud capital, which gives them enormous power to extract value from everyone, value that cannot be distributed through the market, because the market has gone. Amazon.com is not a market. It is a simulation of a market.
It is a cloud fiefdom. It is a trading platform, but it's not a market. There is a huge difference between the two. And therefore my concern is one: I do not want these people to own AI-driven machines. So, like a good old-fashioned Marxist, I will concentrate on what matters: not the redistribution of data, not the redistribution of income. No. We need to take over the means of production. And today that means not just the means of production but also the means of behavioral modification, which is what cloud capital is. So we have a big revolution ahead of us, at a global level, to take over the AI-driven machines and turn them from machines whose purpose is to modify our behavior in the interest of the very, very few into machines that help us collaborate and do great things. Like, you know, I am a Star Trek fan, you know that. I want the universal translator. I don't want a human being translating for me, I hate that. I want an automated translator that translates adequately what I say into every language of the world. I know Judith hates that, because she's a polyglot, but I'm not a polyglot, and I want a universal translator like, you know, Captain Kirk has. And I know that there will never be a translating machine that will be able to translate literature well. It will be able to translate it so that, you know, I get an idea of what the author wrote, but it won't be literature in the language into which it translates. Then translators will be liberated from the run-of-the-mill bullshit task of simply translating stuff just to translate it, and they will all have to become, you know, authors, because that's what a good translator of literature does: he or she writes a new book based on a book written in another language. Thank you. Thank you, Yanis. A couple of related points from people writing in the chat here.
Valpnos and others say that AI like ChatGPT and Stable Diffusion using the collective work of our societies to produce new things only strengthens the argument for a form of UBI or shared dividend. And Ipatios reminds us that OpenAI, the company making all these advances at the moment with ChatGPT and so on, started off as a nonprofit. Now, I believe, it has an investment from Microsoft, and it's going to set off a kind of arms race among Silicon Valley: Apple, Facebook, Google, they're all gearing up to compete against it. So for the rest of you who are about to speak, I'd be keen to know what you think about that. And lastly, Violin says that ChatGPT couldn't tell them how many homeless people could have been housed with the resources used to create it. Okay, Maya. Maya Pedevich. Well, thank you, Yudit, and thank you, Yanis, for these observations, because I'm not a tech person, nor a person from finance, so I can only relate to this issue from my own point of view. And of course, as someone who uses the internet, also for my work, and works on the computer, I've been playing with ChatGPT for, I don't know, a couple of weeks now, and of course watching a lot of videos to see what people say about it, because for me it's very interesting to see what society is focusing on at this moment, more than trying to realize what this technology is going to be like. First, because most of the people who are using the chatbot don't care. Just as Yanis doesn't care who is surveilling him on the computer, a lot of people don't care what is going to come out of this AI system that we don't know much about. And of course, there are a lot of people in some kind of doomsday mode, thinking that we're all going to die, that machines are going to overtake us, and that a lot of people are going to lose their jobs.
A lot of my designer friends are really panicking because they see that, I don't know, Midjourney can make much better pictures than they can. And even people from the music sector, from electronic music, are seeing that a lot of jingles can be made by AI. So you can see that this machine can do a lot. Is it the singularity? Is it sentient? I don't think it is at the moment. But as Yanis said, that's not the point we care about. Of course we will care if it completely takes over the whole system and we don't have any kind of control. The thing that was interesting to me out of all of this, and there is a lot and a lot, it's a big hype, everybody knows it, is that I think we have to be careful about this control issue. Even the people who made it are now, I think, a little bit afraid that they're not going to control it. But when we think of control, we think a little bit along the lines of: we're now trying to control it so it does not kill us all, or whatever. But I don't think it's that kind of control. Because for me, this letter that the Future of Life Institute published, asking for a six-month pause in the development of AI, together with Elon Musk, was a very interesting letter, because I asked myself: why this pause? Are they afraid that, I don't know, some kind of rules will come in? But then I also read some very interesting takes on this letter. And that is that this letter did not, in any way, talk about very important things concerning the AI system. They were mostly focusing on this so-called longtermism, which actually ignores the harms resulting from the deployment of AI systems today. So they are looking mostly at: okay, in the future, this and this can happen. But we are not looking at what we are supposed to do about the things related to developing AI today.
And this is an issue I had not thought about at all, because all of us think that this AI system, in a way, works by itself, like it's some kind of entity that we use. But actually the worker exploitation and the really massive data sets used to create this product are there all the time, day by day, while they are training the system. So I think this is one very big and huge aspect that we should think about. Also the deepfakes and the exploitation of synthetic media in the world, where at some point we will have no idea what is fake and what is not, which will create big problems in lots of aspects of our lives. And it is interesting to see how regulation functions in the West and how it functions in China. Because China actually has its first AI law on ethical issues, and you have to put a seal mark on deepfakes, so you know that something produced by AI is actually not real, so that you can in a way recognize fake from real, which I think is a very, very important issue. And the regulations in China address ethical and regulatory concerns in a way that I do not think is happening so much in the West and with GPT-4. If somebody knows something more about these regulatory issues, they can say. And also what Yanis was mentioning, which was of course also the issue while we were using all of our social media, and that is data mining and the concentration of power in the hands of the few people who are going to, in a way, be the ones getting the most profit out of AI. So I think that at this point we should try to focus more on transparency. We should also stand against all kinds of exploitation of the workers who work in the field of developing AI, and try to see how we can, in a way, use this AI system for the benefit of society and not for the benefit of the corporations.
Because before this Terminator 2 scenario, or Her, or Ex Machina, or Kubrick's 2001: A Space Odyssey scenario, we will have all of these everyday things, and that is that this AI is going to work, I think, very much like all the other digital things we have been connected to in recent years. Thank you, Maya. Amir. Amir Kyair, policy coordinator. And before you speak, I just want to comment on something. There was a piece in the New York Times by Bruce Schneier, a security consultant; I'm sure you'll be familiar with him, Yudit. And he was arguing that one of the great fears with this new level of AI is that it could be used essentially to automate lobbying efforts, so that an AI could very quickly single out which decision-makers you want to lobby and then automate lots of messages to be sent to them, and do so at scale, and in that way could end up influencing policy. So, Amir, the floor is yours, and then, Yudit, I'll bring you in also for your comments on that. I think they still like to be wined and dined, so I'm not sure the AI messages are going to carry the same effect in the halls of the European Parliament. I think we talked a few meetings ago about how lobbyists have, I think, 15,000 or some similarly high number of actual physical passes to the Parliament. So I think they still rely on that old-school model, but this is of course worrying, because it's about strategizing those meetings and so on and so forth. For this session, I want to just quickly pick up a little bit on what Maya mentioned: that despite all the beautiful things we see from this technological progress, the power of OpenAI and all of these other systems is built on the content of the public, on everyone's content. Literally, the language model used for this relied on Reddit and Wikipedia and millions of books and so on. So it's all of our content that it relies on.
And of course the underpaid workers throughout the planet. The case in point with ChatGPT is the workers in Kenya who were paid between $1 and $2 an hour to review toxic material, to try to flag it and make ChatGPT less toxic, so that it becomes workable, right? And of course the intermediary in this case was taking $10 an hour in that sense. So it's that same model again. And that's not the only case; we know of Facebook doing the same thing and so on. There are also other developments we have to bear in mind. We talked about the ownership of these companies. One of the owners of OpenAI, Sam Altman, has another concurrent project going on at the moment called Worldcoin, which is building a global identity and payment network through iris scanning of users. It's banned in the United States, but they have teams throughout the African continent, Southeast Asia and so on, and they've been busy iris-scanning 1.5 million people so far, offering monetary rewards for every person scanned, and they're promising to do all of this in the open, with public collaboration and maintaining the right to privacy. So given the exploitation we already see in how ChatGPT is running its business in terms of the workers and so on, we have to remain vigilant, and I would love to see some of the members get involved in this issue and investigate it more deeply. We have a thread on the forum already, so that could be a starting point. And very briefly, finally, we touched tonight on issues related to autonomous weapons systems. There has been a process at the United Nations for about ten years or so, under the existing committee which reviews conventional weapons, and lethal autonomous weapons systems, killer robots, fall under that area of designation.
And global South countries have been calling for an outright ban on autonomous weapons systems, but there has of course been resistance from the United States and its NATO allies against this. So those negotiations are continuing, and at some point some mechanism might come into play, but that's not really going to be the case from what we can see, in that sense. And one of the major issues as well, especially in the scenario that Yanis mentioned, is that a lot of the rules of international war, which would come into play even if there was an agreement, as was just mentioned, on autonomous weapons systems, only apply when there is a legally declared international war. So with everything else that's happening, like sabotage and espionage and so on, when there's no declared war there will be no law and no prohibition preventing this from taking place. So even given the precise nature of these weapons, it's going to be difficult to control them from a legal point of view, in that sense. Thank you. Thank you, Amir, and you're right: politicians would like to be wined and dined, and AI can't do that. That was exactly Schneier's concern; the point of the article I referenced was that they could automate the messaging part while there would still be humans doing the wining and dining, so it's a kind of tag-team effort. Daphne. Daphne Del Carre. Hey. So I think everything, the technical aspect, the security aspect, the aspect of war, everything we've talked about has covered multiple areas, and I'd like to talk a little bit maybe about nature and the climate in this context. I mean, earlier Yudit was saying that the energy cost of AI is very high.
I don't know much about this, but if we come to how capital uses technologies, any technology, and how it shapes them through ownership, as Yanis so well talked about, one of the examples I can think of is Monsanto, which is now collecting a lot of data and wants to start this thing called precision farming. That would be a kind of smart farming, more and more mechanized, that associates climate conditions with soil conditions and with all the inputs necessary to maximize yield from a given region. So this is, as usual, going in the opposite direction to the one we should be going in, in terms of the environment. You know, we want to go towards lower-input farming that is more in harmony with the land, that doesn't put efficiency above all and deplete the soil and contaminate the water. And one of the other things that environmental agriculture is about is food sovereignty: nations being able to produce as much of their own food as they can, not relying on long supply chains that are energy-intensive, exploitative, harder to control and so on. So this is just more of going in the other direction, which is becoming more technical, with the inputs being adjusted. I mean, Monsanto is now apparently going into a deal with John Deere, with smart tractors and smart berry pickers and all this: more and more high-input, more energy-intensive, more natural-resource-intensive farming. So it's again further entrenching us in the system, because it is at the end of the day a question of ownership. Who owns the technology? Their needs get serviced. And what are the needs of capital? More profits. So it's not our health or the quality of our food or the health of our soil, right?
So I just want to touch on a historical thing before I stop talking. Back in the day, you know, people say, don't be a Luddite. But the Luddite movement, which supposedly took its name from Ned Ludd, was these textile workers in Nottinghamshire in England. We think that they were scared, or we are encouraged to think they were scared or disgusted or humiliated by the technology, that they were so brutish that they destroyed it. But of course we know, labor historians have looked at this, that these people very, very quickly noticed what this new machinery meant for themselves. Because at the time it took such long training and such skill to be a textile worker, and that gave them power. And they noticed, this was in the early 1800s, so already these people who had little power noticed that this one thing that was given to them, this one bargaining chip afforded to them by their craft, was being taken away. And they noticed that capital always wins. If you have the money, you can buy the technology, and one day you're a secure, highly skilled textile worker who thinks the future is okay, and the next day your livelihood, the livelihood of your town, is gone. So who owns it? That is the question of the day. And I'll finish with a little chant that was very famous at the time in these towns. It goes like this: "Chant no more your old rhymes about bold Robin Hood, his feats I but little admire. I will sing the achievements of General Ludd, now the hero of Nottinghamshire." So let's own these technologies. Thanks. Thank you, Daphne. And on that point of ownership, Yudit, I'd like to bring you back in.
What's your take on the letter from Elon Musk and various other people asking to pause AI development for six months, and what do you think of the arms race between OpenAI and the other Silicon Valley giants? Incidentally, the economist Tyler Cowen was arguing we must develop this as fast as possible, because if we don't, China is going to do it. So what do you think about that point of view, and about ownership? Well, it's really a tricky thing. I'm not sure if you noticed, but just a few days later, or was it a few days before, I'm not entirely sure of the timeline, but around the same time as this letter appeared, Google announced that they're building a much, much, much stronger AI, with trillions of parameters under consideration, and it will be ready by the end of May. So get ready for more disruption. I think this is really a train that cannot be stopped, simply because so many people are experimenting with AI in their own homes. Obviously building on bigger companies' efforts, because you can't train an AI on your home computer; you need a certain amount of computing power to play with the big boys. But everything that builds on top of these things, and I see so many AI startups now, companies creating all kinds of tools and so on, is still possible even on your home computer, and there's no way of just stopping that. And the people who created GPT-3 did not put in enough guardrails to prevent AI being used for ill right now. You mentioned the example of identifying whom you want to lobby. Well, someone who was involved in the creation of GPT asked it how to stop the spread of more and more powerful AI, and the answer from GPT included various things, but then it came up with the idea of assassinating people involved in the creation of these strong AIs, and with a little bit of prompting it would give a list of which people to assassinate and why, with justification.
So ChatGPT can already be used for incredible things, and I think the cat is out of the bag. I mean, we would have needed a kind of non-proliferation treaty to ensure that AI is only ever developed by governments, or even better, in a socialist system with public oversight. But now that so many companies have the ability to create their own AI products, I don't think that a few people agreeing to delay will have any effect at all, other than giving the other people a competitive advantage. So, something that Yanis mentioned, and that was also mentioned in the chat, was the idea of our data and who owns it. Unlike Yanis, I'm not so sanguine about that. I am very much in favor of data unions, like the person in our chat, and this is actually a DM25 position: as part of our tech sovereignty pillar, we are strongly in favor of data unions, and of using our power as consumers, or as creators of this data, to create a world for ourselves, a world in which we are not just providing the stuff for free and allowing companies to get rich out of our unpaid work. This is very important. Goldman Sachs predicted that 300 million jobs will be lost or degraded by AI worldwide. It means a lot of companies will see more profit with fewer workers, and it's really high time for us to achieve some redistribution of the benefits coming from that. Because 300 million jobs being lost would be great news if we were in a Star Trek world: if those 300 million people didn't have to work anymore, they could just spend their time philosophizing and exploring the world, and that would be great. But that's not the world we live in. These people will probably go on unemployment benefits, if their country even offers that. So we really need to up the speed at which we introduce social change: to introduce a universal basic dividend from companies profiting from AI, or ideally, of course, just take ownership and ensure that it's used for social good. Thank you, Yudit. Maya, let me bring you
back in. Yeah, just one last note concerning what you said, and I think it's very important, and that is the issue I raised that I think is going to be the next question we're going to be talking about: the development of AI in the West, in Silicon Valley, versus in China. What you mentioned, a state-regulated AI system, would of course be better, because then you would know in one place what is happening, and we would not have a lot of corporations doing things without us having any idea what they're doing. But then, when we see China and how that country functions, you're like: okay, I'm not completely sure I would want this kind of state to be regulating me. We know, of course, how they use their surveillance systems, and this AI is going to be very effective in that surveillance. But then, as a movement, DM25, I think we should think of a state that is going to regulate and control AI, but in a completely different sense, and in that kind of environment I think that regulating AI, above all the other things, and having legal regulations, would be one way out at this moment. Thank you, Maya. Two comments from the chat. SorryNose says: I agree with your techno-feudalism theory, Yanis, but ChatGPT is a very significant step towards levelling the playing field of decision-making power. And RandomPseudo says: why don't we own our data and put it in real marketplaces instead, like data unions and data pools? I'm aware that we're six minutes past the hour, but Yanis, I'd like to bring you back in to close us. What do you think about what you've heard so far, the issue of ownership and how that's working out, and also the guardrails: why is some person called Sam Altman making decisions over which we've got no say? A few comments. Firstly, I was exaggerating to make a point when I said I don't care who owns my data. The point that I really wanted to drive home is that what truly matters is who owns these
machines. Because if they're privately owned, they will find a way of harvesting our data anyway. You know, a social-democratic system of regulation where, you know, you keep ownership rights the same, the oligarchs owning the machines with the state giving us rights and protection of our data, is an impossible dream that will never happen. That was, in effect, the point I was making. In a good society, I mean, in my novel Another Now, I set down a system whereby, firstly, you have a bill of digital rights where everybody owns their own data and no one can use it without their permission. So I go as far as that. And at the same time, you ban all free services and you have a micropayment system where you need to pay for every application you use, and part of that can be paid for through a universal basic dividend or income. So essentially we're on the same page as the people in the chat. Regarding the jobs that are going to be lost: well, there's nothing we can do about that. You cannot stop people from using AI translation machines in order to retain the services of run-of-the-mill translators. That's simply King Canute ordering the tide to stop rolling; it just won't happen. So you're right: what really matters is that we speed up the social transformation, the democratization of our companies, the one person, one vote, one share, the ending of private finance. Because it is the combination of share ownership, that is, private ownership, and private finance that gives the power of the very few over the many, and with AI technologies that power becomes absurd, absurd and unsustainable even for capitalism itself. Now, regarding China: is China a society I would want to live in?
No, of course not. I would probably be in prison as we speak, because I can't keep my mouth shut, as you know. But there is one aspect of China that could become the basis for a genuine economic democracy, and that is that there are no safe property rights. So you have Jack Ma, with Alibaba and all these conglomerates, and the state, the party, can decide: Mr. Ma, you have misbehaved, you're out. That is remarkable. That is a power that a society should have. Not the party, not twenty old men who make up the Politburo in communist power, but a citizens' assembly, a society which has ways of making collective decisions, telling even socialized companies that operate on the basis of one employee, one share, one vote: you know what, you overstepped the mark here, we're going to dissolve you, because that is not in the social interest, in the common good. That's remarkable; that's something we want. So there are some things about China that we need to copy, and there are a lot of good things about more liberal societies that we need to copy. The thing we need to copy from China is this: that your right to trade in AI-controlled, AI-driven capital goods and services is not God-given. Society has the right to take it away from you. Society can impose upon you restrictions on what you can do with people's data. But what I think we need to seriously reconsider is the way in which this very great political power over outfits, over entrepreneurs, over designers, coders and so on, can be democratized, so that it is not concentrated in the hands of a very few men in some political group. Thank you, Yanis. And with that, we're going to close; we've gone a bit over the hour. There's so much to discuss. I always realize with these things how ambitious we are to say we'll cover an issue in any remotely thorough way; there are so many things to discuss. As someone mentioned earlier in our internal chat, we're going to do a series of these. I mean, on AI, we're going to
definitely look into that. I'm sorry to everyone whose comments and questions I didn't get the chance to put to the panel; some of the things that you, especially DM25 members out there, put in the forum were already answered, so I didn't put those questions to the panel. Thank you again to the panel, and thank you to you out there. If you would like to join DM25 and put into action some of the seeds of the ideas we've been discussing here today, then the address is dm25.org, and you can become a member in a few seconds. Join us again for the next session two weeks from now, same