Tilburg University presents Thijsig Talks. Hello, welcome to the 17th Thijsig Talk. Sorry, this is the 18th Thijsig Talk. I'm immediately making a mistake in the beginning. Thijsig stands for the Tilburg Artificial Intelligence Special Interest Group, and every month we have a podcast in which we talk about an AI topic with some specialists. The podcast format has shown, now that this is the third time we're doing this, that there's been some interest in it in the last months. And I got several requests from people who said, can't you tell us a bit more about what artificial intelligence really is? Because one of the goals of these podcasts is to introduce artificial intelligence not only to specialists, because they already know a lot about artificial intelligence, but also to a more general public that can then learn about artificial intelligence. So I thought, who can give us an introduction to what artificial intelligence is? At our university, we have a course, Introduction to Artificial Intelligence, and I have two people with me here, Emmanuel Keuleers and Steijn Rotmans, who teach that course. So I thought, I'm going to ask them to talk to us about artificial intelligence. Now, for the people who are listening in live, please keep your microphones off and turn off your image. And if you have questions, feel free to ask them in the chat. I can watch the chat here and then I can ask those questions. If you have a long question or you would really like to contribute to the discussion, that's possible as well; then I can open your microphone and we can actually listen to you. But I assume that most of the questions can be asked in the chat. I think, yeah, that's it for the introduction. So, Emmanuel, Steijn, welcome. Thank you for being here. Very happy that we can have this talk. Do you want to quickly introduce yourselves a bit? So, Emmanuel, we usually try to combine a senior and a junior researcher. You're the senior researcher here.
Tell us what your accomplishments are. I'm an associate professor in the Department of Cognitive Science and Artificial Intelligence. As Peter already said, I'm teaching Introduction to AI. It's basically the big introductory AI course, with about 320 students. All of our Bachelor of Cognitive Science and AI students are in there, but we also have a minor in AI that all students of the university can take, and those students are in the course as well. My research is in a slightly different field: computational psycholinguistics. It has an intersection with AI. And Stan, who's next to me, teaches the practicals in the course from this year on. Yeah, that's correct. I started my PhD last August. And besides the PhD, I also teach a lot. So, in the first semester, one of the courses I taught was Introduction to AI. And in the practical sessions that I provided, we mainly focused on the application of some of the methods in artificial intelligence that were taught in the lectures. Okay, thank you. Well, actually, I teach many courses as well. One course that I taught at some point was called Understanding Intelligence. And one thing that I always said in the first lecture there: I don't know what intelligence is. So, if I don't know what intelligence is, it might be really hard to know what artificial intelligence is. So, do I dare to ask you what intelligence is, or should we immediately delve into artificial intelligence? No, you can ask us what intelligence is, because it's a really good question. And the way you sidestepped it, basically, by saying, like, I don't know what intelligence is, is usually what's done in artificial intelligence as well. So, Turing famously came up with this proposal: let's not focus on what intelligence really is, but assume that humans have intelligence, right?
And so, if we have an artificial system that behaves like a human being, that we can interrogate and cannot distinguish from a human being when we test it the way we would test a human on what we think intelligence is, then we assume that that system has some kind of ability to think. Okay. So, very often, the definitions are sidestepped in these issues. But you're going really far by saying the computer can think, because that's what artificial intelligence then would be about, if you talk about machines having intelligence. Yeah, like human intelligence, right? Because I think that's what you were referring to: what is human intelligence? What artificial intelligence is, is an entirely different question, which has like four, five, six or more answers, depending on how you look at it. Okay. So, it might then be best, that was my thinking here, that we approach it in a historical way, because, and we had a little bit of a talk beforehand, you were also suggesting that the ideas on artificial intelligence have changed over time. So, how did it start? What is, let's say, the basis of artificial intelligence? Where did we start, and then where did it go? Can one of you talk about that? Yeah, sure. I would say that the thinking about artificial intelligence really started with Alan Turing, who developed this idea of the Turing machine, and his test for checking if a machine is really intelligent: whether it is indistinguishable from human intelligence. So, if you talk about Turing and the Turing machine, what time frame are you talking about? It's the 1950s. 1950s, so artificial intelligence started there? I'd say so, yes. There are also time frames that start a bit, or even a lot, earlier, with, for instance, Descartes thinking about what intelligence is, but then we're back to the same discussion as before: do we need a grasp of intelligence to really understand what artificial intelligence is? Okay.
And if you model it like Turing, as approaching human intelligence, then very soon after that come figures like Asimov, who proposed rules that the machines should follow. Okay, now you're going really fast, I think. Maybe we should bring it back a little bit to the core first. So, you mentioned Alan Turing, and Alan Turing is a really important figure in artificial intelligence. He was not the only person who worked on it, but I think he approached things from a practical side, but also from a theoretical and a more philosophical side. But when people talk about artificial intelligence, they usually talk about really practical things. So, can we maybe start to focus there? What is the practical stance of artificial intelligence? What did people do first? What do you need? Do you need a computer, for instance, to have artificial intelligence? It's a good question. I think, in contrast to Stan, I would go further back and basically say: there's always been this aspiration that human beings have had, that you want tasks for which you would ordinarily require human intelligence to be performed by a system that's not human. And at its core, that aspiration is the aspiration for artificial intelligence. It starts to become really interesting the more complex the systems become that you can do tasks with. So, you start having mechanical computers in the 1800s, for instance. Those start to be developed. You have Charles Babbage's difference engine and really the start of calculators, right? And calculating things is a task for which you would ordinarily require human intelligence. Once machines had achieved that, you could say that is a form of artificial intelligence. What's really interesting about Turing, and why that happened in the 1950s, is the development of electronic computers. Because at that point, people really start to understand that there's nothing that cannot be done by a computer, in terms of operations that you can think of.
So, computers are theoretically capable of computing anything. That's what Turing came up with. And from that point, the possibility of doing many, many different tasks for which human intelligence is typically required starts to be a real possibility. And that's why the start of artificial intelligence is typically put there. So, let me try to summarize here. You say: look, if we're not going to define what intelligence is, we say humans have intelligence. And artificial intelligence is imitating human intelligence with something else. That's definitely a way that you can approach it. Okay. And then the next thing that you said, if I interpret this correctly, is that the access to a machine such as a computer allows us to further this idea, or to do more; basically you say the computer is a general machine. Yeah, exactly, it's a general machine. So, what Turing proved is that if you have a computer that has certain functionality, and it's not a lot that you need, then with that computer you can do anything in terms of computation. So, you can emulate any other computer; any program that you are able to write or to think of, or even a program that is generated by a computer itself, can be executed on any computer. And so, the possibilities then become boundless. And I think when that happened, AI researchers really started to be interested first in problems which are easily modeled in a computer. And this is a field that you are very familiar with. Sure, but I'm asking you. Computer games, right? For instance, games were one of the first big applications of AI, because typically, when people are playing chess, we think: okay, those are really, really smart people. You need a lot of intelligence to do that really well. And that became a focal point of AI at some point.
It's like: how can we develop systems that play games at a human level, or that can perhaps beat human players at a game? So, would you then say that a computer that can play a game is artificially intelligent? Well, in the context of that game, this artificial intelligence is very intelligent, because it's able to beat humans and is sometimes indistinguishable from humans in that regard. But I don't think that there are many people today who would say that if a computer can play chess, it's intelligent. Yeah, I agree. Well, this first notion of having an artificial intelligence that really behaves like a human, that has a human mind and can function in every aspect thinkable like a human could, this was for a long time pursued in AI. And these applications, like playing chess or playing other games, or maybe, outside of the game domain, having just specific applications, remained the focus of development in artificial intelligence for a long time after that. So, okay, what I hear you say now, if I'm following this correctly, is that artificial intelligence started with computers in, let's say, the 1950s, and then they said: let the computer play a game to show that it's intelligent. But I get the impression that for you the aim was to do more than that. Yes, I'd say so. It depends on the person, right? And the approach you would take. So what Stan describes is this pursuit of thinking or acting humanly. And those are two conceptions of AI. There are other conceptions of AI, which are about acting in the best possible way: acting rationally and thinking rationally. And those problems are very often solved in AI not by trying to imitate human thinking, but by trying to come up with the best possible way of solving a problem. And most of AI is that: it's a different way of solving the problem. It's not how humans do it, but very often it's more successful than what humans do, right?
So you achieve better results at playing chess not by trying to imitate the way that humans play it, but by trying to play it in the most optimal way, by trying to solve that problem. And that, for me, is definitely AI. But AI is a moving target, right? From the moment something is solved and we think, oh yeah, computers can do that, then suddenly we think: oh, that's not artificial intelligence anymore. But it absolutely, definitely is AI, in the sense that you have solved with computers a task that you would ordinarily think requires human intelligence. Okay. I hear now two things. First you talked about thinking, and then about acting. Yeah. So acting is moving a chess piece. Yes. Thinking is how you decide where to move that chess piece. And you say: well, actually the computer moves that chess piece better because it has a different way of thinking; or rather, it has a way of deciding where to move the chess piece that it can calculate. And probably we don't know yet what thinking really is, and maybe we should get into that again. But I can imagine that you would then say: look, I actually think that the computer is not intelligent when it plays a great game of chess, but when it plays a game of chess like a human. Yeah. So that's definitely something that you could do. You could say: the way I want to test if something is intelligent is whether it perhaps does things intelligently, but also makes the same mistakes as humans. But then you take human intelligence as your objective. And that's not what happens in AI usually, right? That's the pursuit that many people have, and many people think that's what AI is, but that is absolutely not what happens in successful applications of AI. Successful applications of AI thus far rely on methods which are distinct from how humans do it. It's not by imitating humans that this is achieved.
There are all kinds of romantic ideas about it, for instance that neural networks imitate our brain architecture, but honestly, in any way that you can imagine, that's not true. It's just math and nothing more than that. Okay. So then artificial intelligence research. Let's see, because we now have an idea of what artificial intelligence really is: is research in artificial intelligence then defined by the problems that it's trying to solve? Yeah, that's definitely one way of looking at it, but maybe Stan can answer, because his research is in at least one branch of AI like that. In a way, yes. So the aim of artificial intelligence research is not, as Emmanuel said, to emulate how a human would solve a problem, but to solve problems that humans typically have a hard time solving. And maybe the reason why we have a hard time solving a problem is that we can't think in a certain way that the computer is suited to 'think' in, within quotes. So my research is about finding patterns, discovering related events in large amounts of data. And it turns out that humans are quite bad at that. So in a large database of events, we as humans, looking at that, are very bad at finding patterns that occur often. Maybe some trivial patterns can be found, but artificial intelligence is way better at spotting these patterns, and also at deciding, in a way, which patterns it should say are important and interesting, and which patterns are not. Okay, what you're describing is commonly known as data mining, or data science, or learning from data. And five to ten years ago, there was a lot of focus and research between universities and companies on data science: we need to have more data analysts, et cetera. And that has shifted a bit to artificial intelligence now, but basically what you're telling me now is that it's the same topic. Is it the same topic?
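The kind of pattern discovery described here, counting which events co-occur often in a large database of events, can be sketched minimally in Python. The event names and support threshold below are made up for illustration; real pattern-mining systems rely on more sophisticated algorithms such as Apriori or FP-growth, but the core idea of counting co-occurrences above a threshold is the same:

```python
from collections import Counter
from itertools import combinations

# Toy event database: each row is a set of events observed together.
transactions = [
    {"login", "search", "purchase"},
    {"login", "search"},
    {"login", "purchase"},
    {"search", "purchase"},
    {"login", "search", "purchase"},
]

def frequent_pairs(db, min_support=3):
    """Count co-occurring event pairs and keep those at or above a support threshold."""
    counts = Counter()
    for events in db:
        # Sort so each pair is counted under one canonical ordering.
        for pair in combinations(sorted(events), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

print(frequent_pairs(transactions))
# -> all three pairs co-occur 3 times in this toy database
```

Deciding which of the surviving patterns are genuinely interesting, rather than merely frequent, is the harder part of the research described here.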
It's a very related area. And the same methodologies are applicable to both fields. So, for instance, machine learning uses methodology from artificial intelligence; making machines learn something is in a way acting intelligently. And having machines that can learn is very helpful for data science, for being data-driven in some way, for having a data-driven company or performing research in a data-driven way. I have to make a historical note again about this use of the term machine, right? This is not something recent. Artificial intelligence was not called artificial intelligence immediately from the start. People talked about machine intelligence at an early point. So machine learning, for instance, you could now also just say artificial learning. Artificial is supplanting this term machine, and at some point we will not talk anymore about machine learning; actually, a big branch of machine learning is now called deep learning, right? Machine is really a term which was used by Turing and people around him in the 1950s, when they were doing research in the UK about the possibilities: it was machine thinking, right? That was the term for artificial intelligence at that time. Can we maybe go through the history, probably not in too much depth? So you start somewhere with Turing in the 1950s. The first computers came up. People started talking about machine learning, machine thinking. So how did it develop? Because we also mentioned deep learning, which is popular now, and that also came up somewhere. So how did the thinking about this develop, and how did the technology develop? Because of course, the computers of the 50s, you see them sometimes in a museum, with people putting plugs into things, and they were very simple.
So I think one of the biggest accomplishments of Turing is actually that he could foresee where this would go. But how did this go? Can you give an overview together? Yeah, so maybe I can start broadly. So you had these initial systems and really the development of basic computer algorithms that you can apply to tasks such as playing games. An early accomplishment of AI was, for instance, the ability to play checkers really well. And then chess. And those are things which seem hard when we think about it, but the algorithms are really not that complicated. They're relatively simple algorithms. The more data you can process, the more computational power you have, the further you can look ahead in the game. But it's the same thing all of the time. And then there started to be a little bit more lofty ambitions, and perhaps people got too overconfident about what would be possible in a short amount of time. Maybe you know something more about that. Yeah, sure. So at every milestone we see in AI, there are immediately a lot of people who say: oh, so now this is possible. This is a big leap. So surely this other thing will soon be possible as well, and we will approach general artificial intelligence that could solve any problem we throw at it relatively soon. And well, this field of artificial intelligence is surrounded by a lot of hype in that regard, but also by a lot of disillusionment, because these expectations did not always come true. And because of that, maybe there's also been more of a focus on trying to model specific tasks, like games, to demonstrate intelligence in that sense. But in recent developments, we see a shift away from artificial intelligence models that have been trained to perform one specific thing, towards language models that with some fine-tuning can be applied to a much broader set of problems. Maybe, Emmanuel? Yeah, so let me also go a little bit back, right?
So what happens at that point, and we can't cover everything, but at some point there is this linking between the functioning of the human brain and what you could possibly achieve by trying to imitate it, by trying to make systems like that. An interesting development in that respect is the perceptron, right? The perceptron is supposedly a model of a neuron, one nerve cell that you have in your brain. It has inputs, it can do a calculation on those inputs and then give an output. So it could, for instance, classify something that you put into the neuron: you have different inputs, and based on those different inputs, it could say that it's A or B, right? And if you connect many of these things together, then you can compute many different things. The idea was that you could have something that's similar to the human eye, which has all kinds of input receptors, nerve cells. An image would fall on that eye, in this case an artificial eye, and based on what the eye would see, you would be able to say: well, that's a cat, or that's a dog, or this or that. That was the ambition. And so you have this modeling of the perceptron, which is the start of what is called neural networks. And I think this is a term that many people have heard and know, and there were lots of expectations about neural networks. And I think it took about 40, 50 years before the really impressive results started to show. So you had what is called a very long AI winter in between, where you had many researchers interested in AI doing work with those neural networks, but the results that they achieved were not impressive. I mean, as research goes, for long periods of time you don't always need to achieve impressive results, but at some point, due to some insights, but mostly the large availability of masses of data and the increased computational power that was available around the beginning of the new millennium.
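The perceptron just described can be sketched in a few lines of Python. This is a toy illustration of the idea, a threshold unit trained with the classic perceptron learning rule; the learning rate and the logical-AND example are chosen purely for demonstration, not taken from any historical implementation:

```python
# A perceptron takes several inputs, computes a weighted sum,
# and outputs one of two classes depending on a threshold.

def perceptron(inputs, weights, bias):
    """Classify an input vector as 1 ('A') or 0 ('B')."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

def train(samples, labels, lr=0.1, epochs=20):
    """Perceptron learning rule: nudge weights toward misclassified points."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - perceptron(x, weights, bias)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy example: learn the logical AND of two inputs.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(samples, labels)
print([perceptron(x, w, b) for x in samples])  # -> [0, 0, 0, 1]
```

Connecting many such units together, and stacking them in layers, is what gives a neural network the ability to compute things a single perceptron cannot.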
So 2005 to 2010, things started to speed up. And suddenly those neural networks were able to achieve spectacular results all at once. Okay, sure, you mentioned neural networks. You mentioned 50 years, so they evidently came up in the 1970s or something like that. Earlier than that. Oh, even earlier, okay. I know, because you also talk about vision, and you say artificial eye, but what you basically mean here is: there's a picture, the picture has pixels, there are a lot of pixels. Exactly. You put those into some artificially intelligent structure, and then it says: well, this is a cat. Now we know that it took a long time before computers could actually say this is a cat. But there were results on that, because the early neural networks did do some things. I know that that was why zip codes were suddenly introduced, because computers could read zip codes on mail. Maybe younger people don't remember, but there was a time that people got mail in their letterbox; that's less now, since we have email, but that was why zip codes were there. Yeah, and they could do that, let's say, very imperfectly. There was probably a lot of mail that needed to be processed by humans. It's still the case, by the way; today it gets better and better. In the US Postal Service, for instance, all of this processing is automated now, and it works extremely well, recognizing the full address, by the way, not only the zip code. But still you have many people who sort just a fraction of the letters by hand in some center. I can imagine that, because sometimes I can't read my own handwriting, so expecting computers to read it is asking a lot. So when we're talking about results: yes, you had initial results, for instance with digit recognition. That's a very, very small task, right? So it doesn't scale. But is it really small? It's a relatively small achievement.
I think in the context of what we see now, the ability to recognize single digits, which was for a long time a focus of AI, and there's a specific dataset, the MNIST dataset, that everybody in an introductory machine learning course works on, trying to predict the single digits with all kinds of different methods. When you put it in the context of what's happened in the last 10 years, it's a very, very small achievement. But are you saying that what we do in the last years is still the same thing, except better? Two things: much more data, as I said before, and amazingly increased computational power, and then a few mathematical tricks and insights, and the ability to scale these systems. But the data itself should not be underestimated. So, especially for the people who are here: those AI systems that we have today do not work without data. The intelligence doesn't come by itself. When you have something like Google Translate, for instance, or you use something like Google Lens, you point your camera at something and say, identify what's there, and it comes up with, sometimes, the correct answer. The reason why that works is for one part the systems themselves and all of those mathematical tricks going on, but for a big part it's the data, the massive amounts of data that go in there. So you can imagine that a system like Google Translate is trained on an enormous number of sources of parallel data, right? I think you must have talked about this last week. For instance, the European Parliament has texts which are translated into all of the languages of the European Union, or most of them at least. And so you have all of these language pairs and massive amounts of data, and you can correlate those with each other. You know exactly what's being said in one language and what it corresponds to in the other. Another source of such data is subtitles from film and television, right?
So you have subtitled films. And if you keep feeding those systems with more and more and more data, they get better and better at translating. So it's a function of data. Image recognition, the same thing. And people should know, for instance, that when they're solving those Google captchas, when they go onto a website, what they're actually doing is providing labeled data, saying: here's a traffic light, here's a bicycle, for those AI systems to be trained on. So that's human compute cycles. Yeah, I have two questions. The first is: we were talking about history, and the AI winter was mentioned, and now we probably don't have an AI winter. The AI winter that you mentioned, which started in the 70s, 80s or something like that, did that last until now, or was there more in the history in between? That's one question I had. And the other one, let's continue with this one first. Oh, it depends on the perspective you take, but there's an important milestone, I think, which is ImageNet, right? What's the year of ImageNet? Do you know when that was? I think it's probably 2014 or 2012 or something like that. Early 2010s. Yeah, early 2010s. And what happened there is: AI has these benchmark tasks, right? Where different systems compete to achieve the best possible score on a particular task. And one of those tasks is predicting the correct label for a particular picture. So there's a picture of a cat, a dog, something else. You input the picture into the system, and as output you get what's on the picture. And you train a system to do that, and then you give it a new dataset that it hasn't seen, and you look at what the performance is on that. Many systems compete on that. In one of those competitions, there was one system that was submitted which stood out from all of the other results. And that was in the ImageNet competition. And it's one of the first very successful neural network approaches.
And from then on, I think, modern AI research has basically been dominated by variations on those big neural networks that you can train on large datasets with lots of computational power. And that's been applied to a host of other problems. So this approach has been very, very successful. But as I said, it's a function of the data and the access to computational power that makes that possible. Yes, and I think this task that you mentioned for ImageNet, classifying what is in images, is a factor in why artificial intelligence has sparked so much interest, because for the average person this task is very easy to visualize. We can imagine ourselves doing this task, and seeing a computer do it is very cool and something that's very close to our own world. And the same goes for language models, for either generative models or classifying models for language: we can really easily imagine this being applicable in our lives. And we can really see these results for what they are, seeing the image being classified correctly, as opposed to maybe more theoretical advancements, such as the perceptron that you mentioned earlier: being able to separate two classes that are split in a certain way that's hard for a single perceptron, but can be done by neural networks that string perceptrons together, which is of a lot of theoretical interest but maybe doesn't spark so much interest for the public. Yeah, okay, yes, I can imagine that, because for the public it is indeed about what you can do, and a lot of things are called artificial intelligence nowadays, probably also because artificial intelligence techniques are underneath, but people want to solve lots and lots of different problems. But the second thing that came out of the whole talk for me is that the way you describe it is incredibly mechanical.
And if it's incredibly mechanical, okay, we have a data structure, we load images in and keep loading images in, and then we check the labels and keep loading, and then it classifies. So then I'm thinking: where is the intelligence? You see, but that's exactly what's happening. It's like: from the moment it's able to play chess, you know, okay, it can do that. That's not intelligence anymore. So we would obviously have considered that some kind of intelligence, but it's not human intelligence. And I think that's something that you need to understand: what people are trying to do, and the successful applications of AI, are usually doing something that humans can also do, sometimes better, sometimes worse, but tirelessly, without taking breaks, without going on strike, and things like that. And in that sense, it's akin to the industrial revolution, right? Where you have machines which start to replace the manual labor that you have in the factories, and suddenly you can do things by stringing these systems together, by applying them to a particular task, that you were not able to do successfully before, because it required many craftspeople who needed to be trained for a long time on a very specific task. So I actually like this very much as a way to make people understand what artificial intelligence is: namely, industrial machines replaced manual labor, and artificially intelligent machines can replace at least part of the... Intellectual. Intellectual, yeah, absolutely. So that's a very good way of looking at it. And that's why, I mean, these calculators, right? That's the same thing. You're replacing the people doing the computation. Those, by the way, were the original computers; originally, 'computers' were people. You're replacing them with a machine that does it for them, first by punching in the numbers perhaps, then by doing all of those computations on a computer.
And it's still more or less the same thing, but there are so many more things that we can do now. Yeah, yeah, okay. Well, that was actually the next thing that I would like to discuss. We are now at a point in time where there is a lot of interest in AI, a lot of things are happening, a lot of low-hanging fruit is being plucked. Where will this go? What are people thinking about doing with AI in the coming years? Or what are things that you can foresee? You probably both have ideas on this. I have a question for Stan. Yeah, yeah, because you are actually the future of AI. You are also still the future of AI, but thank you. You? A little bit. So last year, DALL·E came out, which is a generative model that can generate a picture from text input. And this really strings together two previously somewhat distinct fields in AI. So there was a lot of progress being made on images, and there was a lot of progress being made on language. And with models like DALL·E, these two fields are joining together again, because you give it some text, so there's language processing going on, and then it creates an image, so there are also image models at work there. At the start of 2023, it's expected that GPT-4, a very large language model, will have finished its training. So I'm looking forward to that. So maybe that can take over some writing or course creation. Oh yeah, absolutely. So I'm very much looking forward to that. And also because such a big language model has been shown to also perform well on other tasks. Okay, yeah, because that would actually be more interesting, I think, for me at least, because what you're now saying is: look, we have DALL·E, and then, okay, we make a better version of it. Okay, I believe that you can make a better version of DALL·E, because it's just expanding what we have. More data, more computation. More data, more computational power.
So, but are there other things which we can do, maybe things which we're not thinking about, or maybe are thinking about, which we would actually be able to do, but which we might be unsure about yet? Where is it going? Or is it unpredictable? It's certainly very hard to predict, also because it's hard to say which tasks are impossible for computers, which are just hard for computers, and which become feasible with more data. In a sense. But applying language models to different tasks than just judging the sentence you give them is approaching a more general artificial intelligence. Maybe you can say what DALL·E does, for the people who are... Sure, so DALL·E takes in a sentence. You say: I want a picture of the sea, and I want some rocks in the sea, and I want it to be like an oil painting. And then DALL·E creates an image that looks as if it's an oil painting of a rock in the sea. Yeah, yeah. And I have seen some examples which were impressive, and I've seen many examples which were not impressive at all. So I do think there's a lot of cherry-picking going on here. But on the other hand, how many examples have you seen of humans actually doing these tasks and creating impressive oil paintings? Not a lot, honestly. Yeah, that's the point. The thing is that I also saw results where I got the impression... so, give DALL·E a similar assignment three times, and I saw basically the same image returning: the same side table, the same bed. We need to recognize it for what it is. It's incredibly nice that you can do that. But this is nothing more than text-to-image association. There's a lot of that in the world, right? You have pictures and you have descriptions of those pictures; a lot of those pairs exist. These systems are really, really good at associating those things, figuring out all kinds of patterns that go between them, but not in an intelligent way.
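To make the "text-to-image association" point concrete, here is a toy sketch in Python: instead of generating images, it just retrieves the stored image whose caption best matches a prompt, using bag-of-words cosine similarity. The captions, file names, and similarity measure are all invented for illustration; this is not how DALL·E actually works.

```python
from collections import Counter
from math import sqrt

def bag_of_words(text):
    """Turn a sentence into lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A tiny made-up "training set": image files paired with text descriptions.
captions = {
    "sea_rocks.png": "an oil painting of rocks in the sea",
    "city_night.png": "a photograph of a city at night",
    "cat_sofa.png": "a cat sleeping on a sofa",
}

def retrieve(prompt):
    """Return the stored image whose caption best matches the prompt."""
    return max(captions, key=lambda img: cosine(bag_of_words(prompt),
                                                bag_of_words(captions[img])))

print(retrieve("a painting of a rock in the sea"))  # sea_rocks.png
```

Real systems learn the association between whole distributions of images and texts rather than matching word counts, but the underlying idea is the same: pairing patterns in one modality with patterns in another.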
It's really pattern recognition inside computers, with billions, now trillions, of free parameters, which all together create a system that looks to be creative. But it's still pattern association. What I would like to say is: we also don't know how human creativity works, whether that is really different, and whether we are not just giant pattern associators ourselves. And that's a philosophical question. I tend to agree with that. You can complain that the computer is just repeating what it has already received, but for humans, originality is also pretty hard. Usually you're repeating things that you maybe don't remember, but you are influenced by whatever you saw before. Can we lift this up a level? Because here's the thing. I can type in a sentence and I get a nice picture, and another nice picture, and that's nice, because maybe I can make a painting of it. But is that really important? You can argue about that. But I think AI is important. Can you tell me why AI is an important topic? What's your opinion on this? My opinion is that it's very important for, well, everyone to know something about artificial intelligence. Because you only need to know something about it if it is an important topic. So why is it important? It's important for the general public because everyone encounters AI on a daily basis. If you use a phone, you use AI. If you use your email and find an email in your spam folder, that's AI. If you Google something, or use another search engine to look up something on the internet, that's AI. Nobody uses another search engine. But for science it's also especially important to know something about AI. Because in every field, whether it's astronomy or economics or even theology, artificial intelligence methods can really have a big impact on directing the attention of research to the right places. That's kind of a positive view. But it's also instrumental. It's like you're saying: you should look at AI, you should be interested in AI.
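The spam-folder example above can be sketched as a tiny naive Bayes filter, one of the classic methods behind real spam filtering. The training messages are made up, equal class priors are assumed, and add-one smoothing keeps unseen words from zeroing out a probability:

```python
from collections import Counter
from math import log

# Toy training data: a spam filter is just word statistics plus Bayes' rule.
spam = ["win money now", "free money offer", "claim your prize now"]
ham  = ["meeting moved to monday", "lunch tomorrow", "see you at the meeting"]

def word_counts(messages):
    return Counter(w for m in messages for w in m.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(message, counts):
    """Log-likelihood of the message under one class, with add-one smoothing."""
    total = sum(counts.values()) + len(vocab)
    return sum(log((counts[w] + 1) / total) for w in message.split())

def is_spam(message):
    # Equal priors assumed, so we compare likelihoods directly.
    return log_prob(message, spam_counts) > log_prob(message, ham_counts)

print(is_spam("free money"))         # True
print(is_spam("meeting on monday"))  # False
```

Production filters use far larger vocabularies and more features, but the "count patterns, then associate" character is the same point made about pattern association above.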
And AI is important because it affects your life. So in the same way, you should know what, I don't know, engineering is, because you need to understand how all of those machines work that make your clothes and build your cars. And some people are just not interested in knowing how all of these things work. So is there more than, yeah... You're basically saying why it is important to know something about AI. And I can imagine that, because indeed it has a lot of influence, because it turns up in many places. I was thinking more along the lines of the following. Let's say Elon Musk wants to send a rocket to Mars because he wants to colonize Mars. You can look at it from one perspective: you're wasting a lot of energy, a lot of manpower, a lot of resources to do something that is probably never going to work out anyway. But you can also say: look, humanity is incredibly important, humanity will destroy itself in 35 years, and so we need to colonize Mars to survive. There are two views. One says it's completely unimportant, and the other one says it's incredibly important, and probably the actual truth is somewhere in the middle. But I was actually thinking: if AI would just be about creating pictures, it's probably not that important. Are there other things why you would say, well, actually it's important for something, for the survival of humanity, even? Well, that may be a lot, but yeah. And that's really interesting, because then you're taking a very noble view, and you're thinking about what kind of societal problems AI can perhaps solve for us. I think very often that's going to be about optimization problems. How can something that is really difficult for us to achieve right now be solved nonetheless? Because solving the problem needs enormous amounts of computational power, and specific methods which have been developed in AI to solve the problem.
It could, for instance, relate to energy production, distribution, any societal problem that you can come up with. The worry that I also have with that is that you would still not be looking at other ways to do it, and you would place unwarranted faith in AI. And as you said, the answer is going to be somewhere in the middle. In some cases, it's going to prove extremely useful, for instance in sustainability. If you can find ways to do things which require much less energy, and AI can help with that, great, it's going to be an amazing result. But at the same time, AI today is incredibly wasteful. I don't know, probably a few percent of global energy consumption goes into training these very large models that do text-to-image generation. So these are all things that we need to take into account. And as with many other things, it's how you use it that's going to determine whether you use it for good or for bad purposes, and both are possible. Yeah, it's a very powerful tool, and that makes it very potent, but also, in a sense, quite dangerous, especially if we forget what's going on inside such a model. Take the governance side, letting an AI make decisions about actual people. Suddenly you're not in a lab anymore, just looking at numbers, but you're really affecting people's lives. So for people who apply such a model and let it help make decisions for them, it's very important to still know what's going on, how it works, and what the basis of this AI is. And that would then be one of the reasons why people should know something about AI. So, you even mentioned theology. I would not think that theology students would need to learn about AI. Which students need to learn about AI, would you say? I'd say for every scientific field, there is a scientific community that tries to incorporate AI into that field.
So personally, I have a background in economics, and there are people using reinforcement learning to simulate firms setting prices, demonstrating some boring theoretical stuff about markets. In the case of theology, there's text analysis going on. Of course, the success of AI depends to a large extent on the amount of data you have. I think theology has a lot of data. Well, there is a lot of text for humans to read, but relative to some other AI applications, there's quite little data. Yeah, so I think that is one of the major issues: there are lots of fields where there's a lot of data, but also many challenging fields where there isn't. Is there a chance for artificial intelligence to do something with those? I see some nodding. You see me nodding because it's a big problem, and there is an economic reality to what's happening in AI. It's not that it's being tried for many different things; what you see in the world today is that all of these big tech firms develop AI in areas where you basically know it's going to work, and you will have fields where those methods are simply not applicable or will give bad results, and that's usually because there's not enough data. As a point of comparison, these big language models that we keep talking about are trained on masses of data. Basically, you can download the entire internet, and that's about what happens. But that is more than the language information you get in over your entire lifetime, more than you would get in a million human lifetimes; these models are trained on orders of magnitude more data than that. That's the amount of data where things start working really well. What's surprising is that humans do it with much less. We do it intelligently: a child of two years old can do an amazing number of very fancy tasks. If you look at it from that point of view of what we ask from AI, classifying objects, no problem.
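The point about data volume can be illustrated with a minimal experiment: a 1-nearest-neighbour "model" trying to learn a simple function from examples. The function, grid sizes, and error measure are arbitrary choices for the sketch; the only claim is that the same method gets better as you feed it more data.

```python
def nearest_neighbour_predict(train, x):
    """Predict f(x) as the value of the closest training example (1-NN)."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def f(x):
    """The 'true' function the model has to learn from examples."""
    return x * x

def mean_error(n_train):
    """Average prediction error over a fine test grid on [0, 10]."""
    train = [(10 * i / (n_train - 1), f(10 * i / (n_train - 1)))
             for i in range(n_train)]
    tests = [10 * i / 100 for i in range(101)]
    return sum(abs(nearest_neighbour_predict(train, x) - f(x))
               for x in tests) / len(tests)

print(mean_error(3) > mean_error(50))  # True: more data, smaller error
```

Data-hungry methods like deep networks show the same curve at a vastly larger scale, which is why fields with little data often can't use them well.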
So in fields which do not have a lot of data, AI is still useful, but it's going to be the classical AI from before this data-intensive AI came along. And I think it's also really important for students to know those classical methods, because there is nothing worse than the thinking that you have a hammer and therefore everything is a nail. Or is it the other way around? Anyway: when you have a hammer, everything looks like a nail. That is how people think very often today: because we have these very successful systems, any problem we have, we throw that exact architecture at it, and that's nonsense. Many of the achievements of AI in the past are things like Bayesian networks, for instance. Those are applicable to many problems which involve uncertainty, and they work admirably well with orders of magnitude less data than... Yeah, okay, so basically the message here is: if you're a student in any topic at all, you should learn something about AI, and if you then learn about modern AI and you think, yeah, but that wouldn't work for my field, then remember that there's a lot of other AI as well that might be applicable to you. That's what we teach them first, by the way. Even for bachelor's students in AI, we don't do the really fancy deep learning stuff until their third year, because you can't get there without knowing everything that comes before it. So, when we actually started out, before we turned on the recording, you said: look, one of the interesting things is that AI obviously started with artificial general intelligence, that was the idea, an intelligence that can do the same things as a human. Then it became more task-oriented, but recently people are getting more interested again in artificial general intelligence.
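As a sketch of the classical, data-light methods mentioned here, below is a minimal Bayesian network with inference by enumeration. The network structure (rain and a sprinkler both influence wet grass) and all the probabilities are textbook-style illustrations, not numbers from the conversation:

```python
# Minimal Bayesian network: Rain -> WetGrass <- Sprinkler.
# All probabilities are made up for illustration.
P_rain = 0.2
P_sprinkler = 0.3
# P(WetGrass = true | Sprinkler, Rain)
P_wet = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.8, (False, False): 0.0}

def p(rain, sprinkler, wet):
    """Joint probability of one full assignment of the three variables."""
    pw = P_wet[(sprinkler, rain)]
    return ((P_rain if rain else 1 - P_rain)
            * (P_sprinkler if sprinkler else 1 - P_sprinkler)
            * (pw if wet else 1 - pw))

# Inference by enumeration: P(Rain = true | WetGrass = true).
num = sum(p(True, s, True) for s in (True, False))
den = sum(p(r, s, True) for r in (True, False) for s in (True, False))
print(round(num / den, 3))  # 0.442
```

The whole model is a handful of numbers that an expert can specify or estimate from a small dataset, which is exactly the contrast with trillion-parameter systems trained on the entire internet.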
So I would like to hear a little bit more about that. But I was thinking: one of the things about artificial general intelligence is that it could be intelligent like a human, so maybe it can then work with less data. That's why I thought maybe you can say something about it. Well, the move towards general artificial intelligence again... From the mechanistic data point of view, you have a model, you give it data and you tell it what the data is, and then after some time it has learned, so that if you give it new data, it says: oh, this is a picture of this, or this sentence means this. Recently there have been models that were trained on a specific task but perform reasonably well on some other tasks as well. So some people are getting interested in whether we can train one big model on a broad task, such that if you give it new tasks, it can learn those tasks as well. And I guess that's how it is for us too. Okay, so then the view of artificial general intelligence would be: if you can solve a lot of different tasks, then maybe you can just add new tasks to it, and also, from your previous experience of learning tasks, be able to learn even more tasks. Interesting. So, we are almost at the end of the hour. Is there something that you would like to add, maybe on your own research or something else? I talked a bit about my research, I'm very... You did, yeah. It would be nice to hear about Emmanuel's research. I mean, once I get started on it, it's going to take too long, and we have about one minute left, so maybe we should conclude. Okay, well, then I think we have at least had some views on what artificial intelligence is and how it works, and have stressed the importance of it, which I always like to do because I think it's really important. Then it remains for me to thank you for being here, and who knows, we can talk more about a particular implementation of artificial intelligence, or general intelligence, in a later talk. So thank you. Thank you.