...discussion for half an hour or so, so it could be interesting to exchange ideas also during my presentation. Let me tell you what I'm going to talk about today. You see the title, the funny title: the challenges of the transition from AI to AI, which apparently doesn't seem to convey any information. But if you look more carefully, you'll see there is a conversion from, let's say, a big A and a small I, which in my understanding means artificial intelligence where the emphasis is on the artificial component, to artificial intelligence with a small a and a big I, which means that you are mostly trying to understand the meaning of intelligence. So you will see that there is a very interesting discussion you can now have concerning the field of artificial intelligence, and especially on whether we are at this transition. Let me briefly outline the talk. I will begin by discussing what learning in artificial intelligence mostly is nowadays, and the meaning of this transition: what could be the meaning of focusing more on the quality of intelligence? This is something that, as you will see, has to do with developmental issues of intelligence, with laws of nature, and with another fundamental point, which is how to bridge more abstract representations of the world with learning, so bridging reasoning and learning, logic and learning. And finally I will tell you about a sort of new framework in science and technology that could open up. So, before telling you what is next, let me tell you what machine learning mostly is nowadays. 
In machine learning, you are essentially in front of neural networks, and the learning protocol is pretty easy. Suppose you want to recognize a picture, or, in a very simple case, you are given a handwritten character and you want to determine its category. Please bear in mind that when you look at a picture like this, what you actually perceive is just a signal, a sort of electrical signal. In the brain, for example, it comes from the retina, and you have to come up with a sort of interpretation. Something similar happens in a computer, the only difference being that in a computer what you have is the output of a sensor like a camera. But there is no remarkable difference between what happens in the retina of the eye of a mammal and what happens in today's computers: in both cases you come up with a representation of the input (let me just admit the people who are joining), and what you really have to do is come up with a sort of interpretation. So nowadays there is a lot of emphasis on a very elementary communication protocol: you have the machine, the purpose of which is to recognize a character, for example, or a picture, and then you have a certain supervisor who is supposed to tell the machine the correct category. So let me show this picture; this is just an example of what an artificial neural network could be. "Sorry for the interruption, but are you sharing the screen?" Oh, thanks, thank you so much; it's so important to share the screen. Yeah, sure. 
Okay, so let me share the screen, and since a few people came in later, let me briefly go back and show this picture again. Can you see now? Great. If you look at this picture, I was telling you that a machine, just like a human, typically tries to make sense of pictures: it wants to find an interpretation. I was also telling you how the retina of a mammal can be really very similar to the camera of a computer, because at the end of the day they both provide a signal which depends on the light. So in this very simple example, you get a zero or a one depending on whether or not a pixel is on. Pretty easy: you have zero, zero, zero, then one, one, then zero again, then you move to the second row, and so on and so forth. That is the representational issue: given this pattern, you have a representation of the pattern. And interestingly enough, there is a very simple communication protocol according to which the supervisor simply tells the machine what the category is. For example, it says: hey, look, this is the pattern two. And interestingly enough, just as in nature, we have artificial neural networks where you see neurons — here, units 26 and 27 are neurons, connected to the input — and there is a computation of their activations, the purpose of which is to compute the output and return a sort of code for the given pattern. 
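The representational step just described can be made concrete with a toy sketch. The 5x4 bitmap of the digit "two" below is invented for illustration; the point is only that the "retina" delivers on/off pixels, and the network's input is simply this grid read row by row as a flat vector of zeros and ones.

```python
# Invented 5x4 binary bitmap standing in for the digit "two" on the slide.
two = [
    [1, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 0],
]

# Flatten row by row: zero, zero, ... one, one, ... exactly the signal
# described in the talk, and exactly what the network receives as input.
signal = [pixel for row in two for pixel in row]
print(signal[:8])  # → [1, 1, 1, 0, 0, 0, 1, 0]
```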
And so here you have the input, and learning means that there is this guy here — remember, the supervisor — who tells the machine that whenever you present this character, the machine is expected to return the code that you see here. I think you can easily see why you have zeros everywhere and a single "hot" unit here: this hot output is simply the one encoding the fact that this character is the character two. So in the end, learning means that you modify the weights, and the purpose of modifying the weights is to return exactly the desired output. There is an interesting biological metaphor in the learning process. Let me just ask whether you have any questions at this point concerning the process. This is an artificial process, but it is very much inspired by what happens in biology, in nearly any individual in nature. Okay, so learning means that you modify these weights. And in the last few years we have seen the emphasis on deep architectures, where you have many layers, not just one, and we have seen that with big neural networks there is the possibility of gaining more abstract representations and of approaching very complex problems. It might be interesting for you to have an idea of the number of parameters in today's neural networks. Look at a neural network like this: every connection is associated with a real number, a weight. Here you have only one hidden layer, while in a deep network you generally have several hidden layers. Overall, the number of parameters can be of the order of billions. 
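The whole protocol above — supervisor gives a one-hot code, learning modifies the weights until the network returns it — can be sketched in a few lines. Everything below (dimensions, learning rate, the random pattern) is invented for illustration; it is a minimal one-hidden-layer network trained by plain gradient descent on a single supervised example, not anyone's production system.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=20).astype(float)   # flattened binary pattern
target = np.zeros(10)
target[2] = 1.0                                 # supervisor's one-hot code: "this is a two"

W1 = rng.normal(scale=0.5, size=(8, 20))        # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(10, 8))        # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                           # learning = modifying the weights
    h = sigmoid(W1 @ x)                         # hidden activations
    y = sigmoid(W2 @ h)                         # output activations
    delta_out = (y - target) * y * (1 - y)      # backprop of the squared error
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    W2 -= 0.5 * np.outer(delta_out, h)
    W1 -= 0.5 * np.outer(delta_hid, x)

# After training, the hottest output unit is the supervised class.
print(np.argmax(sigmoid(W2 @ sigmoid(W1 @ x))))  # → 2
```

The same scheme, scaled up to many hidden layers and billions of weights, is exactly the "deep architecture" discussed next.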
So please bear this number in mind: during learning we are optimizing functions with billions of parameters, a challenge that years ago was regarded as a sort of crazy objective, and that nowadays is really possible. For example, in the last few years we have seen an impressive evolution in terms of performance. You have probably heard about speech recognition, which is one of the most challenging technologies nowadays. This chart shows what happened to the recognition rate when neural networks, and especially deep learning — where deep learning means neural networks with many hidden layers — arrived: you see the impressive impact in terms of recognition rate and error, the dramatic reduction of the error about ten years ago. And the same happened for image recognition. These are problems whose solutions you see daily if you use ordinary services like Google or Facebook; you typically have this technology inside. So this is the time in my lecture today, especially if you like science in a very general sense, to ask whether this evolution can be regarded as a sort of paradigm shift. In science, people in epistemology typically talk about a paradigm shift when something really new happens, so you may wonder whether there is truly a paradigm shift here. Well, what I think is definitely true is that the founding fathers of deep learning — typically considered to be Geoffrey Hinton, Yann LeCun, and Yoshua Bengio — definitely understood that deep learning was already enough, in terms of methodology, for dealing with grand challenges. 
They understood that the power of GPUs and the huge availability of data were already enough for facing very important problems. But what I want to tell you today is that if you really look at things in perspective, it could be a terrible mistake to believe that everything has been done, because this technology — and I want to reinforce this idea — in terms of methods was more or less already around 30 years ago. What we have seen recently is the great combination of computational power, especially GPUs, and a huge availability of data, which was not available 30 years ago. I think this is an important point of my talk: I would very much like you to look at the progress of AI in perspective. I'm mentioning, for example, this popular classic book by Thomas Kuhn, who published a fantastic work on the structure of scientific revolutions; I'm just quoting what he wrote, so you may have a look. You may ask yourself whether we have really seen very important changes in terms of methodologies. Well, we have seen some changes, but once again, the most innovative idea was understanding the possibility of using these methodologies for real-world applications. Now it's interesting to look at this slide. I'm proposing a sort of funny challenge. Suppose there is a research agency, which I'm depicting as Santa Claus — a guy who, in principle, can offer you a lot of money for new projects. Look at what Santa Claus is telling you. 
"Somebody told me you have got access to a lot of training data that you use for massive learning tasks; that's fine. You have also got huge computational resources for learning. And somebody told me that learning in research labs lasts a few minutes only, or hours, or a week at most. Why don't you let your computers learn just like children?" That's the big question. Nowadays we have fantastic results, which once again depend on computational resources, but the learning process doesn't really take place as it does in humans. Learning takes place in labs for a few minutes, a few hours, or maybe a week, but there is no actual life in the environment for computers: they don't really take the opportunity of learning continuously from the real world. So that could be a big challenge for the years to come: new methodologies that make it possible to achieve this fundamental task. It's probably a matter of the quality of intelligence, and that's why in this talk I was telling you about the transition from artificial intelligence with a big A — a big artificial component — to artificial intelligence where the quality of intelligence plays the big role. That's definitely a big challenge, the transition from machines which are very powerful and exploit the artificial component a lot, to machines where the quality of intelligence may increase. So let me give you a few examples, starting with a case where it is the quality of intelligence that matters. Look at this video. It's in Italian, but you can easily understand from what happens. We are talking about 15 years ago, maybe more, and as you can see, it's pretty simple. 
This is a Mindstorms robot, something which costs less than $300, and it is essentially a case where it is the quality of intelligence that matters. This is just to tell you that we are talking about more than 15 years ago — and in fact it's not just a matter of 15 years: this solution was already available about 30 years ago, maybe more. The quality of intelligence here is definitely very high in computers, because we are essentially using methodologies for solving games that humans don't regularly use. It's a definitely different component of intelligence: it's not only a matter of speed, there are new methods for approaching the problem. But it is not always the case that the quality of intelligence is very good. In linguistic games, for example, it is not really the case. Let me tell you about a funny challenge we were involved in years ago, a challenge on crosswords. I don't know whether you typically solve crosswords, but for a computer, cracking crosswords — solving crosswords — is an interesting challenge, because you essentially have to read the definition, come up with the answer, and properly place the answer in the grid. So the idea is that you try to answer all the clues and then fill the puzzle grid. Years ago, in my lab, in collaboration with Google, we introduced a system which was capable of solving crosswords. You have the crossword here: essentially you take the definition, send the definition to Google as a query, get back documents on the basis of the definition, and then there is a mechanism for filtering the information, ranking the candidate answers, as you can see here, and finally solving the crossword. 
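The second stage of such a system — placing ranked candidate answers so that crossing letters agree — can be sketched as a small backtracking search. This is a hypothetical simplification, not the actual Siena system: the slot names and candidate words below are invented stand-ins for the ranked answers that the web-search stage would return.

```python
def solve(slots, crossings, assignment=None):
    """Fill every slot with one of its candidates so all crossings agree.

    slots: dict slot -> list of candidate words, best-ranked first.
    crossings: list of (slot_a, idx_a, slot_b, idx_b) meaning
               slot_a's letter idx_a must equal slot_b's letter idx_b.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(slots):
        return dict(assignment)
    # Heuristic: tackle the unassigned slot with the fewest candidates first.
    slot = min((s for s in slots if s not in assignment),
               key=lambda s: len(slots[s]))
    for word in slots[slot]:          # try candidates in ranked order
        assignment[slot] = word
        consistent = all(assignment[a][i] == assignment[b][j]
                         for a, i, b, j in crossings
                         if a in assignment and b in assignment)
        if consistent:
            result = solve(slots, crossings, assignment)
            if result:
                return result
        del assignment[slot]          # backtrack
    return None

# Toy puzzle: two slots whose first letters must match.
slots = {"1-across": ["CAT", "DOG"], "1-down": ["COW", "DIG"]}
crossings = [("1-across", 0, "1-down", 0)]
print(solve(slots, crossings))  # → {'1-across': 'CAT', '1-down': 'COW'}
```

As the talk notes next, this constraint-satisfaction part is what computers do well; the hard part is the linguistic quality of the candidate answers themselves.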
Interestingly enough, you can get very good results; we are talking about 15 years ago, more or less. You can still find the article which appeared in New Scientist, where they reported this possibility, at the University of Siena, of competing with humans in this very challenging problem. But interestingly enough, this is an example where computers are still not capable of competing with a real expert, and that's interesting, because at the end of the day the quality of intelligence is still pretty poor. When I say that the quality of intelligence is poor, I mean that at the moment computers are not capable of competing with humans in properly answering crossword clues. What computers typically do better is the constraint satisfaction that you have to implement when you allocate the words in the grid; but in terms of the quality of language, the quality is pretty poor. And more than a decade later, quality is still an issue. The same happens in computer vision. You can probably read articles where people say that computers are now better than humans at recognizing and classifying images, but my claim here is that this is not really the case. We have very good performance, but also very serious problems. Let me show you this example. Look at the picture on the left and the picture on the right. The picture on the right has been obtained just by adding some noise. Apparently, for humans, there is no remarkable difference between the two images: they are both a school bus. Well, interestingly enough, you can attack a neural network which tries to recognize this image. 
The problem is that by adding this noise, you can confuse a bus for an ostrich, which is definitely terrible. It's an example of artificial intelligence with the big A and the small I: in principle, image recognition is working very well, but you can attack a neural network by introducing appropriate noise, and then you can definitely fool its decision. Here is another example: this is a panda, but if you just add this noise — for humans it is the same image — the machine ends up concluding that it is a gibbon. Or look at this: you have this man here, and suppose he starts wearing these glasses; with the proper glasses, he is classified as an actress. The crazy thing is that if you classify, for example, the sex, you can change the predicted sex just by wearing these glasses. This is just telling you that there are remarkable problems nowadays: great results, but at the same time remarkable problems. The same happens with traffic signs, a very serious issue. All this is just to say that the time has probably come to look for a different mechanism for the emergence of intelligence, and there are a number of people all around the world who are trying to come up with algorithms for computers by looking more at the laws of nature, with the final purpose of better understanding the quality of intelligence. Let me just show a few things to put this in perspective. One of the big problems that is definitely open is that we need machines capable of working in an environment where there is a gradual exposure to information. 
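The mechanism behind the bus/ostrich and panda/gibbon examples can be sketched in a few lines. The idea of the classic "fast gradient sign" attack is to nudge every input component by a tiny amount in the direction that most increases the classifier's error. The toy linear classifier below is invented for illustration (real attacks target deep networks, where the gradient is computed by backpropagation), but the per-pixel perturbation is small in exactly the same sense.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)            # weights of a toy linear classifier
x = rng.normal(size=100)            # a "clean" input, standing in for an image

def predict(v):
    return 1 if w @ v > 0 else 0    # class 1 vs class 0

label = predict(x)                  # decision on the clean input
# For a linear model the gradient of the score w.r.t. the input is just w.
# Step each component against the current class by a small epsilon, chosen
# here just large enough to cross the decision boundary.
margin = abs(w @ x)
epsilon = 1.1 * margin / np.abs(w).sum()
sign = 1 if label == 0 else -1
x_adv = x + sign * epsilon * np.sign(w)

print(predict(x) != predict(x_adv))  # → True: the tiny noise flips the class
```

Note that `epsilon` bounds how much any single component changes, which is why the perturbed image looks identical to a human while the classifier's decision flips.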
Remember what I was telling you earlier: we would very much like to see machines which can learn from a big stream of data, just like humans, and not from a collection of data processed for one hour or one week. We would very much like to have machines gradually exposed to the environment and capable of learning just like humans. The other big problem is that you also need to explain your decisions. Intelligence is definitely not just a matter of learning; it's a matter of representing what you have been learning, and so it is extremely important also to be able to explain your decisions. So the time has come for bridging the different schools of artificial intelligence, which are more focused either on learning or on reasoning issues. Nowadays it's a very nice, exciting time, because there are a number of studies where, for example, you can generate images from text: we have preliminary experiments for generating, as in this case, a bird from its description. This is an interesting field: how can machines themselves generate a new concept? So let me show you a prospective challenge, based on a preliminary experiment that we carried out with our students. Look at this: there are students in a room with smartphones, and if you look at them together, they form a sort of screen — each smartphone is a sort of pixel. Look, this is the Italian flag: you see mostly green, white and red. Of course, you can localize the position of each smartphone and send it the appropriate color. Now look at what happens next: this is a sort of wave. 
It is similar to the wave that you typically see during concerts or at soccer matches. And now we can also write: we will write "Siena", S-I-E-N-A. Of course, the resolution here is very low, but suppose that instead of a handful of students you had thousands of people with smartphones. In principle, you could then define an interesting creativity loop, because machines can now generate images, and you could generate images from tweets coming from, for example, people who are watching a concert. Suppose a rock star is giving a concert — a typical context where participants and fans tweet messages. These messages could be collected by a computer, which could generate a visual pattern, and the visual pattern could appear on this screen, this human screen. Remember what I showed you before: you can send the information that you generate directly to the phones. It's a very nice creativity loop, because people could see what they themselves are generating: they send tweets, the tweets go to a computer which generates the image, and the image goes directly to the screen made of their phones. That's an example where the quality of intelligence could also find interesting loops with human creativity. So that's the loop: the artist sends information to the crowd, the crowd sends information to the artificial intelligence generator, the generator sends information to the human screen, and of course the human screen sends information back to the artist, who could be stimulated by looking at it. 
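The "each smartphone is a pixel" idea above amounts to a simple mapping: given each phone's approximate position in the room, assign it the color of the corresponding pixel of a target image. The sketch below is a hypothetical illustration; the room dimensions, phone coordinates, and the one-row "flag" image are all invented.

```python
def color_for(phone_xy, image, room_w, room_h):
    """Map a phone's room coordinates to a pixel of `image` (a list of rows)."""
    x, y = phone_xy
    col = min(int(x / room_w * len(image[0])), len(image[0]) - 1)
    row = min(int(y / room_h * len(image)), len(image) - 1)
    return image[row][col]

# Italian flag as a 1x3 image: green, white, red vertical stripes.
flag = [["green", "white", "red"]]
room_w, room_h = 9.0, 6.0                      # invented room size in meters
phones = [(1.0, 3.0), (4.5, 2.0), (8.0, 5.0)]  # invented phone positions

print([color_for(p, flag, room_w, room_h) for p in phones])
# → ['green', 'white', 'red']
```

In the creativity loop described above, `flag` would simply be replaced each few seconds by the image that the generator produces from the crowd's tweets.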
To close today's talk, I want to tell you about a new framework of competition in science and technology. If you look at the quality of intelligence, instead of assuming, as we have seen in the last few years, that everything is based on a sort of central structure — cloud computing in big companies — maybe something could change. In this picture you can see a brief history of computer science. Computer science began at the end of the 1950s, and then, in the 1960s, in big companies; then you move to personal computers, and then, as you can see here, to cloud computing, which interacts with smartphones. So intelligence is either on the left or on the right, but nowadays most of the intelligence is in cloud computing. One possibility, if you look at the quality of intelligence, is that most of it will move to smart devices such as the typical smartphone, which is continuously and increasingly becoming more efficient. That could be quite a natural evolution if you improve the quality of intelligence. And interestingly enough, you see a sort of cycle in this bouncing between global and local evolution of intelligence: we began in the 1960s with big centralized computational resources; then, after the passage to personal computers, we are back to intelligence which is mostly at the level of the cloud; and it could be the case that soon we will go back to local support of intelligence, especially if we are capable of improving the quality of intelligence. Because in that case, algorithms will become more efficient, and it will be possible to support intelligence on smartphones. 
Okay, so let me conclude this talk by mentioning that the evolution of this kind of idea is likely to need a different environment. Instead of the typical process where machines rely on huge collections of data and strong computational resources, we could start thinking about an approach like the one I'm proposing here, "en plein air", which refers to painting outdoors and comes mostly from French artists. The principle is that you can go beyond benchmarks, beyond the kind of intelligence that we have nowadays, and think of machines where the intelligence is mostly at the smartphone level and where information is continuously collected everywhere you live, and not just from big databases held by big companies. Here is, more or less, a list of the items I covered today, but the major message is this transition concerning the quality of intelligence. In my understanding, the transition toward the quality of intelligence could be extremely important for the evolution of societies. It is not just a matter of technology: it could be a matter of better establishing the relationship between humans and companies, and maybe a way of improving the control over where the intelligent processes are located — and especially over the problem we have been continuously experiencing, which is essentially due to the accumulation of big amounts of data in just a few companies. Okay, so thanks a lot for your kind attention. And of course, I would be pleased to talk with you. 
Of course, it's very difficult for me to know your backgrounds, so please accept my apologies for a talk which may not have been fully accessible. But there are a few things I would very much like to emphasize. One is: please don't believe that everything has already been done. You are young, and please consider that there are a lot of possibilities, for the years to come, in the evolution of artificial intelligence. The major message here is that it's not just a matter of technology and evolution in big companies; it could be mostly a matter of the quality of intelligence. The quality of intelligence is an issue which is covered in science nowadays but not very well perceived outside. A lot of people believe that everything is now in the hands of big companies like Amazon, Facebook, Google — and to some extent that's definitely true, because they have fantastic technologies — but please bear in mind the major message today: the technologies they have, in terms of methodologies, are essentially rooted in achievements of artificial intelligence that were studied 30 years ago, more or less. That's the main message. And things have been moving fast, so it could be the case that in the years to come we will see new, strong transitions. As you know, when new evolutions and new solutions in technology arise, you may have a dramatic change in how things move. Another major message is that we need to address the access to data and the sensitivity of data: how data could be handled in a more confidential way, how you could better respect privacy. This is an issue especially in Europe. 
It is probably less important in other places of the world, but I'm not sure that in the long run it will be negligible; I think it will be an important issue, because these technologies are used everywhere. Okay, so thanks again for your attention, and if you have any questions, of course I'll be pleased to answer. Most importantly, you can keep my contact details for discussing later: just drop me an email if you'd like to discuss, or to know better what we have been doing in Siena. Thanks again. I see Piero is also here, right? Piero, are you here? Oh, maybe Piero is not here. Let me see — no, yes, Piero is here. "Yes, yes, I'm here." Okay. If there are no questions, as I said, I'll be pleased if people want to contact me: you can just drop me an email. Let me put my email in the chat. For any additional information about what we have been doing at the University of Siena in this field, you can have a look at this lab — that's the link of the lab where we work at the University of Siena — and this is my email. In addition, it could also be useful to put another link, which is probably the most important one, because at the University of Siena we recently federated a number of labs that are involved in artificial intelligence at different levels. If you go to this link, you will see about 25 labs working in the field of artificial intelligence, ranging from law to medicine to economics, and of course core technologies like machine learning, computer vision, and natural language understanding, with many applications in the field of medicine. Okay, so have a nice day. I hope this has been useful for you, and I see additional information from Piero in the chat. 
"Yes, I want to remind the students — I'm sorry, Marco." Yeah, sure. "I just want to remind the students who want to know more about our university in general, our courses, our programs, or any other information: they can write to our email address — I just copied it in the chat — or arrange an appointment or a one-to-one meeting through the website, the portal they are looking at. That's all, just this additional information." Yeah, thanks, Piero. This is important because it shows, at a higher level, the kind of interaction you may have with our university. But please feel free, in case you become more curious about this field, to drop me an email. You can also be in touch with the students of our lab, for example, which could sometimes be even more interesting and useful than talking with me — but of course I'm available. You can just click on this link, the Siena Artificial Intelligence Lab, and you will see a number of PhD students who enjoy artificial intelligence. Okay, so have a nice day. Piero? "Yes. That's all for today?" Yes, right. "If there are no questions, you can close the recording. Thank you for your participation and your availability, and thanks to the students who came to see your very interesting speech." Yeah, and of course you are welcome in Siena. Bye bye, thank you. "Thank you, Professor. About the slides — should I write to your email to ask for the slides?" Yeah, sure. Just drop me an email at this address — you see the address. "We'll send an email directly to you, okay? Thank you, Professor." You're welcome.