Good morning, everyone. Thank you for being here. My name is Aldo Pisano. I come from Italy, where I am a PhD student in learning sciences and digital technologies at the University of Calabria, and I am also a high school teacher. These two roles really matter, because my research is closely tied to my day-to-day practice and to my approach to AI. So I must start with an apology: this won't be a technical talk. It will be about ethics, education, and AI, because those are my research interests. What we are trying to do with the practice we are building is to find a framework not only for AI literacy in schools, but for AI and ethics literacy, which means finding a critical approach to the use of AI while our future citizens, future social actors, and future workers are still in education. If we start from the very beginning, it is really important to foster awareness about the critical use of AI. Now allow me to guide you through this narration: what I want to do today is tell you the story of this research project. We will start from some methodological premises, then focus on the AI and ethics literacy model, and finally try to understand whether we can find a model for a sustainable approach to AI. About the methodological premises: as I said, we will work with an ethical and an educational approach, and we will see why and how this matters, because we should assume that education shapes the world, the world in which our future citizens, our new generations, are going to live with AI. On the other hand, we have tried to identify some guidelines for approaching the ethical use of AI.
In this sense, we will use a risk-based approach, a responsibility-based approach, which is really important to me, and a human-centered approach. These will guide us through the ethical use of AI and through building this framework. Before we start, and I hope this will not be too philosophical, and that I will find you all awake at the end of my talk: looking at this risk map, we have a very philosophical main risk, which is the prevailing of a mathematical model. If the mathematical model prevails in education, we run the risk of weakening critical thinking. If it prevails in politics, we run the risk of overtrusting AI systems and absolving ourselves of decision-making processes. And in society, we run the risk of thinking that we can rely only on rigorous demonstrations and no longer on dialectics. This means we risk weakening the idea of dialectics as a skill of free debate and free argumentation, a way of finding different solutions that starts from a forum: a space in which we can talk together democratically and inclusively, finding different principles and different rules to answer different challenges in different scenarios. That is the point of a democratic approach to what we call truth. And the problem of truth is a deeply ethical problem when we talk about AI, as we will now see. Now, I think AI is really useful in many ways, in education too: associations across big data, risk prevention, pattern creation. In education it is really important because it can help us personalize learning, and at the same time it can help us regulate feedback with our students, giving them very quick feedback.
But when we talk about ethics and AI, something strange happens, because there is a very trivial idea of ethics today. When I ask people, "What do you think ethics is?", they usually answer: ethics is that thing that tells you what you must do and what you must not do. As I said, this is a very trivial idea of ethics. Ethics has developed over centuries, together with technology, society, and social processes, and today we talk about applied ethics. Applied ethics means that we have to know the new scenarios, to acquire knowledge about new situations in our social life and about new technological developments, in order to build new principles to regulate these new tools, especially AI. So the ethics of responsibility helps us: it is not just about forbidding something, it is about guiding and coordinating the use of AI. That is the point. It is really simple, but it can be useful if we apply it to the educational framework, and in this sense it can help us develop new skills in the new generations. They are going to live in a new world, sharing their lives with AI systems, and they should know how to approach those systems critically, whether in a social space or in a workspace. In Europe, we have two important documents about AI regulation. One of them was approved just recently, last Friday I think: the AI Act, a very strong legislative model. Its starting point, on the other hand, was the European Ethics Guidelines for Trustworthy AI. About the AI Act: as you can see here, it tells us the main risks that can come from the use of AI.
The first point is the unacceptable risks, which means manipulation, social scoring, and all the practices that are to be forbidden by the AI Act in Europe. Now, the AI Act has been discussed a lot, because you cannot deny companies or industries the possibility to develop new systems, and at the same time you should regulate the use of AI in order to preserve democracy and pluralism. So how can we adopt a system that helps us regulate AI? This is where the European Ethics Guidelines for Trustworthy AI come in. They were the starting point of the AI Act, and they provide, as you can see, a system of principles that every company or industry can apply case by case. If I am an industry and I am not damaging the environment, then my first principle, my priority, need not be sustainability; it could be accountability. So the Trustworthy AI guidelines give us a flexible model of principles that can be adapted situation by situation, a model of ethics that works while respecting the freedom of technological development. All these points are now under discussion in Europe, because there is a fight between industry and Europe. But what we really need to understand is that if we do not know the problem, we cannot face the challenges. Let me give an example. Between the end of the 19th century and today, we have witnessed an ever-increasing use of cars. And what have we done? Have we denied everyone the possibility of owning a car? Obviously not. What we have done is reshape the world, redesign the world, building streets and interstates, and at the same time we built new rules. That is what a model of ethics can help us do: build new rules for the new scenarios that are coming up.
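The tiered, risk-based logic described above can be sketched as a simple lookup. This is only an illustrative sketch: the tier names follow the AI Act's risk-based approach, but the example systems and obligation summaries are my own paraphrases, not legal text, and the matching function is hypothetical.

```python
# Illustrative sketch of the AI Act's risk-based approach.
# Tier names follow the Act; example systems and obligations are
# paraphrased summaries for teaching purposes, not legal text.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "manipulative techniques"],
        "obligation": "prohibited",
    },
    "high": {
        "examples": ["exam scoring", "recruitment screening"],
        "obligation": "conformity assessment and human oversight",
    },
    "limited": {
        "examples": ["chatbot", "deepfake"],
        "obligation": "transparency (disclose the AI interaction)",
    },
    "minimal": {
        "examples": ["spam filtering", "video game"],
        "obligation": "no mandatory requirements",
    },
}


def obligation_for(system_description: str) -> str:
    """Return the obligation of the first tier whose examples match."""
    for info in RISK_TIERS.values():
        if any(ex in system_description for ex in info["examples"]):
            return info["obligation"]
    return "no mandatory requirements"
```

The point of a sketch like this in class is the ordering: the tiers are checked from the most severe downward, so a system that qualifies as unacceptable is prohibited regardless of any milder use it might also have.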
In education, what we are doing with the project we are running with schools in southern Italy is building a model of responsibility by design, which means that from the very beginning, while they are still pupils, still students, I can work with them to build and foster their awareness of the ethical use of AI, in the world where they are going to live and to share social spaces and workspaces with AI, as I said before. Ethics and education help us because, while law is slow (it takes time, and technology is developing very fast), education is now. Ethics is now. As a teacher, to give just one example, I can ask my students: are you controlling your smartphone, or are you controlled by your smartphone? A very simple question, a very simple strategy. And we need to work on these little strategies in order to build awareness about the use of AI. Because if we look at the problem from the perspective of the company, of the industry, we are working in a merely technical way. But if we look from the perspective of the educational field, there we find some problems. If you go into a school or a university, maybe people do not know that in the settings of their smartphone there is a way to check their screen time. Maybe they do not know. It is just a problem of information, a very simple problem, and we can solve it through this model of responsibility by design. This is the feedback loop that tells us that education can shape the world, and it is not new. So I need you to jump back 2,500 years, sorry for this. Here we are around 500 BC. Paideia is a Greek word which approximately means education; we can translate it as education, and the Paideia model is an educational model.
The Paideia model still works because it is very simple. It works like this: the educator helps the student to grow up as an autonomous individual and, at the same time, as a future citizen. But those students who will be citizens will in turn be educators, as parents, as workers, perhaps as teachers. And so here we go again with the feedback loop of Paideia. This is a model that still works, but has it changed over the centuries? Obviously yes, because for centuries we have had the mediation of technology. Technology placed itself between the social actors of Paideia. And today the problem is not this, because we have always had a tool of mediation: the book, the blackboard, and now the e-book and the digital board. That is not the problem. The problem with AI is that it is not between the social actors of the Paideia circle; it is among us, with us. It is not only in the classroom; it is outside the classroom, in the social life of our students, in their family life: think about IoT systems. So they need to know that they will live with AI, with an approach that does not deny the possibility of being supported by AI as a decision-making tool, but teaches them how to coordinate their actions as future citizens and future workers in a world where they will live with AI. Because we are in a moment of anthropological twist, what the philosopher Luciano Floridi would call the onlife era: an era in which our identity is shared between a physical identity and a virtual identity, both supported by AI systems. We need to know that we live in this new era, in a wider space, and that we need new forms of citizenship. And here we come to the AI and ethics literacy framework. As you can see here, it is not only about students; it is about teachers too.
We cannot think that we can educate only students; we should educate teachers too, and those teachers will educate other students in this ethical approach to AI. But today we will focus just on students. When this question came to my mind, back in 2019, of whether and why we need an AI and ethics literacy, it split into three further questions: why an AI and ethics literacy, how an AI and ethics literacy, and where an AI and ethics literacy. About the why: it is what we have talked about until now. Just as a little briefing for you: the why is to promote a critical use of AI in order to prevent manipulation, prevent discrimination, protect autonomy and responsibility, and support society and politics. About the how: this is a core topic, because here we need to understand that we cannot educate only towards AI, or with AI as a tool; we also need to educate AI itself. I am thinking of those students who will be programmers, mathematicians, and engineers in industry, and who will train algorithms. They need to know now that algorithms are becoming more pervasive, and that AI in general will become more autonomous and more adaptive; it needs to be educated to answer different scenarios in the specific way each scenario requires. I will make this clearer in a few minutes, because this is what we are doing in schools. It is very simple. These are some best practices: we work eight hours with different classes and different students, on these main topics. First of all, automation bias: our students need to know that AI does not own the truth. To do this, alongside a more technical education such as data awareness, computation, or coding, we are developing this broader awareness.
We need to promote another approach: educating for critical thinking. This means I can be supported by an AI system; an AI system can support me if I am chatting with ChatGPT. But what I need to do is question ChatGPT and try to find divergent, different solutions from the ones ChatGPT is giving me. This is a way to develop critical thinking. Is it telling me the truth? Is it fake news, or is it data that I can collect and keep? This is the point. Through this, we develop argumentation and debate skills with our students: not only debate between two students or among a group of students, but even between a group of students and the AI itself. We ask ChatGPT some questions and try to understand whether it is giving us the best solution for the specific problem we are trying to solve. Finding the best solution: this is the problem of the truth of AI. We also need to develop another skill, frame analysis: can that answer, that principle, that value be applied to that specific situation? That is the point about the use of AI and about the education of AI itself. Then we are working on dopamine fasting strategies too, like checking screen time, as I said before. And alongside this, to develop critical thinking, we are working on some fact-checking rules. When I am looking for data or news, or just searching a website or my social space, I need to know whether that news is true or not. We have a really simple strategy, the lateral reading of the news. Lateral reading, which seems trivial to us, means that I get out of that website: I open another browser or another page, and I ask whether that website is a fake website, whether that news is fake or true.
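The lateral-reading routine above can be summarized as a small decision rule: instead of judging a claim on the page where it appears, you open independent sources and count how many corroborate it. The sketch below is purely illustrative; the function name, thresholds, and the hard-coded corroboration data are my own assumptions (a real check would consult independent outlets or fact-checking sites).

```python
# A minimal sketch of the lateral-reading routine: leave the original
# page and classify a claim by how many independent sources confirm it.
# Source data is hard-coded here for illustration only.

def lateral_read(claim: str, independent_sources: dict[str, bool]) -> str:
    """Classify a claim by counting independent corroborations."""
    confirmations = sum(1 for agrees in independent_sources.values() if agrees)
    if confirmations == 0:
        return "likely false"
    if confirmations < len(independent_sources):
        return "disputed"
    return "corroborated"


# Hypothetical usage: three independent outlets, two confirm the claim.
verdict = lateral_read(
    "the website's claim",
    {"outlet_a": True, "outlet_b": True, "outlet_c": False},
)
```

The design choice worth stressing with students is that the original website never appears among the sources: the whole point of lateral reading is that a page is not allowed to vouch for itself.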
These are really simple strategies, and that is what we need to work on when we educate for the critical use of AI, because, as I have said many times today, these things are trivial for us, but not for the new generations. And the new generations will be our future workers, our future politicians, our future social actors in our world. We are almost at the end. We are doing this work in 12 classes in southern Italy, during the civics hour, which means adopting an interdisciplinary approach, and we are working on questionnaires that can help us collect data at the end of this research project. With that data we can try to understand whether this model can be widened and developed across Italy, and perhaps shared with others who want to join. Then, very simply, we are working not only on knowledge but also on competences, in order to build this socially sustainable circle of the use of AI. To do this, we are thinking in terms of a large ethical model: we start from the individual, we work on the citizen, and we work on our future workers too. We are rethinking that Paideia circle in order to help our students, as future workers in future companies, to be aware that AI can be dangerous, especially in training algorithms on datasets that could be incomplete, for example, and fail to represent some groups of people. In this sense, and I am almost at the end, we are developing another idea, not for schools but for industries and companies: the ECHAI model, ethical counseling for AI. This sits between the programming phase of AI in the company and the application phase of AI in the company or the industry.
This can be very useful because it helps prevent irreversible ethical risks, like discrimination: if in my company I have a sociologist or an ethicist and I build an interdisciplinary team, they can help me understand which data I can use to build a dataset, so that an algorithm trained on it avoids discrimination. On the other hand, it is useful to preserve autonomy. Whose autonomy? The autonomy of employees, because in a world and in a workspace where AI is more autonomous and more adaptive, we need to increase the awareness of the employees themselves. Looking at the human-computer interaction model, for example: if the AI is really autonomous and can interrupt my task, I need to know how far I can cooperate with the AI before it interrupts my work, when I have to take a decision and when the AI takes it. That is the point of setting autonomy. With this model, we are trying to bring industries and companies from a level of ignorance to a level of knowledge about the ethical governance of the use of AI. We have two different models. The first is a soft model, a networking model. It is very cheap for an industry or a company, because it is just about informing on the ethical use of AI: I adopt some principles and rules for the AI governance of my company, I publish them on my website, and then I create a network. Companies and industries that do not know anything about this can share this knowledge, and so maybe they adopt other principles to regulate their own AI systems. Then we have another model, still the ethical counseling for AI model, but a hard model for industries, which means it is more expensive for them, because it is education-based.
We provide training programs for employees on the ethical use of AI, or we can engage or hire people from other organizations and other fields, like ethicists, sociologists, and anthropologists, who can tell us how to work ethically on an AI system. In this way, I can build the interdisciplinary team that is really important nowadays for the ethical regulation of AI. In the end, what I want to say is that everything we are doing is a bottom-up process. We are working with education, with students, and with employees if we are an industry or a company. But we also need top-down intervention tools, like politics, and in this sense politics especially can help us preserve pluralism, complexity, and truth. And I want to end with this quote by Hannah Arendt: from a political perspective, truth has a despotic character. This means that if we rely too much on AI, if we think that AI owns the truth and we do not adopt a critical approach to AI systems in order to understand the specific frames in which we apply them, it can be a problem for inclusion. To build inclusive scenarios, I must not take one truth and say: this is the truth, and it is the same for all situations. That is not the point, and yet that is what AI could do. What I need to do, with the critical approach we have been talking about, is to be intelligent about the frame I am working in. Intelligence is adaptation to the specific situation, even when we are using an AI system. That is why intelligence is really something different from computation. Thank you very much for your attention. If you have any questions, or if you want to share ideas or join the project, these are my contacts. Thank you very much. Oh, this is a very strong question.
From an ethical perspective, I am happy with it; but if I look at it from the perspective of technological development in industry, I can see that such a strong legislative model could compromise technological development. From an ethical perspective, though, I am happy, especially when we talk about the social scoring tools and the manipulation tools. That is where I am really, ethically, happy. Thank you.