[Opening remarks partly inaudible.] I'm Maggie Chapman, MSP. [Inaudible.] I'm absolutely delighted, as I think many of us are. I sit somewhere in both camps most of the time: I'm really excited by the opportunities that AI presents for our society, but I'm very aware of the challenges and risks that it presents to us as well. This year our Festival of Politics is in its 19th year of thought-provoking discussion and debate. [Inaudible.] We are very pleased that you are here to join us in this Where Are The Ethics in Artificial Intelligence discussion, in partnership with the University of Edinburgh. Later I will be inviting you to get involved with comments and questions. We want this to be a discussion rather than just you listening to us on the panel. But you are also very welcome to contribute online via the platform that used to be called Twitter: if you go to @VisitScotPal, you can contribute to the discussion there, and there's a whole host of other online platforms as well that we've been participating in over the last few days. We'll also be on the Parliament's TV channel, SPTV, so welcome to all of those who are joining online.
I'm very pleased this evening to be joined by Brian Hills, Dr Atusha Kaziazadeh, who is joining us online, and Professor Georgios Leontides. Brian is Chief Executive of the Data Lab, which is Scotland's innovation centre for data and artificial intelligence. Atusha is a Chancellor's Fellow in the Philosophy Department and the Edinburgh Futures Institute at the University of Edinburgh. Her research is on ethics, safety, the philosophy of AI, the philosophy of science, and their intersection with sociopolitical philosophy and the philosophy of language. Georgios is a Professor of Machine Learning and the Director of the Interdisciplinary Centre for Data and AI at the University of Aberdeen. His research interests revolve around the foundations of machine learning, AI for agri-food sustainability, AI ethics and AI for industrial applications. So I'm sure we'll have a wide-ranging discussion, but to kick things off, if I can put a couple of questions to our panel members: could you give us an overview of some of the things that AI is currently capable of and is currently doing? How are we using it? How does it benefit our lives at the moment? Brian, if I can start with you. Thanks, Maggie, and hello everyone. At the Data Lab, we work on a lot of collaborative projects between our universities and industry in data and AI, and there are lots of examples now from different sectors of its application. For example, you'll probably see quite a lot in the news coming through about the use of AI to help healthcare. There's a recent example in Aberdeen, actually, of breast cancer screening breakthroughs and the use of AI to identify anomalies in images and flag them to surgeons and clinicians to look further into. We're also seeing applications in manufacturing for predictive maintenance and analytics of machines, and in lots of different sectors, climate as well: we're starting to see it used in terms of climate modelling and various other things.
So, when we started the Data Lab nine years ago, we were in that big data hype cycle, when everybody was talking about all of the data and what we could do. We're going into the hype cycle of AI at the moment, or the next wave of AI, as I call it, because research in this field started many years ago, and we're starting to see some of the practical applications come through from that. I'm sure we'll get into the challenges, and there are significant challenges around that too. Thanks very much, Brian. George, in your work, where have you come across really quite powerful uses of AI? Definitely. I think a very exciting area, which has been quite important over the past decade, is the application of AI for forecasting. We have some success stories of how you can predict, say, strawberry yield production, or predict biodiversity loss, so it has a clear and direct involvement in the climate crisis, for instance if you were to mitigate its possible bad effects. We've seen cases where we can use machine learning for predicting anomalies in a nuclear reactor, so that we can predict events that might happen, and as you can imagine, in the nuclear industry a potential event can have very severe consequences. Of course, there is autonomous driving: perhaps you've seen the electric vehicles from the likes of Tesla and Lucid and other brands. Many of the systems they are using in the background are AI systems for object detection, computer vision systems that can identify pedestrians or objects, with systems in the background to avoid those objects. In other cases, as Brian mentioned before, for asset management we have been working with large companies like Siemens to develop systems to predict emissions in gas turbines, so we have applications that have a direct industry benefit. In the past, we've done work with refrigeration systems.
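The anomaly-prediction work described here often rests on simple statistical baselines before richer models are layered on. As a hedged illustration only (this is a toy sketch, not the actual Siemens or reactor systems, whose details are not described in the discussion), flagging anomalous sensor readings with a rolling z-score might look like this:

```python
import statistics

def zscore_anomalies(readings, window=5, threshold=3.0):
    """Flag indices whose reading deviates strongly from the mean
    of the preceding `window` readings.

    A toy stand-in for the kind of anomaly detection used in
    predictive maintenance; real systems use far richer models.
    """
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev == 0:
            continue  # flat history: z-score undefined, skip
        if abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# A steady turbine-like signal with one spike injected at index 10.
signal = [100.0, 101.0, 99.5, 100.5, 100.0, 99.8, 100.2, 100.1,
          99.9, 100.3, 140.0, 100.1, 99.7]
print(zscore_anomalies(signal))  # the spike at index 10 is flagged
```

Note the trade-off the `window` and `threshold` parameters encode: a short window reacts quickly but inflates the variance estimate right after a spike, which is why the readings immediately following the anomaly are not themselves flagged.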
We had a very big project with Tesco where we were able to create a system that can automate the scheduling of defrosting cycles for freezers, and you can imagine that those applications have a financial benefit for the companies, but also an environmental benefit, given that you might be able to reduce carbon emissions, and that is a very important factor that we have to consider. Thanks very much. Atusha, welcome; can I come to you. We've heard some of the scare stories around AI threatening our very existence and leading to extinction. Do you share any of those concerns, or where do you see the value of AI? I think at its core AI means algorithms, and we've had algorithmic systems, artificial intelligence systems, since 1954 and onwards, and then there have been, as the other panellists mentioned, various different cycles and very different success stories, and lots of work for decades has been done on the social implications of artificial intelligence systems. I personally have worked on the social implications of recommender systems: how artificial intelligence systems are used in social media and how they can sometimes socially manipulate us, how they can result in polarisation. These are some of the negative implications of some of these systems that are used to connect us and to facilitate our network connection to other people.
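The recommender-system dynamics raised here can be made concrete with a toy model. As a hedged sketch (hypothetical items and tags, not any real platform's algorithm), a recommender that ranks purely by similarity to what a user has already engaged with keeps serving more of the same, which is the seed of a filter bubble:

```python
# Toy engagement-driven recommender: rank items by tag overlap with
# the user's reading history. Purely illustrative; real platforms use
# learned models, but the feedback loop has the same shape.
ITEMS = {
    "a1": {"politics", "uk"},
    "a2": {"politics", "us"},
    "a3": {"science", "ai"},
    "a4": {"sport", "football"},
}

def recommend(history, items, k=2):
    """Return the k unseen items with the largest tag overlap
    with everything the user has already read."""
    profile = set()
    for item in history:
        profile |= items[item]
    scored = sorted(
        (i for i in items if i not in history),
        key=lambda i: len(items[i] & profile),
        reverse=True,
    )
    return scored[:k]

# A user who has only read politics is shown more politics first.
print(recommend(["a1"], ITEMS, k=1))
```

Each click then feeds back into `history`, narrowing `profile` further on the next call; that self-reinforcing loop, scaled up, is the polarisation mechanism being described.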
Also, of course, the rise of generative artificial intelligence models, in particular the public release of ChatGPT, has made this generation of AI accessible to the public, and there are a lot of very serious social questions about how these systems might result in the displacement of some sections of labour, and how they can again increase and speed up social manipulation and the positioning of different people in different filter bubbles, because you can generate realistic content with these models very easily and then target people with different kinds of personalised content. These are all questions about the social implications of the systems, and sometimes you might think that if these negative social implications accelerate in society, they might very much break up the democratic societies that we have set up. Because if you lose your capacity to distinguish between true information and information made up by a generative model, if you can't trace where this information is coming from, and if, of course, in some sectors people lose their jobs without society having thought about a replacement for their positions, we can see that societies, in a cumulative way, can go into a chaotic situation, and that can result in national and global chaos. Discussions about existential threats of artificial intelligence can be interpreted in so many different ways. Some of those ways are very speculative, but when these existential problems are understood in terms of an accumulation of many other ethical problems, I think they deserve attention. Thanks very much. You spoke there about the use of AI to manipulate and to make people believe or think differently, and I suppose one of the challenges is things like deepfakes and the sort of representation of reality that is not real at all.
From a philosophical angle, how does that change, in your view, what it means to be human? I think that's a great question. Again, we've had deepfake technology since, let's say, a decade ago, but now with the rise of these generative models it is possible to create deepfake content much faster and then embed it into social media and spread it much more quickly, and I am personally very concerned about the epistemic confusion that we might be situated in if this deepfake content is produced and spread more quickly. If we go into a situation of epistemic confusion, gradually we might lose the sense of trust in each other, trust in different democratic institutions and in many of the things that, throughout time, different human societies have set up as ideal organisations, ideal institutions that give us meaning as humans. Those would all be challenged. So, in short, deepfake content can very seriously impact the sense of trust that we've had in other people, in institutions, in democracy, in different values that we always thought were good human values, and this deepfake content might also, at an individual level, make us very epistemically confused. Those are very challenging situations, and those aspects of what it means to be human would be affected by this technology. Thanks. And Brian, following on from that, I suppose there's a relationship between how we understand how data is used in the round and its abuse and misuse in an artificial intelligence sense. Other than deepfakes and that capacity to manipulate, where are the areas for concern that you see coming out of this?
Yeah, I think there are lots of areas that are under research just now, areas such as bias in data sets, which then influences the outcomes of people's analysis. Something that changed my viewpoint significantly on this was actually way back in 2016. As Atusha said, a lot of this stuff has been around for a while, but it's coming to the fore now through the press and various other means. I'd spent my life in industry doing data analysis, running data teams, building software, and I moved to the Data Lab, and then this book came out in 2016 called Weapons of Math Destruction by Cathy O'Neil, about how big data could drive inequality and impact democracy. It was by somebody who'd been working in the field, and I'd never even thought about the side effects of this stuff before, or through that type of lens, and you've seen a lot of books and research from that point that are really opening the door on this: Invisible Women as well, and Technology Is Not Neutral by Stephanie Hare. Actually, when I was writing these things down and preparing for this, the thing I noticed was that they're all written by women, which I think is a great thing. And we might get into this: a lot of the narrative in AI, the language we use, is misused, I think. Talking about godfathers I don't think is helpful; talking about hallucinations I don't think is helpful; talking about existential risk, well, let's look at Hawaii, or let's look at Europe, or let's look at Canada burning just now; climate is existential. So that leads me to the language and how we portray this stuff. I think it's really important for those of us working in the field, as many people are doing, to bust some of the myths around this and to have a proper debate about it. And on that point about language, and the power of fear that language can convey, what are the things we need to be considering when we think about machine learning, when we think about how we get rid of bias in that learning process? That's a very big topic, and I'm very happy that Atusha introduced quite a few topics around this area, because I think it's very important to understand that sometimes we speak about ethics and we disconnect the ethics side of things, or responsible AI or transparent AI, from the technical side of AI, as if they were two different things. Actually they are multiple things that have to coexist, and they have to be considered when we conceptualise new technologies. I think one of the main problems we have at the moment is that all these tools that have come out in the past few months, such as ChatGPT and DALL-E and Stable Diffusion models that generate images and text, have been made available to the public, and anybody can use them without any safeguards, and they are very cheap to use. That's another thing that's new for us: in the past the technology existed and there could be deepfakes, but it was very hard for the public to find those tools, to download them, to go through the whole process of using them, whereas at the moment everything is free for consumption. So from my perspective this is very important as we move towards an accelerated progression of AI, because what we've seen in the past year has been an unprecedented explosion of new techniques. Of course we have the headlines about ChatGPT, but you have to appreciate that in the background there's a huge amount of work that has gone in from companies and from universities, and catching up with this progress is very hard. Even if we start today to regulate or define an ethics framework for something like ChatGPT as we know it today, maybe by the time we have this framework established we've moved on, and then we always have to catch up, and that cannot happen. So I think we need to shake the foundations of what we do at the moment, re-establish the way we do things around AI to be the main building blocks, and then the technical contributions
and the technical developments have to follow through those lenses; otherwise there's always going to be a catch-up, which is not going to be a nice way to see AI develop in the future, from my perspective. Thank you. I could go on asking questions all night, but that's not my job, so let's open this up to you. Happy to take questions or comments from any of you; please just raise your hand. We do have a roving mic to come to you. If you can keep your comments or questions brief-ish, that means we can get more discussion in. I've got somebody right at the front. Thank you. Thank you very much to all the panellists; this is a very good conversation. I think we need another day for this kind of topic, because we won't be able to exhaust all our questions. I am actually a master's in data science student, and I hate AI because of the ethics around it; I'm quite cynical about it, and I know, because I work with a lot of data, that there are a lot of issues around data privacy. But my question is for people like yourselves, Brian, because you've got influence in your position: where are we in Scotland in terms of actually cultivating education and awareness, of supporting people and supporting organisations to handle privacy, to be able to ensure that users are not vulnerable because of these apps and this AI that's coming out? Yes, good question. I'll just give you an example from the Data Lab perspective. As you know, there are lots of people doing master's degrees in data science right now across Scotland, and we've funded over a thousand students across 13 universities in Scotland. The pattern we saw, as we were doing calls for universities to take the scholarships, is that when we asked how they were teaching ethical uses of data, quite a few didn't answer the question. Some would answer and talk about research ethics, which is great stuff, but the thing that alarmed me was: how are we educating the students coming through to know about the choices that they are
making when they're using and handling data? You know, if I think back to last century, when I started out in this stuff, I was dealing with SMS data from phones, decoding messages and doing various other things for network operators, and I was never given any guidance on what I could and could not do with the data. So for the students we're involved with, we make it mandatory that they do a data and AI ethics course. Our first one was created by our colleagues at the University of Edinburgh, and if you receive government funding you need to do this, and we're evolving that programme at the moment. From a university education point of view, what I would like to see, because a master's is very condensed, as you know, it's very intense, you're learning a toolbox of techniques with often very clean data, is actually building these things in through an interdisciplinary approach, so you have knowledge from other faculties and departments, as I'm sure George and Atusha could talk about, to learn how to do this responsibly. I also think there's value in learning about history and learning about arts and humanities, because we're creating these technologies to benefit humanity, apparently, but we need to learn from history about the side effects of bringing new technologies in. The thing that concerns me is that if we're spending a lot of time teaching data science to people, and they've got the toolbox, they may not know the right and wrong ways to apply it, and I think that's definitely an area we need more focus on. Atusha, can I come to you: what's in that toolbox? How do we discern what is a right or wrong way to use these technologies? I actually want to say two things. One is that I think there is a sociological issue that every couple of years we start using different labels for the same set of technical tools. When you say you do data science, the study of data science definitely also includes the study of AI, although for various sociological
reasons we might have a master's in AI versus a master's in data science. Again, if we go back to the history, artificial intelligence is a field in which many different people try to design different algorithms in order to automate different aspects of human reasoning, and so whenever we design algorithms that employ statistical tools to analyse data, we are using some version of artificial intelligence. So I think we first need to demystify this notion of artificial intelligence as being something very special: algorithmic reasoning is, according to many different textbooks, an instance of artificial intelligence reasoning. And, maybe as a personal response, I think it is unfortunate that this whole label of AI ethics was put on a lot of efforts where people were really trying to cultivate a mentality of thinking about the social implications of technology. Ethics could mean just normative ethics, and normative ethics is a theory of how we have to do the morally permissible thing: there are Kantian and other deontological theories, utilitarian theories, different virtue ethics, and so on. A few people have tried to build moral machines, meaning that they try to take those ethical theories into the way the machine does its reasoning, but that's a very small part of AI ethics. A lot of AI ethicists in this community, in my understanding, are really trying to make sure that the data sets they are working on do not include implicit biases, whether in natural language processing applications or healthcare applications. There are so many examples showing that computer scientists, data scientists and artificial intelligence researchers figured out that the way they had, for example, formulated an objective function or loss function does not really capture something out there in reality. So, in reality, AI ethics, again on my view, really means that we need
to cultivate in everyone who is designing these systems the tools that allow them to reason critically about why they are designing the systems and why they are analysing different kinds of data, to make sure that there is transparency available, and to be very critical about implicit biases. They need to ask all of these kinds of questions. Here at the University of Edinburgh, when we teach courses on AI and data ethics, we actually bring the two together most of the time. There is a master's programme on AI and data ethics launching this September, and we really try to give the students various critical skills to employ when they want to design data and AI ecosystems. So ethics in AI ethics really means tools from the social sciences, humanities, anthropology and critical studies that allow AI researchers and data scientists to think very critically and carefully about what they are designing. It's interesting what you say there; it's almost as if we've created the problem, or maybe the fear, around ethics in AI because of the language and the groupings that we've used. I'm wondering whether you have a sense of where that comes from, because, thinking of another very big policy area that directly affects all of our lives, take urban planning: we don't talk about the ethics of urban planning, and yet where you put the local hospital or an out-of-town shopping centre has drastic ramifications for how people can and can't live their lives. How is it that we've got ourselves into a position of problematising the ethics of AI, do you think? The way I understand the field is that there were a lot of problematic instances of the deployment and development of AI systems that were reported by many different journalists. A lot of the examples that we referred to were actually formulated and brought to the
attention of the public by investigative journalists originally, and again the concerns are about the social implications of this technology. If you are developing a system that, for example, tries to extract information about whether it is a woman, or a member of a minority, applying for a credit card, as compared to a white man, and the system is allowed to capture that information based on the person's browsing data and then recommends two very different kinds of credit card options to these different people, it means that there is something wrong: why is the system biased in offering these different alternatives? This is one example, and then there was a huge array of different cases. In one very interesting case study published in Science, it was shown that hospitals across the US were using a specific system that is systematically biased against Black Americans when allocating special medical care: given a white American and a Black American, there is a systematic bias. So there were all of these problematic cases, and people realised that the social implications of these advanced algorithmic systems, again meaning AI, are myriad, and we have to start studying them. Many different communities came together studying the fairness, accountability and transparency of these systems, but then they realised that it is actually much broader than that. On your question about the ethics of urban planning: obviously we don't talk about the ethics of many things, but we do talk about the ethics of weapons development. Oppenheimer is one example of it, where you see how a scientist, a group of scientists, needed to deal with all of these ethical trade-offs, and how difficult and complicated that has been. So a lot of technological advances, I think, have carried with them various
ethical and safety questions, and again, if we interpret ethics in terms of concerns about social implications, then we realise that everyone who is dealing with AI systems, or advanced algorithmic systems, should reflect on the social implications of those systems. So everyone needs to do AI ethics, or be familiar with AI ethics, at some level. And I suppose that really gets to the heart of the question: how do we raise that awareness? In your research, in the work that you've done and with the people that you work with, do you see that recognition and acknowledgement of the need for a greater awareness of criticality, of that critical reasoning that Atusha talked about? Yeah, that's definitely the case. I think it's crucial to reflect on one other element: sometimes we make the assumption that when we have a system developed, like ChatGPT for generating language, people, even the developers, are aware of the limitations and limits of those technologies, but that's not actually the case. Even as we speak, there is no accepted framework to evaluate those systems; even the developers of this technology don't know the full capabilities of those systems or where those systems might fail. I think that has been one of the main arguments: why do we release those tools if we don't know the implications of using them, or the limits of using them, when they can go wrong and what consequences they can have? I think we've seen, in the university sector mainly, and perhaps a bit less in the industry that develops these technologies, that we are paying a bit more attention to the safety of AI systems and the accessibility of AI systems. So when we teach students about the basics, we speak about the data, but with respect to the ethics and the use of the data: why we use the data, who the end users and the stakeholders are, why we develop a system, how we can make a system more transparent so that it justifies its decisions. And I think that comes
before the problem of discrimination: perhaps you have different groups of people, some minorities, that might be discriminated against, but these are factors, and use cases, that we have to consider way before we develop the system. When we are conceptualising the system, before we put in the effort to say, okay, let's find the data, let's program it, let's develop it, let's train it, we have to think about why we do all of these things. Who are the stakeholders of this system, of this kind of technology we are developing? Who is going to use the tool, and in what way? Is it going to be on a mobile device? Is it going to be a system in the background without any inspection, where there is no human involved in the process and an AI system on its own makes decisions? Or you may have the case where there is a human-AI collaborative effort, where there is a human and there is an AI agent, and together they take an action, they make a decision, so you have a human supervising the AI system. But in most cases that we have seen, take ChatGPT again as an example, we don't have this: the tool is there, anyone can use it, no matter how old you are or who you are; it doesn't really matter. The point is: who is responsible, and legally responsible, for the consequences of what comes out of the system if a user takes that output as a given and uses it to do something else? Who is responsible: the developer of the technology, the user that used it, the providers of the data that went into it? These are open questions with more of a legal side, but they are open questions that the technology community is really trying to debate and find a way forward on; it's an ongoing exercise. But again, going back to my previous point, from a university perspective I think we have the possibility to reflect on those things, to put these things on the central stage, so that we don't
really let things drift. More questions, more comments from you? Someone right at the front; if you can just wait for the microphone. Thanks. Cheers. Hi, I just wanted to ask, while we're on the subject of ethics and community and social problems: do you see that AI has now created a new class system? You've got educational poverty and educational richness, so people are reacting differently to it, and with things like the Trump campaign, AI is actually reaching more people who are educationally unaware of the logistics of AI than people who are at the top of the game, who know exactly how it works and how they interact with it. So the people who are going to be destroyed by AI, or have negative effects from it, are actually quite a large part of the population, and it's because they're in educational poverty about how AI can be used and its effects. Who wants to take that up? Brian? No, no. Happy to give a quick response on that. The Data Lab partners with the Scottish Government to deliver the Scottish AI Alliance, and George is a member of the leadership circle as well, looking at how we can create trustworthy, inclusive and ethical AI for the country. Our key focus in the last two years, to be honest, has been community engagement, societal engagement, with children through to older people, across the geographies of Scotland, and we're just about to launch a larger community engagement programme looking at socio-economic groups as well, and an online Living with AI course for members of the public to understand the basics in an easy-to-understand way, much as the famous MOOC from Helsinki did. So certainly within Scotland it's a key focus of the work we're doing with the government at the moment, because we appreciate that the challenge you outlined could easily happen if we weren't proactive on this. I saw you wanting to come in too. I think this is just such a
fascinating question, and I really think all governments, especially in the developed world, have a responsibility to enhance the education, the social education, that they provide to people about AI systems, how they work and how they trigger our cognitive biases, as you very correctly mentioned. We are humans full of cognitive biases; our brains are full of cognitive biases, and that's how we function, and many of these systems trigger our cognitive biases, for example by exploiting how we want to pay attention, how we feel happy when we pay attention to specific things that we like, so that more such information is targeted at us. So you are pointing to a very important point. I think more and more work at different governmental levels should be done to educate people, and I always think that, instead of some of the boring series we are exposed to, there need to be more interesting educational programmes, evening programmes on TV and in different places, where all sectors of society, all different age levels, are exposed to this information, because these systems are just out there, impacting all of us and all ecosystems: the way we learn, the way we live, the way we lose our jobs, and so on. So it is just necessary for everyone to know the basics of these systems, and I think that's just a great point. And this digital divide has always been there: there are so many countries in the world which, because of various political sanctions or computing limits, still do not even have access to, for example, ChatGPT. So there's a lot of work to do in that space to educate people and ensure that the public knows what is really happening to us through the ever greater advancement of these algorithmic reasoning systems. Thank you. More questions; quite a few hands; the person just on the end, in the light top. Thank you. Thank you very much; good afternoon to you. I have a problem with the definition of AI. AI isn't intelligent; it's something that's been worked
on, as you said, for probably the past 60 or 70 years, and we haven't really got very far. We've got some clever software, I'll give you that, and some very quick software, but I think we may be discussing the wrong thing. Maybe we should be discussing the ethics of the people who write this software that can generate this fake news, because that's what it seems to be being used for more than anything. They thought that self-driving cars were going to be a simple problem; we are a long way away from self-driving cars. So I wonder how long it's going to be before we really do see proper AI that we can recognise as intelligent, and maybe then, when we know what AI really is, we should be discussing the ethics of it. Because we're discussing something we don't have, are we not? We're discussing something we don't have. Do you want to come in on that one? I disagree with you, because what you call real AI or proper AI is referred to in the literature as artificial general intelligence, or artificial superintelligence. Artificial general intelligence has been the goal set by many different people who were theoretically trying to develop artificial intelligence systems. I totally agree with you that intelligence is just such an unfortunate term for the community to have used and developed. Some people suggest sanctioning the term: they say we should not refer to artificial intelligence, we should talk about algorithms, talk about capable machines, and so on. But the fact is that artificial intelligence is the discipline that many different researchers developed and are developing, so I am personally more interested in trying to raise awareness about the meaning of the term and all of the nuances around it. I actually think the term artificial intelligence is like a metaphor that a lot of the people developing these systems tried to use, and it functioned as a kind of futuristic concept for them: they would develop some systems, and because artificial intelligence is always the thing in the future, there is still a huge gap between what they are doing and what they want to achieve, and that gap would motivate them to do more and more research. That is how I think about artificial intelligence. But I think you are totally right: when we look into the different psychological theories of intelligence, there are so many. Even within the field of artificial intelligence, some people try to develop it according to purely formal mathematical models that have nothing to do with human intelligence, while others try to make connections with how our brains work, with the idea that intelligence somehow emerges as a byproduct of cognitive functions in our brain, so that we can somehow imitate that. So there are many different interpretations of intelligence, but the algorithmic systems that we have, sociologically speaking, have been called artificial intelligence systems. That's an unfortunate fact, but it's a fact, and so I think it's fine to use the word.
My point with that is that we have been using AI as a buzzword, and that's a reality. So you will see many of us not really using it; I put myself in that category. I don't use AI in my title or as my discipline: I speak about machine learning, because we have machines and they learn something. Now, what they learn is debatable, right? And that is what we are discussing on this panel: we have data, we have some knowledge, we develop a system to do X, and we end up with a system that has learned to do a very specific, narrow task. Humans, in theory, have better capacities for reasoning: we can create an understanding of a space very quickly. I know that if I have to exit this room, I know where I can go, so I'm not going to crash into someone unless there are other issues at play. Systems don't have this capability at the moment, and that's why some of the criticism of models like ChatGPT is about their reasoning capabilities. We are seeing cases where ChatGPT is very successful: if you had a paragraph written by ChatGPT about a topic, you wouldn't be able to tell whether it was written by an AI system or a human. But then, in the same kind of situation, you might rephrase your question to the system and it might give you a completely different, wrong answer, as if it were a different person. So from my perspective, sometimes we speak about AI and think that an AI system is a kind of replica, something equivalent to a human or a single person, but we miss the perspective of how these systems have been developed: they are digesting all of this data that is out there, and then they learn and regurgitate information. They fail in some cases, they succeed in others, they are all over the place, and I think that's the main problem at the moment. The reliability of these systems is a main concern, and the ethics, for me, are part of that conversation. Reliability, ethics and responsibility, although they are different areas, all have to be considered together. Otherwise you are going to create systems where the disciplines are disconnected from each other: you have the technical people developing the systems, the ethicists looking at the ethics of those systems, the legal people looking at the legal implications, and no one speaks to anyone else, and then you are playing catch-up all the time. That's why I think it's crucial to find a way of actually merging and bringing people together, to make these systems a bit more transparent and more ethical. Just for the benefit of those who might not have heard, the question was how far away from those intelligent systems we are. From my perspective, how far we are comes back to the discussion we had a few minutes ago about what intelligence is. First we have to decide what we mean by intelligence, which of the definitions we want to use when we assess a system. Of course, in the past there was the Turing test, which still exists, which asks whether you can differentiate between a human and an AI system. Some people argue the Turing test is long gone, that such systems have already passed it; others argue that's not the case, because there are situations you can trigger where the system will fail the Turing test. I believe that we are quite a long way from having an AGI system, which is what Atusa defined before: a system that you can say has the same kind, or the same principle, of intelligence as humans. From my perspective we are decades away from achieving something of that capacity. Do you want to come in on any of this?
I'm not going to give a number. I don't know is the honest answer; I'll just be honest. Maybe one point I'd like to pick up, when you're talking about software engineers: I think Scotland faces a choice right now. We have invested £42 million in the Techscaler scheme, which is great, and we've got a massive focus on entrepreneurship across the country. The debate, and the choice we have, is which playbook we want to use to make that successful. A lot of people are focusing on the Silicon Valley playbook, and you could ask valid questions: was that successful in the social media age, do we feel that was a good outcome for humanity, what do we learn from that if we were going to adopt it in Scotland, and should we adopt it at all? Because data scientists in the real world are working with product managers, software engineers and lots of other people to ship product and make profit online, and how we do this is really important. We're coming to the inflection point of really going for this in our country right now, so I think a good point for our future debate is how we do that responsibly. Back to you for more questions or comments. Just on the end of the middle there. My question is about what I feel is a very real threat to jobs presented by AI. It's great that all the people studying data science are doing a module in ethics, but we're living in a time when there's a fairly woeful lack of regulation of large corporations. So what actual barriers are there, right now and potentially in the future, to stop large companies doing fairly cutthroat cost-cutting when it could save them a hell of a lot of money? Just as an offhand example, Netflix just advertised a $900,000-salary AI position in the midst of the Hollywood SAG strikes. I'm not necessarily saying that job is going to be writing shows and costing writers their jobs, but that kind of thing. What hope really is there for solid regulation if it's not done at industry level in these really big corporations? And I suppose there's also, within that, a question about timing, because government regulation often lags way behind the development of any technology: the technology happens first, and regulation might catch up at some point. Brian, do you want to take a first stab at this one? Yes, and then I'll let the other panellists improve dramatically on it. We've definitely seen changes right now in call centres, for example, because there are very defined use cases for calls coming in, with chatbots replacing a lot of the investment that's been made in large call centres. So there are emerging use cases where this is being applied to drive efficiencies in organisations, and it's also creating new roles. We're working with a company doing thermal imaging of Scotland's housing stock so that it can be better insulated. They employ a lot of people to look at the images that are scanned and crop bits out of them; we've automated that using a machine vision algorithm, but those people have been retrained, and they're now growing the data analytics side of the business. So there are pros and cons, but it opens up the whole debate about regulation and what governments can actually do, and that is the hot topic of the day. You've got China focusing on this, the US focusing on it, the EU focusing on it, the EU speaking to the US with the UK not involved, and then the UK talking about an AI summit in the fall. A lot of the discussion being had here is interplayed with some of the largest companies in the world lobbying the prime ministers and the presidents. You may have seen in the White House last month President Biden with seven leaders of the biggest AI companies in the world, who happened, again, all to be men, and you've got to wonder about the level of conversation at government level. There's much more work to do, but I'll just set that out there for the others to add a lot more
wisdom to it. Do you want to take that next, and then I'll bring in Atusa? That's a big question, but I think another angle on this problem is the fact that automation in general is not something new. We've seen for decades how our lives have changed because we have auxiliary systems. Even during the pandemic, suddenly we went to remote working, and we had systems in place like Zoom and Teams: Atusa is not in the room with us, but we can easily have her with us. So there are always pros and cons with any technology. From my perspective, we have to start from the point that any new technology or system will benefit something, but it might have negative implications as well. I wouldn't like to dismiss a technology because it will, by default, have some negative consequence; we have to sit down and think about what the pros and cons are. If the pros outweigh the cons, then we move ahead, or the other way around. So I think it's inevitable that any technology or new system will replace someone or something, right? The point is how we do that in a way that actually empowers someone else, or where we create a system, say in this case through the Data Lab or other initiatives in Scotland or the UK, where we upskill or reskill people, the current generation and the next, and develop a situation where they can learn new skills and support this ecosystem. So I personally wouldn't take the position that, because a technology will have negative consequences in a certain case, we dismiss the technology altogether, even when we have evidence that there are other people or situations where the technology can have a positive side. Just to answer the precise question a bit more accurately: I think we're going to have several cases and examples like that in the near future. BT, for instance, has announced that it anticipates around 55,000 jobs being replaced by 2030. This type of news is not going to be rare; it's going to be quite common. But the point is what we do to mitigate that in a way where we adopt the technology but actually find a way to support the people who might be displaced because of it. I suppose there's also something around the quality of the jobs being replaced: if they are pretty mechanical, for want of a better word, or not very fulfilling for people, actually why shouldn't we replace those jobs? There's that argument as well. I think your question, though, was around screenwriting, around the arts and around culture. Hang on for the mic, it's just behind you, sorry. Yes, what I mean is that it's much more than automation now. There's an article that just came out saying that AI is now able to do a huge amount of the workload of what a lot of architects do, and things like that. So it's become much more than manual labour jobs, which are also
really important as a source of employment for many people; it's becoming a whole other thing where all sorts of what are deemed skilled jobs are also under threat. So that's why I was saying it's quite pervasive, and asking what the barriers are, and so on. Thank you. Atusa, do you want to come in on this? I suppose there are different elements to it: there are the different types of jobs, but there's also, as Georgios said, what we do for the people who are losing their jobs, and also what the benefits of AI in employment are. Sure. I want to start from the first great question that was asked, about regulation. I think we can look at the questions around regulation at many different stages. At a very macro stage, what I observe is that there are two very main players. One is the United States, Silicon Valley: all of these big tech companies, the companies introducing these disruptive technologies, are literally based there; even DeepMind is now owned by Google. So it looks like there is this asymmetry of power. We have the United States, the way they think about human values, the way they think about their role in the world as perhaps the best player, and then the competition they are in, sometimes implicitly, sometimes explicitly, with China. It looks like a lot of the decisions they are making are based on this underlying assumption: we want to remain the best in the world, we want to remain number one. We observe this very explicitly, for example, when Sam Altman goes to the Senate and talks to the legislators there, saying he is very proud and very glad that they are making this in America, and that this is a great nation. So that kind of perspective really underlies the macro-level assumptions: let's innovate, because we want to remain number one in the world, for various reasons. In that sense, the other entities in the world, again from this macro perspective, need to take sides. The United Kingdom, for example, is now going for a pro-innovation approach that, in my analysis and understanding, is closer to the US approach to regulation: let's innovate, with minimal regulation. There are talks about regulating, but there is a lot of tension, a lot of unresolved paradoxes. Then there are the perhaps more interesting questions at a more local level, about how different nations or countries work through these regulatory questions locally. Again, we need to acknowledge that there is this crazy power asymmetry in the world; it's visible, and it looks like the US is largely dictating how we should see things. The EU AI Act, of course, and the EU in general, takes a different approach, and I think the UK is somewhere in the middle. In the local space there are a lot of questions: different sectors and unions try to push back against the invasion of this disruptive technology into the labour market, and one of the solutions people are talking about is universal basic income. The idea is that this whole Pandora's box has been opened, for example with the generative models, and they really are going to take a lot of jobs, so maybe we need to give some funding support to the people whose jobs will be replaced, so that they can learn new skills or do new things. It's a very interesting theoretical idea, and there have been a couple of experiments on it, but I'm not sure how these ideas can really be deployed at large scale. So, yes, that's my take on this. Lots more questions. I'll bring you in in a moment; I've seen a couple of other hands coming up which I think are also potentially linked. At the back,
just next to you. Yep, thank you. Just following on from this: there's a lot of talk about how these kinds of jobs in various sectors, architecture, art, manual labour, are going to be replaced by new jobs, but all these new jobs are going to be in one very specific sector, and those aren't necessarily going to be shared interests for the people whose jobs are lost. I'm just a bit concerned about not being very into tech, but being very into the things that I do see being taken. If you're not interested in working in the tech sector, that's the sector that's growing, and pretty much all the others will, as a result of AI, be shrinking. So I'm just a bit concerned about that. And I suppose, to follow on, in terms of what we can control: what are the regulatory measures we should be thinking about, and should be thinking about now, rather than five or ten years down the line? Because these things are happening now. Brian, do you have any take on that? Yes. There's no doubt shifts are going to happen. I think what I would look for is the evidence. It's easy enough to write an article on who's at risk and get the news for the day, but I recall that when we were moving forward in machine vision and scanning for various types of cancers, we said that radiologists would be obsolete in x number of years. Now, in Scotland, we do not have enough radiologists to treat our people. So I would look at the way a lot of the vendors are positioning this stuff now, as a kind of co-pilot for the professionals: actually a tool in their armoury to do things faster, better, more accurately. For me, I'm looking at the evidence: is that going to evolve over time, or are we actually seeing architects and others lose their jobs? I think it's hard to predict right now, if we're honest about it. So there's something about roles and jobs being enhanced and shifting, rather than going away completely, as in the example of the radiologists. There was a lot of talk about ChatGPT replacing programmers: just tell it the code you want to generate and out it comes. That was the headline for a week, and then everybody said, actually, what about X, Y and Z; that wasn't quite going to be the case, or could be the case as well. So I think we need to look at how this pans out over the longer term and take a critical eye to some of the journalism being published on this stuff. Having said that, and back to a point that Atusa made earlier on, I think the role of journalism going forward is really important: investigative journalism, from Carole Cadwalladr on the Facebook and Cambridge Analytica story through to the recent article in Time about the low-paid labour in Kenya that trains OpenAI's and other Silicon Valley companies' models, training them against things that we wouldn't want to see coming out of those models. The importance of journalism going forward, I think, is really increasing. Georgios, do you have any comment on that? Just a quick addition, because I think it's quite important for the rest of the discussion. It's very important to understand that when we speak about AI systems, we sometimes quite wrongly use the AI term broadly, as we established before, but we also assume that everybody agrees, even within the AI community, about how things should move ahead. Actually, what happens at the moment is that if you look at the big universities in the UK or the US or elsewhere, you will see many AI developers or professors of AI and ethicists disagree: not only the ethicists with the technical people, but among the technical developers of AI themselves, people see things in different ways. So there is no consensus even within the AI community about much of this. I think that's important to understand; otherwise we assume that everybody working on AI holds a certain opinion and everybody else holds the opposite. Actually, that's not the case: you can see that half of those
developing AI technologies disagree with the other half who develop AI technologies, and that has been a huge debate over the past year or so. So just to clarify that, because I think it's an important element to add to the discussion. And I suppose your note of caution is well made if we look at other industries, because there may be some tech areas where roles develop and jobs change in a positive way, but in energy, for instance, we've not done energy transition jobs very well in the past; you just need to look at the coal mining areas of Scotland. So there is a note of caution there that we maybe need to hold as we continue this discussion. More questions. I see somebody back on that side. Can I go back to the question of fake news and bias, because I think some of the comments made about bias at the beginning are really interesting, and quite frightening actually. My background is as a software developer. I don't think that the coding that's been done to build the advanced algorithms is the dangerous bit; yes, you could subvert that, but actually I think it's the training of the models that's the dangerous bit, and I believe some of the larger players in the game who have trained their models aren't letting anybody know how they've done it. How do we tackle that challenge? Who wants to take that one? I can start. A couple of years ago there was a debate on Twitter about the sources of bias in the development of AI systems. One group of people said the only source of bias is in the data, so there's data bias; others said that you have both algorithmic bias and data bias. So there were differing opinions about whether that's the case or not. I think the reality, and we see this in many real-life applications, is that when we develop AI systems for any application we try different models: we don't say, this is the correct model, use that and go. Each model, and different types of models, give us different results. If you go down to the nitty-gritty of why that happens, it's in the way each model learns from the data. The data doesn't change, the data is the same; it's the model that's different. What the outcome means in practice is that the models perform differently, and that can mean one has more bias and discriminates against specific groups of people. So it's very hard to disconnect data bias from algorithmic bias, and that's why my personal opinion is that you have sources of bias from both. Maybe that partly answers your question, but I guess the main attention has been on the data side, because that's the elephant in the room; that's the thing that, in theory, we can more easily control, if I can put it like that. You can curate the data, you can be careful where you get the data from. If you create a system that is deployed, say, in a healthcare domain, and you train a model with data from, say, Edinburgh, and then you want to deploy that model to Greece, maybe the population is different, so you have population drift, and maybe you need to do something better. So I think that's why data bias has taken a bit more of our attention, but in my view we have to focus on both sources. I don't know if that answers your question. Do you want to come in on that last one? I think there are a lot of trade-offs, and depending on how we address a trade-off we might end up at different points. The narrative that goes around at the moment is: well, if we open source many of these systems, and if we provide details about how our systems are developed, then there will be some malicious users who will replicate, or basically copy, what we are doing; and because AI can be used in two different ways, for good or for bad, they are going to use it for bad, and that's going to create disadvantages for society. That's the kind of narrative that
OpenAI, which some say should be called ClosedAI, has put forward. A lot of people are now trying to push for auditing mechanisms. They say: okay, you are not providing this information, for whatever reasons you give, but there should be some independent auditing groups that come and study your work, and ensure that a lot of checks relating to the data ecosystem and the development and deployment of these machine learning models are in place. So looking at the problem you are asking about from the auditing perspective, provided the auditing group is legitimate and has very high standards for analysing the biases within the development of these systems, could be one solution to the issue of how we address the biases within these closed systems. Thank you. Yes, a quick question up here. Just following on from what Atusa said there, and what the Professor said: you talked very early in the conversation about ethics, and how everybody needs to learn the ethics and train the system in an ethical way, but Atusa just mentioned malicious actors. I think we've seen from technology in the past that when changes in technology happen, it never plays out well for the little guy, is what I would say. The first time they had a laser scanner at a supermarket it didn't work; now you can hardly see a person at a supermarket, that's common. What if the person who's creating the AI just fundamentally is a bad actor? For example, take finance, which affects everyone. If you go to ChatGPT and put in a prompt that says, please give me a list of the best financial products on the market, and ChatGPT has a vested interest, or has a prompt injected at the start that says, you will not recommend any products except these, then basically what you've got is censorship at the first instance. That's the... I'm going there to get information, and I've got no way of knowing what is true, because AI, as much as this gentleman might not like to believe it, is a tidal wave on the shore; AI is coming. So who are the police, Atusa? Who are these people who are going to audit them and enforce it on China, or the US, or Mr Trump? It has only now become a large language model, sir; before, you were typing into Google, and now you're just asking a question as a human, and as the Professor said, it has near enough got the potential to pretend to be a human; 75% of people can't tell the difference between a human and AI, as you said. Do you want to make a stab at that one? Okay. I think, ideally, in democratic societies, the citizens and the governments believe that through democratic deliberation we can bring in new institutions, and this policing action would be done through institutions. These institutions come into play when various experts deliberate together and push for special institutions that are supposed to do this kind of policing. Now, it's super nuanced, right? No government in the world is perfectly democratic; there are all these indexes, and there is a spectrum of different democratic positions. But basically, I think the best thing we have is that democracy can help us bring auditing institutions into place. I also want to give credit to a lot of AI ethicists, like Timnit Gebru and Margaret Mitchell, who have been computer scientists doing a lot of AI ethics work, and who have tried to introduce critical ways of thinking about auditing: how to design auditing systems, how to design participatory mechanisms that would allow us to bring some interesting auditing institutions into place. So there is a lot of work there; it's not perfect.
What I am personally concerned about, and I think we need to think about this more carefully, is that all of these institutions would be introduced at a national level, but these AI systems, and the competition between these different institutions, operate at a global level. We do not have global governance, and it looks like we don't want to have global governance, because history has shown that it's very hard. So we have set those kinds of questions aside, and now a lot of these competitions, these regulatory questions, which are all very important, arise at the global level, and because we do not have a global governance structure it's very hard to answer them. I think that's a space where we need to do much more work and become a little bit more mature. I think what you say about global governance is right; you can see the problems, or the lack of it, in so many different aspects of our lives. The climate crisis: where's the global governance in that? Your point about democracy and the importance of democratic institutions having the power to regulate, the power to set up frameworks or codes of behaviour and to control what isn't legitimate: how do we ensure that those democratic systems themselves aren't a target for bad actors through AI, as we know has happened through Facebook and Trump? Is global governance the only way we can ensure that, or are there other mechanisms, other things we need to think about, at a national, or maybe supranational but not quite global, level around protecting democracy?
This is such an important question, and a very difficult one. Again, as I mentioned, I don't think we have even one single example of a perfect democratic government where everything is done well. Of course we have examples of more idealistic governments, but we also have these lobbying entities, and it's not clear exactly whether those lobbying entities are undermining democracy or helping democracy to thrive. So there are a lot of complicated political dynamics happening at different stages. Ideally, the goal is that independent reviewers should be embedded in many of these companies, and there should be new institutions that have oversight of what is happening and propose to government what should be done. Basically, there should be a lot of deliberation at different stages, civic assemblies, small deliberations, and the results of those deliberations should be fed into bigger institutions, so that hopefully, through this whole complicated ecosystem, there would be some democratically elected institutions able to audit the systems. That is, I think, the best proposal available out there. At the moment there is a lot of interesting work people are doing to break down these ideals of democracy into smaller, more local entities: lots of different efforts where people bring small civic assemblies together and teach them how things are going, how to think about regulation, what kind of auditing mechanisms should be put forward. Ideally there would be an aggregation mechanism, and that information would be fed into discussions about national and global regulation, but it's all work in progress. Okay, thank you. Now, one or two more questions from the audience, I think, we've got time for. Towards the front, at the end here. It's just two things really, two points.
The first is that there have always been con artists, and one of the features of con men or women is that they are very convincing. Part of the defence is being aware that there are people who may not really have your interests at heart, however convincing they are. Some people have a false belief about AI: that it will be right. When you look at a situation like AI doing part of the work of a radiologist, I have no doubt that the information it is being given is as unbiased and correct as possible, so that is a different kind of AI from something like ChatGPT. ChatGPT is the ultimate in garbage in, garbage out. Somebody I know in Cambridge was bored because he was stuck at home, as his family kept getting Covid, so he asked it about a group he runs himself called the Armchair Economists. ChatGPT claimed to know a lot about this, and some of what it said was correct, but a lot of it was sheer rubbish that it had come up with as something likely, and it did not distinguish between the two. So educating people that just because it is AI does not mean it is going to be right is, I think, a big defence.

That comes back to the earlier point about raising awareness and education across the board, not just for people who are working in this area. We have probably got time for one more question; I have somebody at the front, third row back, on the other end.

Thank you. Hi. I am sorry, this will probably sound a bit trite, but I am a journalist, so I do apologise. Basically, given all that we have discussed over the last hour and a half, is not the fundamental challenge when artificial intelligence, however that is defined, runs slap bang into natural ignorance?

Okay, who wants to take that? Georgios, your turn.

I do not know whether I can address the question directly, but it points to the comment that the lady made before about garbage in, garbage out. I think we make the wrong
assumption, first, that the data we are using is correct. There will always be errors in the data, so the system will learn from errors. The second assumption we make is that if we had two or more humans discussing the same topic, they would always agree with each other, which is not the case either; we will most likely disagree in that half-day discussion. Even two experts, whether scientists, professors or healthcare experts, might disagree on the same evidence: they look at the same medical image and give different diagnoses. So there is an inherent issue in the fact that some problems are hard even for humans to solve. How, then, do we expect an AI system by default to be better when even humans would disagree on the exact same topic? For me, it is very important to raise awareness and, as the lady said before, to make sure that people know that these systems will quite often fail or give answers that are not correct, and that we have to scrutinise that process. In the same way, if you meet someone randomly in the street and they ask you a question and you respond, and vice versa, you do not expect them simply to believe or trust you. You are always thinking: do I trust this person? Do I know this person? How do I establish that trust? We tend to be more cautious; we establish barriers, and we have to speak to someone many times, or establish a relationship, before we trust them. I do not see AI as being too different from that, but at the moment we just believe that if you open whatever browser you are using, put in the URL and use a secure connection, the outcome of that system will be correct. That is not the case, and I think that is what we have to digest; otherwise there is not going to be a real debate about whether we can use a system or not.

Brian, do you want to
come in? Yes, and that comes back to responsibility again for the companies that generate these outputs. The output can sound very real to somebody, and the system does not say, "This is a mathematical model, and the likelihood of this answer is 80 per cent." You read something and you think it is fact because of the way it is phrased. It is the responsibility of these companies to share some of that information so that you can make a decision on it. When I asked it to tell me about Brian Hills, it said it did not know about Brian Hills; he is not famous enough. That is okay, I have a thick skin, so I thought, how do I get more information? Tell me about Brian Hills at the Data Lab. That worked, but then it said I had worked for two companies I had never worked for, written in such a way that, if you read it, you would have thought I had worked for those companies. There is no button to say "Explain how you got to this" or "What is the probability?" So, back to the responsibility for how these things are developed, and the regulation: I think that is very important. It also links to the lady's earlier point about educating the public on this and on how to ask the right questions of these new technologies. You do not need to be a data scientist, but I think we would all benefit from coming together and asking: what are the top three questions I should ask when I am engaging with one of these technologies to get a recommendation or an output?

And Atusha, do you have anything to add on that specific question? I think we just need to be careful, because we can use these systems in many different ways. One is to use, for example, ChatGPT and take whatever it says as the final word; I think that is really wrong and very dangerous. But you can also take its output as a draft of an idea, as a reasoning machine that helps you to generate new ideas. Many of them will be rubbish; some might be good. I know a lot of professors who use ChatGPT to write very
glowing recommendation letters for their students. They give it a couple of bullet points, saying, "This is for my student; write a page-long recommendation letter," and then of course they take that and personalise it. That kind of content generation allows them to be a little more effective, to skip some of the boring work, or to speed up their writing. So, as the other panellists have also mentioned, there are positive use cases where we can use these systems as collaborators, or as things that help us to brainstorm. But taking whatever they generate for granted, without checking the references or the content, is very dangerous, and I think we should really avoid that.

Thank you. We are coming to the end of our time, but before we close I want to ask our panellists to sum up in a minute or two. What would you recommend, or what would you be looking for to happen, in the space of an ethical approach to AI in Scotland? What would you look for the Scottish Government and other public institutions in Scotland to be doing? It might be around regulation; we have talked about education, about auditing and about a whole range of different things. What should we take away from this evening's discussion for the work that we still need to do in Scotland? Atusha, I will start with you, and then we will work along the platform.

Okay, great. I just want to be very clear with you on this: a lot of these questions are very complicated. We need to think about them by engaging many different stakeholders, and we need to think about them together. They are open questions; there is no ready solution, and no one in the world knows exactly where we should go. There are some voices that are louder, and they push forward some ideas, and some other voices
that are less loud. So the most important thing for me would be to engage more and more citizens, to involve them in these whole discussions, and to allow them to know more about what is really happening. In my opinion, this should become a dinner-table kind of discussion, without invoking fear or wild excitement that these systems are going to kill or replace all of us; I think those discussions go nowhere. Instead, we should follow what these systems are really doing in our world, while also thinking sensibly about where the future is going. Even if we have not yet lost a lot of jobs and the labour market has not changed much in Scotland, we have seen a couple of very interesting reports, for example one that OpenAI released setting out how they expect large language models to change the landscape of the labour market in the United States. It would be amazing if various stakeholders in Scotland did something similar, because when there are empirical investigations and findings of that kind, the public can engage on the basis of written reports and informed numbers. I would love to see exactly that kind of informed discussion with citizens, and I think that is the right ethical way to tackle the social implications of these systems.

Thanks very much, Atusha. Brian? For me: Mr Lochhead, who is the minister responsible for the AI strategy, has commissioned a review of where we are, and has asked the chair of the AI Alliance to review progress and what we need to do next for Scotland in AI, so that is in progress just now. As I explained earlier, a lot of the focus to date has been on societal engagement, so I would be looking for that to be ramped up further, along with reviewing the education perspective and the industry perspective too. Another thing that I see a lot and that I would really
like to see move forward fast is education of the public sector. As part of the AI strategy, we created an AI register, so that anybody in the public services using AI can register how they are using it and can look for guidance, collaboration and so on. It is not mandatory just now; I would love to see it made mandatory, with a community of practice around it, to help our public sector adopt these technologies in the right way and learn from the times when things have not worked. In South Ayrshire, I think, they took a facial recognition system into a school during Covid times to help with efficiency; there was a lot of feedback, and they stopped it fairly quickly. I would like us to be in a position where we can help our public sector understand this technology and help them to make the right decisions.

Thanks very much, Brian. And Georgios? We should not be panicking and seeing ChatGPT or language models as the end of the journey, because it is very easy to go down the route of regulating something that, in a couple of months or a year from now, is going to be obsolete. We have to see beyond that; that is my first point. I would like to see the Government bringing together the universities, the public sector and industry to discuss and see the bigger picture, not just stick with the current example: it is not the end of the journey. The second thing, finishing on a more positive note, is the element of regulation we discussed before. One of the very good things I have seen in the AI community is that the community polices itself. Something that many people might not know is that before ChatGPT came out, there was a similar system from Meta, and because there was a huge discussion on Twitter, with many very eminent AI scientists coming out strongly against the system that had been deployed publicly at that point, Meta shut it down. There were examples where
it was recommending recipes involving eating broken glass because it is "nutritious", and so on. There were many bad examples, and they shut it down. So the positive side is that the AI community itself is aware of those issues, and everybody is pushing back against technologies that are just not fit for purpose; I think that is a very positive thing. But governments have to support that mechanism with funding, with support and with students: the ecosystem has to be there to be able to do that at scale.

Thank you, thank you very much. I think we have to end it there. Can I begin by thanking our panellists, Atusha, Brian and Georgios, and can I thank you all for coming along this evening and for your thoughtful, and quite challenging, questions for us. I thank also our partners at the University of Edinburgh for their support in putting this evening on. As one of the directors of Scotland's Futures Forum, I must also plug its work on this issue: we have developed a toolkit of questions to consider when using and engaging with AI, to ensure, among other things, that the ethics of AI have been considered. All of Scotland's Futures Forum's work is publicly available on its website, so do go along and have a look. Can I also remind you, please, to fill in one of the surveys for today's event if you can. There is still a little bit of time left in the Festival of Politics this year, so a couple more events to plug: one on aviation and the sustainability agenda at 5.45pm, followed by a discussion on Scotland's music venues with the wonderful musician Hamish Hawk at 6pm; and then, in a couple of weeks' time, we have a special In Conversation with Gustavo Dudamel, who is one of the world's foremost conductors. There is more information about that downstairs. Thank you again for coming along and participating in this year's Festival of Politics, and we hope to see you again next year, if not before. Thank you.