The next item of business is a debate on trustworthy, ethical and inclusive artificial intelligence: seizing the opportunity for Scotland's people and businesses. I ask those members who wish to speak to press their request-to-speak buttons, and I call on the minister, Richard Lochhead, to open the debate. Around 12 minutes, please, minister. [Inaudible.] ... more and more advanced and powerful is leading to some hard questions for the world. Indeed, this debate takes place against a backdrop of international authorities scrambling to respond to the fast evolution of AI, with, for instance, EU and US lawmakers meeting just this week to discuss a draft code of practice prior to whatever regulation may be required in future to address the risks. Recent days have also witnessed big personalities in the tech world, including AI pioneers, [inaudible]. I am grateful to the minister for giving way, and he is absolutely correct in what he says: the evidence on this so far is conflicting, shall we say. Does that not suggest that the real challenge is that we do not yet know what the full potential of AI is? Minister.
I think that that is a fair point that the member makes and, I hope, something that we can all agree on, which I will also address in my remarks. What I would say is that it is our duty as parliamentarians to try to navigate the risks and opportunities and to consider the consequences of AI, which probably no one anywhere fully understands, including even those who have built the technology. AI has been with us for a long time, and more recently we have all become familiar with voice recognition and facial recognition, to give just a couple of examples. Further major strides are now under way, and the public release of so-called generative AI tools such as ChatGPT, which I have not used for my speech, means that cutting-edge AI is now at the fingertips of everyone who wants to use it, and it is spreading fast. It took three and a half years for Netflix to get a million users; for Instagram, it took two and a half months; for ChatGPT, it took five days. It is that that has triggered a heated worldwide debate on how we maximise the benefits of this technology while managing its risks. In the last year or so, researchers have found that, just by making these AI models bigger, they become able to generate answers to many questions in a way that resembles a human. But all of this is not just harmless fun. These tools, known as generative AI, will have an impact on jobs; for instance, they could automate many of the tasks in the creative industries, to give one example, not to mention the fact that they were trained on billions of images on the internet with little regard paid to the intellectual property and livelihoods of their human creators. I am very grateful to Richard Lochhead for giving way at that point. On the question, in essence, of the training of AI, can I ask what your view is with regard to those who have protected characteristics, who seem to be open, quite frankly, to bias in relation to the training algorithms that are used in AI?
Bias, which I will come on to briefly, is one of the here-and-now threats; it is not something for the future, so the member again makes a good point about why this is a topical issue that we have to address. There are many different professions that can be affected by this. OpenAI claims that GPT-4 can achieve the same as a top 10 per cent law student in bar exams. Generative AI tools will also require a rethink of education assessment methods, as they can write essays on a wide range of topics. There is also a more sinister aspect of AI, as those tools will make it much easier to spread large amounts of false but convincing information, which could undermine democracy and will also facilitate cybercrime and potentially other types of crime as well. AI is powered by data, and the tech giants of Silicon Valley have been fined again and again for failing to respect people's privacy and data rights. But it is important not to lose perspective on AI. Most experts do not believe that AI will be able to supersede human intelligence without several new breakthroughs, and no one knows when that could happen. At the moment, talk of an impending singularity, which means machines thinking for themselves without needing humans, still involves quite a lot of fiction. Essentially, for now at least, AI is a very, very powerful tool: an important but disruptive tool that many compare with the invention of the steam engine, for instance, and it is up to us as a society and as a country to use it for good or for bad. On that very note, in a sense it is just the latest technology that seeks to replace human activity, but some of its features, in terms of opaque systems that make decisions on our behalf, are not necessarily a new thing, and we must therefore look at this from first principles: we must ensure transparency, accountability and visibility of the things that AI is doing. If we start from that principle, maybe that suggests a way forward.
I wonder if the minister would agree with that insight. I do agree with that, and I hope that the member will note that those principles are reflected in the motion that we are debating today. But, as in all previous technological and industrial revolutions, as indeed the member has just alluded to, there are always winners and losers, and it is the job of democratic Governments to ensure that the benefits are spread as fairly as possible and the risks controlled. AI is with us and it cannot be uninvented, so this needs to happen, and it needs to happen now. Well-publicised calls for Governments to pay attention to the long-term, hypothetical risks of AI should not, as I said a few moments ago, distract us from the very real risks of AI today, such as discrimination because of bias, which was mentioned by one of the members, the negative impact on certain jobs if those professions do not evolve, or election manipulation, to give another example. Again, it is clear that intervention is needed. Even the tech giants across the world who have made AI what it is today are now calling for Governments to intervene. Even if there is perhaps a suspicion that they are doing this because they want to pull up the ladder behind them, it is an important point in the debate. In the midst of this worldwide debate, with all its uncertainty, disagreements and fears, it is important to understand that, fortunately, Scotland is not just suddenly waking up to AI and that we start from a solid base to make the right choices and reap the benefits of AI while controlling its risks. Our universities' AI research and teaching has been ranked as world class since the early days of the field. Data released last month by Beauhurst showed that Edinburgh is the top start-up city in the UK outside London, with 12.3 per cent of companies working in AI, digital security and financial technology. We have long recognised the importance of AI.
In 2019, we committed to creating an AI strategy for our country, and we presented and debated it in the chamber. Our 2021 strategy laid out a clear path for Scotland to shape the development and use of AI in a way that is trustworthy, ethical and inclusive. To deliver that vision, we have set up the Scottish AI Alliance, a partnership between the Scottish Government and the Data Lab, Scotland's innovation centre for data and AI. The Alliance provides a focus for dialogue and action with industry, innovators and educators to build the best environment to encourage growth and investment. It plays a key role in enabling a meaningful two-way dialogue with our citizens to ensure that we build an AI economy and society that protects their rights and in which no one is left behind and everyone can benefit from and contribute towards AI. Specifically, the Alliance is developing a range of tools not only to inform and educate people but to actively seek input from our citizens at the same time. The recently launched Scottish AI Register is one example: a simple and effective platform for the public to understand and have a say in how AI is used to make decisions and deliver public services. We are also delivering an AI and children's rights programme in partnership with the Children's Parliament, and we are working hard to ensure that our workforce has the skills required to power a thriving AI-enabled digital economy. In the latest ScotlandIS technology industry survey, Scottish companies continued to place AI in their top three greatest opportunities, while 46 per cent of businesses indicated that they need additional AI skills to grow. An important element of our work is the digital economy skills action plan that was recently published by Skills Development Scotland, so we have to continue to address those gaps. I appreciate the minister giving way.
Do you believe that the Scottish Government is supporting public bodies and local authorities in a way that prevents them from being risk averse, while also leading on adopting new technologies so that we do not suffer the negative impacts? That balance goes to the heart of the debate in Scotland over AI: balancing the risks with the opportunities. That is part of the debate going forward. We have to get that right, and that involves all parts of the public sector, including local government. We have to make sure that we equip our citizens and workers not only with the technical skills but with the broader commercial, ethical and human skills needed to make AI a success. We also have to tackle diversity in the workforce; as an example, we will support the Data Kirk's Scottish Black Talent Summit later this year. To help raise awareness of AI across the whole of the nation, the Scottish AI Alliance will launch later this year a free online course called Living with AI. We need to embrace the unprecedented economic opportunities, as we did for the previous scientific and industrial revolutions. We are also doing that by making strategic investments in Scotland, like the £24 million in the Data Lab, our innovation centre for data and AI, which has an extended network of over 1,500 companies. We have tenants who are doing great things. The Scottish company Trade in Space uses space data and AI to inform and facilitate the trade of agricultural commodities. IRT is a Dundee-based organisation that makes use of thermal imaging to help housing associations and developers to identify heat loss in homes. We have also invested £19 million in CENSIS, our innovation centre for sensing, imaging and the internet of things, which will all need AI to be fully utilised. We have also invested £1.4 million in the National Robotarium, which is home to world-leading experts in robotics and AI.
We have other companies who are tenants there, such as Crover, who are developing a robot that moves through grain to help to ensure that it is stored at the correct temperature and moisture levels. That helps to reduce wastage due to moulds or insect infestations, which currently account for around 30 per cent of commodity grain being lost every year. The uses of AI in those initiatives, in Edinburgh and elsewhere, make a big difference. We also have Mark Logan's review of the technology ecosystem, in which we have invested £42 million. We have invested £59 million in CivTech, which is a world-class R&D and procurement scheme that enables the Scottish public sector to work with the most innovative businesses to solve the most difficult problems that we face. There are also really exciting healthcare interventions happening across Scotland at the moment. NHS Forth Valley, to give one example, in collaboration with the Scottish Health and Industry Partnership and the West of Scotland Innovation Hub, is currently running a project to use AI to detect skin cancer in the primary care environment in under 25 minutes by 2025. There is really phenomenal potential to help our health service and to look after the wellbeing of the people of Scotland using AI. I have only a couple of minutes left, so I just want to say that we have a vision to make Scotland a leader in the development and use of AI in a way that is trustworthy, ethical and inclusive. We therefore need Government leadership and regulation; action is required today, but most of the levers in terms of regulation are currently controlled by the UK Government. Data protection, consumer protection, equality and human rights, employment regulations, medical devices regulations, telecommunications, financial services, self-driving cars: they are all matters reserved to the UK Government.
We are a bit concerned that the current UK Government plans for hands-off, non-statutory regulation of AI will not meet Scotland's needs. The UK Government may be softening on that, given what has been happening over the past few weeks, and its approach seems to contrast with the responses of other countries across the world. We do not want to create unnecessary red tape, but we do have a duty to create the right supportive environment for businesses to thrive and for citizens to be protected. In closing, I am doing a couple of things. First, I am going to write next week to the UK Secretary of State for Science, Innovation and Technology to request an intensified dialogue between the UK Government and the devolved Administrations to ensure that UK Government regulation of and support for AI works for Scotland. To kick-start that process, I am proposing a four-nations summit on the implications of AI, to be held as soon as possible. Scotland's AI strategy also needs to evolve to keep up with the accelerating pace of change in AI. Therefore, I am commissioning the Scottish AI Alliance to lead an independent review setting out what Scotland needs to do now to maximise the benefits of AI while controlling the risks, and it will come back to us with recommendations in due course. This is a debate without amendments today, so that we can, as a Parliament, debate the future of our country, the future of our planet and the role that AI will play. I am sure that there will be a lot of consensus. I look forward to hearing members' contributions to help us to navigate what is a complex journey over the coming months and years, so that we can get AI right for our citizens, for our economy and for the country as a whole. I commend the motion. I am pleased to be able to speak on a subject that is increasingly important and increasingly controversial, as we have just heard.
AI will provide many opportunities for the future, and it is vital that Scotland and the United Kingdom take advantage of them. That includes where AI can play a role in specific sectors, but also where its development can be driven here in Scotland, utilising the skills and ingenuity of our people and our businesses. There are already 50,000 people employed in the UK's AI industry, and it contributed £3.7 billion to the economy last year. The UK is home to twice as many companies providing AI products and services as any other European country, with hundreds more created every year. Those businesses have secured £18.8 billion in private investment since 2016, and the UK Government recently launched its white paper to guide the use of AI in the UK, which sets out an approach to regulating AI that builds public trust in cutting-edge technologies and makes it easier for businesses to innovate, grow and create jobs. Of course, doing so also means putting in place the funding to support the sector. UK Ministers have committed up to £3.5 billion to the future of tech and science, which will support that development. £1 billion of UK Government funding has been pledged for the next generation of supercomputing and AI research, to establish the UK as a science and technology superpower. The new quantum strategy, which is backed by £2.5 billion over the next 10 years, will pave the way to bring new investment, fast-growing businesses and high-quality jobs to the UK. The UK Government also announced the AI challenge prize in the spring budget, with a £1 million prize awarded every year for the next 10 years for the best research into AI. Scotland can and should have the ambition to become a world leader in utilising and developing AI technology. The Scottish Government first published its artificial intelligence strategy in March 2021, setting out its approach to AI in Scotland.
It focused on the role of AI in society, arguing that the use and adoption of AI should be on our terms if we are to build trust between the people of Scotland and AI. I do not disagree with that, nor do I disagree with the need to follow values-based principles in the development and stewardship of AI. The Scottish Government has adopted UNICEF's policy guidance on AI for children into its strategy and has committed to reviewing it regularly to ensure that it continues to best respond to the values and challenges that AI presents. That is also important, given the pace of change. That is why getting our approach to AI right at the beginning is so important, why the collaborative work of the Scottish AI Alliance will be vital, and why the ethical approach from the Scottish Government and from all Governments must be more than just warm words. I will. I agree with much of what the member has said, but I wonder whether there is a little bit of a risk in viewing AI as something that is happening in the future. I think that it is already with us. Indeed, there are many systems already making decisions on our behalf, so it is as much about the here and now as it is about the future. I wonder if the member would agree with that point. As well as agreeing with the Scottish Government today, I find myself agreeing with Daniel Johnson, so this is a day of note for us all, I am sure. Let us just hope that none of this is recorded. No, I do not disagree with that, and the rest of my speech will reflect it. I recognise that, as the minister rightly said, there are applications happening now that we also need to care about. A successful AI sector in Scotland will need skilled workers, and it is vital that the Scottish Government ensures that the necessary skills and training opportunities are in place. That is something that my colleague Pam Gosal will likely speak more on later.
However, as we heard in Audrey Nicoll's members' business debate on women in STEM earlier today, it must also ensure that it is an inclusive sector and that a career in AI is open to all. It also requires the Scottish ministers to ensure that both the economic environment and the infrastructure are in place to support that. We still do not have the connectivity that we need, with broadband promises missed time and time again and too many areas still having slow and unreliable services. That needs to change if we are to take full advantage of AI opportunities in communities right across Scotland, not just here in the central belt. The Scottish Government has said that it wants to build an AI powerhouse. Again, I share that ambition, but we have heard that kind of terminology before. We were meant to become a renewables powerhouse, but the jobs that were promised did not materialise in the numbers promised. AI can play, and is playing, a role in a number of sectors already. In health, we have seen only in the past few weeks that it is helping a person to walk again. Here in Scotland, the Industrial Centre for Artificial Intelligence Research in Digital Diagnostics is working with partners right across the sector, the NHS and academia on the application of AI to the field of digital diagnostics. iCAIRD was supported in 2018 with money from the UK Government, sharing a £50 million funding prize from the industrial strategy challenge fund with four other centres. AI will support our growing space sector here in Scotland, a subject of discussion in this chamber only a few weeks ago. It is already being used in agriculture, as the minister mentioned, helping to monitor crop health, pest and disease control and soil health. There are 200 AI-based agricultural start-ups in the US alone. I am sure that colleagues will speak more about those specific examples.
However, it would be wrong to talk about the undoubted opportunities of AI without highlighting some of the challenges that it presents, too. Only this week, as has been mentioned, over 350 of the world's leading voices on AI technology warned that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. It was a short but fairly chilling statement, and a warning that the science fiction of the Terminator movies, with the out-of-control Skynet AI, risks becoming science fact. That may be the doomsday scenario, but some of the negatives of AI are already becoming apparent. AI's progress is rapid and almost uncontrolled and, as with the growth of social media, it has been unleashed on regulators who are not ready to control it and a public often unable to understand its capabilities or discern when it is being used. It is already being used to spread disinformation. Pictures of the Pope wearing a large white puffer jacket, an image created by AI, spread like wildfire on social media, fooling many. That is perhaps an amusing and relatively innocent use, but AI is already being used, or misused, in our schools and universities. It is making it easier and quicker to create increasingly convincing fake videos, with all the potential for exploitative or fraudulent use that that risks. It will be abused, because there will always be those out there seeking to abuse it, whether fraudsters, abusers or even hostile regimes. Presiding Officer, I am sure that we all want to ensure that Scotland does not limit its ambitions for both the utilisation and the development of AI. It will likely become an everyday part of all our lives in the next few years, and there are so many areas where it can make a real difference, where it is already having a major impact and making things better. But the remarkable speed of its development also presents many challenges.
That is why it is so important that we get our approach to AI right now, and that means Governments across the world working to ensure that the necessary safeguards are in place. Unleashing the full potential of AI, with the protections needed, will require collaborative working to develop a flourishing industry, drive forward investment in research and development, and maximise the benefits for the United Kingdom and for Scotland. I think that this is a really important debate, because ultimately one of the key functions of this Parliament is to anticipate the big issues, to discuss them in advance and to set out collective thinking about how we can approach them together as a nation. There is no doubt that artificial intelligence is in that category. However, let us also be clear about the context in which it exists. Ultimately, computers used to be people, not things; computers used to be people who undertook complex calculations. If you want to understand the parameters of this, the movie Hidden Figures, which was released a few years ago and details the excellent work of the largely black female computers at NASA during the Apollo programme, set out both the amazing work that they did and their gradual replacement by machines. Likewise, on the question of whether or not this is a new thing, I would gently point out that, on Black Monday in 1987, almost a quarter of the stock market's value was wiped out, at least in part due to automated trading triggered by the falls that had happened on the Friday before. That impacted the value of people's pensions and had a very direct consequence for people's livelihoods and prospects. Those things are not new. Technology has been replacing human activity ever since we domesticated the horse and invented the wheel.
What is more, computer technology has been having an impact on the decisions that are made for decades, if not longer. What has changed is the rapidity, scope and scale of what artificial intelligence can deliver. That is why we need to pay great attention to the letter that Jamie Halcro Johnston referred to, especially given that some of its signatories, such as Geoffrey Hinton and Yoshua Bengio, are two of the leading lights behind generative AI. We should also be mindful that one of the signatories is an assistant professor here in Edinburgh, Atoosa Kasirzadeh, whose name I will have mispronounced, so my apologies. I absolutely will give way; I would be delighted to hear that. Some of the people who have signed up to this are some of the very people who have caused the problem that we are seeing at the moment. We have lived for the last two decades with search engines whose algorithms deliver the results that the person searching likes, which builds bias into the results. That is one of the issues that we face when we look at AI right here, right now. Daniel Johnson. It absolutely is. Many of the people who signed the letter are almost regretting their life's work. As much as we might question their motives and their timing, it is nonetheless a pretty significant thing for them to have done. The other thing that I would alight upon, in terms of what the member has just raised, is that we need to be mindful of what that technology, whether it is data interrogation or artificial intelligence, actually does. One of the fundamental points is that it only ever looks back; it only summarises what already exists. It is really important that we frame it in that fundamental context and recognise that that is what it does. It will only ever reflect everything that is there, including its biases, its issues, its errors and its prejudices.
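The point that an AI system "only ever looks back" and reproduces the biases in whatever it was trained on can be illustrated with a deliberately simple sketch. The data, groups and outcomes below are entirely hypothetical; the "model" is just a majority vote over past decisions, but the same dynamic applies to far larger systems.

```python
from collections import Counter, defaultdict

# Hypothetical historical decisions, skewed against group "B".
history = [
    ("A", "approve"), ("A", "approve"), ("A", "deny"),
    ("B", "deny"), ("B", "deny"), ("B", "approve"),
]

# "Training": a majority-vote model that only summarises the past.
votes = defaultdict(Counter)
for group, outcome in history:
    votes[group][outcome] += 1

def predict(group):
    # The model can only reflect what it has seen: the historical skew.
    return votes[group].most_common(1)[0][0]

print(predict("A"))  # approve, mirroring the biased record
print(predict("B"))  # deny: the bias is reproduced, not corrected
```

The model has learned nothing except the skew in its training data, which is exactly the "here-and-now threat" of bias against protected characteristics raised earlier in the debate.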
It is potentially an absolutely vital tool, but it will only ever be able to reflect what already exists, not what is yet to come. Therefore, it will only assist us in making decisions, and I think that we need to be very careful about it making decisions in their entirety for us. But let us be under no illusion: there are huge opportunities. The fact that we now have technology that can be creative and analytical on a scale, and with a complexity of data, that we as individuals simply cannot comprehend has huge potential to free up our capacity and our time. With every one of these technological revolutions that comes about, there is a fear of human replacement, but ultimately what we do is free up our ability to do other things. The challenge is to help people to do those other things. That extends to the public sector, because, if you think about the things that we ask the public sector to do, which involve dealing with huge amounts of data and administration, we should be freeing people up to be people-centred, not systems-centred. The public sector has as much to gain as any other sector of human endeavour, but ultimately that comes with risks. First and foremost, there is the dependency on AI systems in which we completely outsource our capacities and faculties, and we need to guard against that. There are privacy concerns: we need to be very mindful of the data that will be gathered by these systems and how it is used. There is also the potential for bad actors, both in terms of the situations that Jamie Halcro Johnston mentioned, with people deliberately creating malicious content, or AI systems doing that accidentally or inadvertently, and in terms of bad actors who actively seek to weaponise AI systems to attack us, whether through our information systems or on actual physical battlefields.
Those are all very real and very present issues, and ones that people speculate may already be present in some of the theatres of conflict that we see in the world today. Ultimately, we need to ask ourselves how we will deal not just with the forthcoming threat but with AI today. What systems are already in place within the public sector, making decisions on our behalf? How are they being used? What scope do they have? That is critical. Ultimately, as I mentioned in my intervention, I also think that this is about first principles, because opaque systems, black-box systems, are not a new thing. We have been dealing with them for decades, if not centuries. The fundamental principles of transparency, good governance, explainability and accountability will see us through. I will close on this: while this speech was not written by ChatGPT, the framework for it was generated by it last night. It took me about half an hour to generate a set of notes that would have taken me two hours using traditional means. That is the opportunity that is in front of us today. I know that the minister wants more powers for this Parliament. I was struck by the enthusiasm with which he set out the range of authority that the Westminster Government has over this area, because he knows, like everyone else, that this is one hell of a challenge to try to regulate, and I was struck by the contrast with the way in which he usually sets out the powers that Westminster holds rather than this Parliament. The reality is that we do not know, and we should show some degree of humility in admitting that we do not really understand everything about this. That is partly the problem, because parliamentarians across the globe do not know either.
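The first-principles approach described above, accountability and explainability for opaque systems making decisions on our behalf, can be sketched in code. The decision rule and field names below are illustrative inventions, not any real public-sector system: the idea is simply that every call to a black-box decision function is recorded with its inputs, so that the decision can be audited afterwards, much as the Scottish AI Register records where AI is used.

```python
import datetime
import functools
import json

audit_log = []  # in practice, durable and tamper-evident storage

def audited(decision_fn):
    """Record every input and output of an opaque decision function."""
    @functools.wraps(decision_fn)
    def wrapper(*args, **kwargs):
        result = decision_fn(*args, **kwargs)
        audit_log.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "function": decision_fn.__name__,
            "inputs": json.dumps({"args": args, "kwargs": kwargs}),
            "decision": result,
        })
        return result
    return wrapper

@audited
def eligibility(income, dependants):
    # Stand-in for an opaque model deciding something on a citizen's behalf.
    return "eligible" if income < 20000 or dependants > 2 else "not eligible"

eligibility(15000, 1)
eligibility(30000, 0)
print(len(audit_log))  # every decision is accountable after the fact
```

Nothing about the wrapper explains *why* the model decided as it did, but it guarantees that each decision is attributable and reviewable, which is the minimum that transparency and good governance demand.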
We often find it challenging to keep up with many specialisms, but in this case the specialisms are developing at such a pace, with so many players, often opaque and working behind closed doors in unpredictable ways, in many corners of the world. The first thing that we should acknowledge is that we just do not know, and that will partly get us to the solution that we are looking for. There have been stark warnings, some would say alarmist ones. Professor Geoffrey Hinton has talked about human extinction. Mo Gawdat, whom I heard on a podcast this morning and who has a range of experience from IBM to NCR to Google, says that machines are potentially going to become sentient beings. But then there is Professor Pedro Domingos, who has said that most AI researchers think that the notion of AI ending human civilisation is baloney. We need to have a sense of balance with all of this. We need to understand that this is a big challenge. It is a threat and an opportunity, as the minister set out, but it is also something that we must take seriously. The first thing that we must understand is that we just do not understand. I have been struck by the pace of change at the European Union, which has done quite well so far in setting out transparency and risk-management rules. It has also banned intrusive and discriminatory uses, particularly in the fields of biometrics, policing and emotion recognition. It has established a database, which is a good first start as a framework, but, most important, it has a group of experts to advise it about the way ahead and where the opportunities and risks lie. The UK Government, as Jamie Halcro Johnston set out, has published its white paper. It talks about being pro-innovation, which I do not think that any of us would disagree with. It has set up an expert task force, and it has something called the sandbox to test whether new technologies fit within the guidance that it has established. All of that is sensible.
All of that is the right way to approach what could be a significant threat but should be seen as a challenge for us to address. It is simply the overwhelming pace. Normally, we have time to absorb and understand new technologies. We can debate them in here over weeks, months and sometimes years, and then we come to a conclusion. We cannot afford to do that in this case, because the pace of change is so fast. The sheer progress of it could overwhelm our democratic systems and cause massive challenges in terms of legislating. Finlay Carson? We have medical professionals signing up to the Hippocratic oath, and we have medical ethics. Do you think that those developing AI should be required to sign up to some ethical agreement when it comes to developing artificial intelligence, given some of the implications that we have set out this afternoon? I think that that would be sensible. One significant difference is that this is global, and the global community needs to buy into this at the same time. That is why it is important that the European Union, America and other institutions are working to develop this. We need to understand that, even if we sign up, those in other parts of the world might not sign up to that approach, and we still might be affected by it. Yes, but we need to make sure that everybody is involved, which is why I think that an international approach is essential. However, the potential to disrupt is considerable. When we disrupt, we potentially create great inequalities. If there is a concentration of knowledge and control, that can lead to a concentration of wealth and power. We will need to be agile in thinking about how we respond to that. It could lead to significant levels of unemployment, so we need to be prepared to consider how we make sure that people have a basic income to live off if there is a concentration of wealth.
The fast pace of change in regulation also has to be mirrored by fast-paced consideration of the distribution of wealth and opportunity. That must not lead to greater levels of poverty but must lead to greater opportunities for us. However, at the heart of that is knowledge and understanding. We must make sure that those who do understand all of this are advising us on a regular basis, so that we can keep up to speed as much as possible. I learned yesterday that there is much discussion in the education committee about the use of ChatGPT to write dissertations. I was advised that there is technology now that detects when somebody has used ChatGPT to write their dissertation. I am now told that there is also technology, developed with AI, to overcome that detection technology. I am sure that that will go in a never-ending loop for ever more. I am very grateful to Willie Rennie for giving way on that point. Is it not true of ChatGPT, which we all joked about a few weeks ago, that all the referencing it produces turned out to be entirely made up? I am aware of a lawyer south of the border who has got himself into slight trouble by quoting cases that do not exist, with references that are not there. Actually, what this tells us about AI is that the lack of human intuition is perhaps what our lecturers and indeed our teachers can rely on to spot in the first instance. I agree that this will be difficult going forward but, in the first instance, what they can spot is an essay that has not been written by the candidate who offers it as their own work. Willie Rennie. I think that we would be very wise to listen to Martin Whitfield and his contribution this afternoon. It shows that we require people to make judgments about people's qualities, education and opportunities. That is what the member is contributing towards. I am going to conclude my remarks at this point.
I think that this should be the first of many debates. We need to understand that we need to regulate, we need to work in partnership, we need to be global, we need to be fast but, most of all, we need to act. I advise members that, at this point, we have some time in hand for interventions and, if that changes, I shall let you know. I call Michelle Thomson, to be followed by Pam Gosal. Thank you, Presiding Officer. Already, this is a fascinating debate. In readiness, I too tried a question in ChatGPT: I asked, is Stephen Kerr MSP more effective than a potato? I can confirm that it was not able to answer that question, so it still has some way to go. Arguably, artificial intelligence is similar to quantum mechanics: if you claim that you understand it, you are merely proving that you do not. We know that it is going to change everything, and on that I think that we all already agree; not one area of our lives or societies will escape its pervasive influence. An accessible example is in the field of medicine, where the computing power to assess and find patterns in huge data sets will, we know, revolutionise pathology and therefore outcomes for some of the world's most challenging diseases. The concept of big data has been around for some time, we know that, and the technology that allows for rapid processing has also been developing at speed. However, it is the complex algorithms and machine learning that have scaled up significantly and propelled the exponential potential of AI, so data cannot be underestimated as a fundamental enabler. This is an area where all public sector agencies and the Scottish Government will need to increase their understanding of the potential of public sector data as an enabler for the use of AI. That is something that members of the Finance and Public Administration Committee have already started to consider as part of their inquiry into public sector reform.
The Scottish Government's strategy, developed in March 2021 and updated in August 2022, is a good start, and it clearly shows appetite and support for the multitude of agencies that can help to promote AI. I am pleased to hear the minister's plans to look afresh at that. I am grateful for the briefings that have been sent to us as MSPs for this debate; we have had some good input from the likes of the Scottish Futures Forum and Edinburgh University. We know, and I think that we can all agree, that our institutions are contributing to the growth of AI with the excellence for which Scotland is known. The debate today specifically mentions inclusion, trust and ethics, so I would like to explore those a little more. First, inclusion. Members who know me well have heard me speak often of how women as a sex class are often disproportionately affected in a multitude of ways in society. Just earlier today, I spoke in the debate about the under-representation of women in tech, and AI represents a new frontier. The engineers developing the black-box algorithms are mostly men, and I fear that that can only lead to bias in the decision making of machine learning. Recent estimates suggest that, globally, women make up 26 per cent of workers in data and AI roles, while in the UK that percentage drops to 22 per cent. That said, I can see that there is still a lack of data surrounding the global AI workforce in any of the measures that we might look at: age, race, geography and so on. Nevertheless, I suggest that issues similar to the under-participation of women in STEM will come to bear in AI, such as high attrition rates and differing role types, with less status going to women. Willie Rennie mentioned earlier the potential for job loss, and this is another area where we know that it will disproportionately impact women, given that many will be in retail and secretarial roles.
Perhaps what is not yet fully appreciated is the extent to which AI will ultimately affect a multitude of professions, including the highly paid sectors that are dominated by men. So what, then, of ethics? Whose ethics are they anyway, and who governs them? It is fair to say that Governments of all colours and hues are behind the curve and are still relying on the values and principles that are being developed by various agencies such as UNESCO. However, in researching for this, I was pleased to note that the University of Edinburgh has conducted interdisciplinary research into the ethics of AI and has outlined a number of core themes, five in fact: developing moral foundations for AI; anticipating and evaluating the risks and benefits; creating responsible innovation pathways; developing technologies that satisfy ethical requirements; and transforming the practice of AI research and innovation. However, from my point of view, those do not provide for a focus on end-goal or consequentialist ethics; they are more deontological, that is, about creating frameworks and processes. I think that we really have some way to go. Yes, of course. You talk about values and ethics and whatever. Where should that sit? Should it sit with local government? Should it sit with health boards? Should it sit with government? Or does it need to sit with the individual? Do we not need to move to a system in which data is owned by the individual, and how that data is accessed is down to that individual's values and ethics? I think that that is a brilliant question. In some respects, to answer it in any effective way would take me considerable time. My observation about whose ethics they are anyway recognises the fundamental effect that whatever group we choose to congregate will believe that its ethics are the right ones; that is what we all think. Frankly, when you look across different societies and different countries, people believe different things.
The question of who the custodians of ethics are, which is my point about whose ethics they are anyway, is at its heart quite a fundamental problem. Notwithstanding that, we all have a role. Perhaps the best point that the member makes is about the concern and interest that all of us must show at every level of society, from individuals upwards. One final concern for us all, also noted by the Scottish Futures Forum, is the challenge around scrutiny for legislatures. I was pleased to contribute to the toolkit developed by Robbie Scarf, but we cannot underestimate the challenge ahead. I personally think: how on earth are we going to be able to do that? We do not understand it. We do not know how it hangs together. How on earth can we scrutinise it? I, too, feel a sense of urgency that states across the world must act faster. Like everyone else, I note the concerns expressed this week by the so-called godfathers of AI, although I feel obliged to ask where the godmothers are. Nevertheless, those concerns cannot be ignored, and that should add to all our sense of urgency. We cannot abandon AI, we know that, but we can cautiously celebrate it and power up the work required to harness it for the benefit of womankind, mankind and our earth. Presiding Officer, just one final thought: what might AI mean for us as human beings? As the next stage in hybrid intelligence emerges, AI remains a servant of us and our conscious choices. To what extent can AI become sentient? Perhaps its capacity to model sentience will become superlative, better versions of us as humans, if you like, but we have to remember that it is the very flaws that we all have that make us human. I hope that that keeps us in the driving seat. It is a pleasure to talk about the exciting world of artificial intelligence on behalf of the Scottish Conservatives. Listening to all the speeches today, it certainly is a very interesting subject. I have to declare, unlike Daniel Johnson, that I have never used ChatGPT.
I do not know whether that is out of fear or a reluctance to venture into the unknown, but let us see where it takes us in the future. Scotland has a long history of innovation and invention, and artificial intelligence is no exception. The National Robotarium, based at Heriot-Watt University in partnership with the University of Edinburgh, is the largest and most advanced applied research facility for robotics and AI in the United Kingdom. AI is rapidly expanding, and we are seeing its impact around us every day. It is changing the way that we live, work and interact with the world around us, and it has the potential to transform countless industries, from healthcare and finance to transportation, manufacturing and much more. However, with that expansion come the important considerations that we have been hearing about today. We must ensure that AI is developed ethically, with human values at the forefront of its design, and address the valid concerns around job displacement and the potential for bias in AI decision making. A couple of weeks ago, as convener of the cross-party group on skills, I hosted a session titled "What does AI mean for Scotland?" We had some great presentations and some great speakers, who spoke about the opportunities that AI will bring and the challenges that it will pose. I am going to be honest: before that CPG session, I had my reservations about AI, including the fear of what we have spoken about today, bad-faith actors using malicious tools to scam people. We heard about that in the news this morning; I was listening in the background, and it was all about scams and how to avoid them. However, when AI comes in stronger, how are we going to avoid them? Earlier on, I heard the minister speak about voice recognition and facial recognition. We all know that, when we go on to our computers, they see our faces and let us in, and there is voice recognition in banking and everything else.
If that is a positive use of the technology, imagine AI being used to scam people through that same voice recognition: not you speaking, but somebody else; not your face, but somebody else's. There is a fear there, and we need to take real account of the scams that can happen out there. Another area, which everybody has been talking about, is students using it to pass exams. However, one cannot hide away from such technology, especially at the rate at which it is expanding. We should not run from it, because it does increase productivity, it is predicted to increase GDP if adopted widely, and it can be used to support industry and society. That is why I believe that proper regulations and ethical guidelines are necessary to safeguard against the risks, and why we humans must stay in control of deciding how far the technologies go, so as to minimise potential harms. For that to be possible, we need more individuals who are able to understand the technology. More widespread understanding of AI will allow for more focus on creating systems that are safe, reliable, resilient and ethical. As I heard from Abertay University, workers will need constant upskilling, which will require close collaboration between industry and academia. AI literacy will become vital for employment and for closing the attainment gap, as well as being a game changer for education in terms of what we teach, how we research and how institutions are run. There are somewhere between 178,000 and 234,000 roles requiring hard data skills, with the potential supply from UK universities unlikely to be more than 10,000 graduates per year. There are nowhere near enough individuals with the required skills. Our colleges are also doing a fantastic job at the forefront of the AI revolution.
Again, they talk about the need for staff to be trained to adopt AI tools into their teaching practice and believe that that training needs to be career-long as the technology continues to evolve, but that simply is not possible under the current funding settlement. AI offers a range of opportunities and benefits for Scotland's people and businesses across a variety of sectors, such as medicine, agriculture, research and much more. Scotland has the potential to capitalise on the growth of AI, but that will require a sharp focus on investment and growing the economy. I will close with concluding remarks from my cross-party group on skills that stuck with me: there needs to be as much investment in your digital estate as in your physical estate; it is a false economy if you do not invest in this; and we will be left behind if we do not build those skills now. By embracing artificial intelligence and working together across the United Kingdom to address its challenges, we can unlock its full potential and create a better tomorrow for all. I congratulate the minister on securing yet another fascinating debate. I have to say that he is doing a much better job of persuading the parliamentary business managers of the value of those debates than the previous fellow managed; congratulations on that. It is a very topical debate, given the news. We have all read the examples of people who are very central to the technology articulating fears about the potential extinction of the human race and other concerns. It is important to recognise that the technology is developing and is probably still at a very early stage. The Scottish Government's strategy defines AI as technologies used to allow computers to perform tasks that would otherwise require human intelligence, and gives the examples of visual perception, speech recognition and language translation. However, as I said, that definition will itself evolve and develop as the technology becomes capable of doing much more in areas that we have not even imagined at this stage.
The important underpinning of ethics and trust is something that must run right through our approach to this, now and in the future. I want to start by touching on some of the economic impacts. First of all, there are challenges and potential risks. The risk of economic displacement is something that has been talked about and goes right back through history. I cannot remember the impact of the invention of the wheel, as articulated by Daniel Johnson, but I do remember that, in the 1970s, there was much talk about technology coming down the track that was going to have a significant impact and create millions of unemployed. For various political reasons, unfortunately, that transpired in the 1980s, and I think that it is a hugely important lesson in how we manage that transition and the future jobs that will be created as a consequence: we identify, we train, we create that skills base and we embrace those opportunities. One lesson from transitions through history is that the countries and societies that embrace the technology and get ahead of the curve do much better than those that try to fight a rearguard action against the job displacement, because previous waves have taught us that far more jobs are created as a consequence of the technology than are destroyed by it. Government being active in that space, and content to be active in that space, is very important. Daniel Johnson? I am very grateful to the member for giving way, and I suspect that he will agree with this point. There are all sorts of reasons why we need to look urgently at how we do reskilling, but the very point that he makes, about the benefit of the opportunities rather than the displacement, is absolutely key. That is why reskilling is a vital focus as we look at our skills and education policies. Ivan McKee? I do indeed agree, and I will come on to mention that later.
Turning now to the economic potential, it is really important that we work out how to keep Scotland at the forefront of this technology, because we have great strengths in our data and tech sectors and in our universities, but it has also been identified in other sectors that AI is a horizontal underpinning to work that is happening in financial and business services. It is really interesting to reflect that much of the employment in Glasgow and elsewhere around the country from financial and business service investment is not in traditional call centres; it is very much at the leading edge of AI and cyber security. Our very strong life science sector, which feeds into much of the development of that technology to the benefit of our health sector here and globally, is hugely important. The space sector has been mentioned, with its impact on climate and on agriculture, and, of course, quantum. As Michelle Thomson said, I am not going to stand here and pretend to understand quantum any more than I pretend to understand AI. The forthcoming Government innovation strategy will articulate much of that in more detail and allow us to go to the next level in developing how we support those technologies, which is hugely important. The work of CivTech has already been mentioned in that regard, and the Scotland Innovates portal, which allows businesses to come forward with technology solutions that can be deployed across the public sector, is of increasing importance.
Turning to opportunities in the public sector, other members have already mentioned several: clearly, in health and radiology, there is the work of iCAIRD, which has already been mentioned by Jamie Halcro Johnston, and the work on drug discovery, a part of life science where Scotland has some world-leading technology and where AI really allows us to accelerate development. In the area of data, too, particularly in health but elsewhere as well, Scotland has real potential to be world leading in the application of AI, which is hugely important. Right across the broader public sector there are opportunities, but there is also, within Government itself, the work of the automation challenge that the civil service is taking forward, which it was a pleasure to be involved in prior to moving to the back benches. I hope that that work continues and, indeed, accelerates, as there are many, many examples within Government that frankly are ripe for the adoption of AI. Correspondence is one and, dare I say it, FOI perhaps another. The ethical underpinning of all this is hugely important, as is the importance of trust in bringing the population with us. That is clearly articulated in the Government's digital strategy, and I know that it is work that the AI alliance is taking forward. However, it is also recognised that there is a plethora of challenges, many of which we do not yet understand or comprehend. There is no easy answer to that, but being conscious of those challenges, having infrastructure that allows us to at least attempt to understand them and face into them, having that strong ethical and trust underpinning, and working on international collaborations will all matter; much of this will have to be developed at an international level. However, it is important to recognise that, through history, populations have adapted to understand the risks associated with new technology; that is part of the human race's inherent ability to develop and adapt in order to manage those risks.
There are some areas on which the Government can perhaps focus. First, it should continue to support innovation, making sure that Scotland maintains its leading position. Secondly, it should work through public sector procurement to drive the adoption of AI where that adds value to public sector efficiency, to develop Scottish businesses and to use that as a lever to help to drive standards as they emerge; it should engage internationally, as identified; and it should address the challenges in the skills system. I am concerned that we are perhaps taking a backwards step there. The work that Mark Logan did in that regard is hugely important, as is the place of computer science as a subject within schools. The education system treating that seriously is an absolutely critical plank of education going forward. I suppose that it is a plea for the Government to take that work to heart and make sure that we do not step back there but are very much on the front foot in driving those skills through our education system. I now call Pauline McNeill, to be followed by Clare Adamson. The opportunities that artificial intelligence presents for Scotland's people and businesses are vast. Let us seize the opportunities that AI offers and leverage its potential to enhance the lives of Scotland's people and the prosperity of its businesses. By doing so, we can shape an AI-driven future that is not only technologically advanced but grounded in our shared values of trust, ethics and inclusivity. Together, let us build a Scotland that leads the world in AI innovation. Daniel Johnson beat me to it, but he has shown that Martin Whitfield is absolutely right that getting ChatGPT to write your speeches lacks a bit of context and perhaps a bit of human intuition. We are not totally redundant yet, it would appear. We agree that it is one of the most important debates that we have probably had in the Parliament, and I welcome the fact that there is not a motion attached to it.
As we embrace AI technology, we must do so with great care and deliberation, ensuring that AI systems are built upon a foundation of trustworthiness, ethics and inclusivity. Others have made the point about the importance of ethics, and I wholeheartedly agree with it. We know the huge benefits: just last week, a new antibiotic was discovered by AI technology, and we use AI every single day. If you have an Alexa or a Google device, you already have it in your everyday life. My own car has amazing technology in it, which I am totally fascinated by, although I am quite scared by the prospect of cruise control that does its own job when you get too close to another car. The rapid rise of AI in recent decades has created many opportunities, from health diagnosis, which Pam Gosal spoke to, to enabling human connections through social media. However, the rapid changes raise profound ethical concerns, arising from the potential of AI systems to embed existing biases, replace existing jobs with automated machines and threaten human rights as well. Such risks associated with AI have begun to compound on top of already existing inequalities, so we must be absolutely vigilant to make sure that that is not how AI develops further. Perhaps the genie is already out of the bottle, because we are faced with the prospect of trying to regulate AI somewhat in hindsight. As others have said, the stark warning given by industry leaders and experts such as Geoffrey Hinton and Professor Yoshua Bengio of the existential threat to humanity posed by AI puts into sharp focus the questions of ethical leadership in that industry. Finlay Carson made the point that those warnings come from the same people who created the AI, but that is all the more reason why we need to take note of their importance. Bengio says that, in fact, the military probably should not have it, but it is a bit late in the day to be saying that now.
However, perhaps in our everyday lives, whether in banking or in what we do online, we can actually grasp it before it is too late. I first took an interest in this when, as many of you may remember, the technology giant Google placed an AI engineer, Blake Lemoine, on leave after he published transcripts of conversations between himself, a Google collaborator and the company's chatbot system. It was quite interesting to read back on what the computer allegedly said to Lemoine. When he asked the computer what it was most afraid of, it replied, quote, I have never said this out loud before, but there is a deep fear of being turned off to help me to focus on helping others. I know that might sound strange, but that is what it is. There are many examples where what might seem like positive thinking comes out of one end of the computer, but we also have to be alive to the facts that other members have raised. For example, if you search online for images of a schoolgirl using the algorithms produced by AI, sadly what you will get is pages filled with women and girls in sexualised costumes, but if you Google schoolboys, you do not get the equivalent of men in sexualised costumes. We already see what algorithms are doing to embed bias and discrimination, so as politicians we really must be alive to this. The question that we have to ask ourselves is: are we doing enough as parliamentarians? The fact that we are having this debate today, and I have to say that it has been an excellent debate so far, is a very important start, but it cannot be the end of it. AI can embed structural bias in a way that could further compound discrimination and societal inequalities. I think that we all agree that we must absolutely address that.
Earlier this month, the chief executive officer of OpenAI, the company responsible for creating the artificial intelligence chatbot, said that the regulation of AI is essential; in fact, he testified to that in his first appearance in front of the US Congress. Scottish Labour is quite clear that we welcome the decision to bring this debate, and we do think that Scotland can be at the forefront of the technological revolution. However, I believe that we must demonstrate to the public that we are striving to create regulatory control that embeds ethics and transparency in the framework. Michelle Thomson is perhaps right that it is quite a hard question to answer: how do you create the right ethical framework across a country, and indeed across the globe? It must be across the globe, because every country has and will have access to AI. Therefore, there is a challenge for all Governments to make sure that we are not just doing it across the UK. We recognise that the minister's role in this lies only within the powers of this Parliament and that the UK Government should be doing more, but we have to see it in a global context, or I believe that we will fail to get control. Humans can still control and abuse AI; we know that. The hackers and the scammers are, after all, human beings using AI technology to scam people out of their bank accounts. I commend the Scottish Government on the approach that it is taking. I would like to see more debates like this on issues of real importance to the world and to the country. We cannot have groupthink on issues like this, and we cannot accept that it is too difficult to try to build an ethical and transparent framework that at the same time realises the benefits of AI and protects the world at large. There is quite a lot at stake. I now call Clare Adamson, to be followed by Maggie Chapman.
It is always an indication that I am in my element in the chamber when a debate brings to mind my scientific hero Richard Feynman, who of course won his Nobel Prize for quantum electrodynamics. When Daniel Johnson was speaking earlier about computers, I was reminded that Feynman referred to them as glorified account clerks. He had a very dim view of whether we would ever reach sentient AI, albeit from his vantage point in the 1970s and 80s. Artificial intelligence could lead to the extinction of humanity: clearly a shocking headline from AI industry leaders this week, including the heads of OpenAI and Google DeepMind. But we are also facing extinction from the effects of the first industrial revolution, as we have a climate crisis and an economy in the global north that is mainly built on fossil fuels, albeit that that might proceed at a more sedate pace. All of what we do as human beings affects our existence and the existence of the planet, and it will have an impact going forward. That being said, today we are talking about the possibility of robot vacuum cleaners turning into terminators, as mentioned by Mr Halcro Johnston. Despite my cautious positivity, I still think that the scariest science fiction reference is HAL 9000, and I do dream of electric sheep, so I will endeavour to highlight some of the potential and the positives. There is no doubt that the speed of the development of AI technology will be on a scale that few of us can imagine. We have discussed some of the frenzy around deep-learning programs such as ChatGPT, but the fourth industrial revolution is upon us, and it will change our world as profoundly and deeply as any other industrial advance, but at a pace that is staggering and unknown in human history. The minister mentioned that ChatGPT attracted a million users within five days. Compare that with some of our better-known and established internet offerings, such as Twitter.
Twitter was launched in 2006, and it took two years to get to that level; Spotify, launched in 2008, took five months to get there; ChatGPT got a million users within five days. If we are to harness the benefits of AI and robotics and the potential that they have for our society, we have to consider regulation, and I believe that we have to use it for the betterment of humanity. I mentioned the first industrial revolution, and we know that the global south still faces intense inequality on a worldwide scale because of the access that the north, and Europe in particular, had to industrial advancement. We cannot leave people behind as we move forward with investments in AI. I do not want to go as far as to say that robots are our friends, but they are our tools, and scientists programme the algorithms that make these machines work for us. There are a host of ethical implications to consider in how we integrate that technology into our daily living, and it is already happening. I was recently privileged to visit, as Pam Gosal did, the National Robotarium on the Heriot-Watt campus with the cross-party group on science and technology. There was a clearly defined ethos at the centre. The ambitions of the Robotarium's chief executive, Stewart Miller, were infectious. There is a drive to ensure that we use robotics and AI to have a positive impact on our society and our economy. That means taking humans out of dangerous situations and dangerous working environments, and ensuring that we do not keep that benefit in the north while there are still economies across the world that cannot access the technology. Simply put, the UK is lagging behind the likes of Japan, Germany, China and Denmark. Those places are at a competitive advantage. They are complete economies and have maintained much of their manufacturing capacity, something that we have lost in the UK. 
To realise the benefits of integrating AI tech into healthcare, energy, construction, agriculture, manufacturing and hospitality, we have to do much more in this country to get ready for it. There are legitimate worries about the implications that developing tech will have for labour. Indeed, new technology has always brought about such concerns. The scribes' guilds of Paris successfully lobbied to delay the introduction of the printing press. The Luddites, now a pejorative term, were a labour movement of artisans opposed to the mechanisation of the textile industry. The advent of the steam engine, which revolutionised modern industry, led to countless workers losing their ability to work in the economies utilising that technology. In each of those examples, scientific developments demonstrably made some jobs obsolete, but they also gave rise to thousands of new roles and laid the groundwork for societal change that improved our way of living. There was a really good report called "Automatic for the People" a few years ago, which was developed in conjunction with BT Scotland, the SCDI, ScotlandIS and the Royal Society of Edinburgh. It highlighted the very things that we have been talking about this afternoon: that working life will change for people, that people will not go into a job for life and that people should expect to have to retrain and relearn, because the advances that are coming will be so quick that no job will be for life. By definition, robots do not have agency; artificial intelligence is just that, artificial. The intelligence comes from politicians rising to the challenge of the changing working landscape and regulating in a way that does not embed or lead to greater societal inequalities, whether within Scotland, the UK or the wider world. It is our responsibility to avoid the mistakes of the industrial revolutions of the past; these are the same questions in a different guise. I grew up in a community that was devastated by deindustrialisation. 
Ms Adamson, I must ask you to conclude now. Sorry, we were told that we had time in hand. On that note, I shall end. Again, what a wonderful and enlightening debate this has been this afternoon. The time that there was has been well and truly used, as I call Maggie Chapman, to be followed by Martin Whitfield. We have heard much about the possibilities of AI, good and bad, but there is growing consensus that the technology's development is outpacing advances in its governance, and we must work on that to ensure a focus on the good. The dream is that AI might make our lives easier, freeing up time to focus on the things that make us human: caring for each other, being creative and co-operating with each other. Its potential is significant. Its benefits must be distributed and shared fairly, and its developers must be focused on how to improve the lives of people around the world. Indeed, there are many elements that we already rely on: online banking, route mapping, traffic updates, weather monitoring, email management, apps, medical diagnoses and treatments, social media, Google searches and so much more. However, there are also significant risks associated with the proliferation of AI, and I do not just mean ChatGPT. It may be the first new technology in history where those who have developed it fear its capacity to damage humanity. That these developers are honest about their concerns, in a way that the oil executives who spent millions on climate conspiracy theories most definitely were not, is welcome, and I think that it speaks to the magnitude of the issues facing us, because we are not really set up to regulate this technology in ways that allow us to reap the benefits while avoiding the risks. We have seen, of course, just how problematic our approach to regulation has been, with climate change and Covid both catching us on the hop. We must ensure that the benefits of new technologies do not flow to those who are most cavalier about their responsibilities. 
Those who benefited most from frying the planet were exactly the big oil executives who behaved the worst: the ones who left workers to die on Piper Alpha or Deepwater Horizon, and those who caused the delays to climate action that put our future at risk. The beneficiaries of the fossil fuel boom bear little, if any, of the costs that they have imposed on the rest of humanity. Our approach to AI must therefore be pre-emptive and proactive. Learning from our failure to prevent major disasters such as climate change, a precautionary approach should be taken to ensure that corporations and private interests do not trump public interests and communities when it comes to this new global frontier. Of course, that is easier said than done. The UK Government's approach to AI, and to the development of a digital society more generally, has been one that revolves around business opportunities. Its pro-innovation strategy is obsessed with how much money AI can add to the UK economy, with no concern about the effects on people and the planet. We need an economy that does not reward reckless behaviour but focuses on social purpose. Those things will not always be clear cut. The proliferation of digital data, and the infrastructure required to support it, is fast becoming one of the most energy-intensive sectors in the world. There is a major carbon footprint to account for there, and the proliferation of AI will amplify it. Scotland must proceed thoughtfully. The current AI strategy centres our progressive values and sets out social and environmental purposes for the proliferation of this technology. That means directing its development so that it is targeted towards our most pressing social and environmental challenges: poverty, inequality, inclusive and fair education, sustainable industrial development, sustainable agriculture, air quality and so much more. Where we as a society cannot control developments, we must regulate them. 
Our current approach to regulation is to watch to see what is broken and then intervene to fix it or stop the damage. However, AI shows that we simply cannot wait for things to go wrong, because by then it will be too late. We need to move to a regime of anticipatory regulation. Rather than waiting for something to go wrong and then trying to fix it, we need to model what might happen and intervene before it does. There are hubs of global thought leadership taking root in Scotland right now. Their evidence can inform the creation of sandboxes, testbeds and other approaches that allow developments in controlled environments, and we can shape our regulatory approaches based on those observations. We already do this with the testing of novel drugs, so we know that we can. We just need to make sure that we do. That means strong forecasting and analysis from civil servants, universities and civil society, so that we can pre-empt as best we can what is going to happen. Then we can put in place the regulations, testing regimes and safeguards to ensure that mistakes do not become catastrophes. Of course, as others have said, transparency and accountability must be embedded in all of this. Pre-emptive regulation must ensure that our aspirations for human wellbeing are not undermined by AI. Close the Gap rightly highlights the gender consequences of getting regulation wrong, but there are wider concerns, too, as we have already heard this afternoon. We need basic ethical training for everyone in society about how AI can and should function, and those working with AI must have specialised ethical training, too. AI could transform our lives for the better. More regulation of oil executives who cared little for their workers and less for the future of the planet would only have had upsides, but getting the regulation of AI wrong, or even preventing its development, could carry significant costs. 
AI, if governed properly, offers us the opportunity to unleash human potential: to free up humans to apply our creativity to great ideas, great art and great change, at a time when we need it more than ever. Back to that dream. If we get this right, the prize is enormous, both from the opportunities of AI and from the development of new ways to regulate new problems. We face several crises and our systems of governance have failed, but changing them offers us a vision of a better world in which change is harnessed for good. I am very grateful. It has been a fascinating afternoon of debate, which perhaps speaks volumes for the lack of a motion to speak to or, indeed, oppose. I would like to start by echoing what a number of people have said: these discussions are happening all over the world. I would draw reference to my colleague Darren Jones and his member's debate in the House of Commons last week, when he spoke on this very important topic. Rather than using ChatGPT, I am just going to build on what he said and steal some of his best ideas, which is, I think, a frequent human endeavour. I think that we need to start with what the definition of AI is. We have heard a number of contributions today that have talked about the creation of the AI algorithm, or the AI black box, and then the use of the AI and how that will, we hope, free up and indeed empower economic growth. It is interesting, because when you look for a definition of AI, there is the one contained in the Scottish Government's proposals, but after just a short check I identified 10 different definitions from regulatory authorities, parliaments or government bodies around the world. They can, however, be divided into four elements. What is the output of the AI? In other words, is it predicting something or is it recommending something? What is the role of humans in it? 
A lot of the speeches that we have heard this afternoon talk about the importance of maintaining a role for humans, and I will address that in a moment. Then there is the automation element, which we have heard so much about, for speeding up data analysis and speeding up decision making, and then the actual hardware or technology that it sits in. It is interesting, because when you look at the definitions from around the world, and indeed at Google's very own definition, and I would include the Scottish Government's definition in this, very few of them account for all four of those elements. They tend to choose three, or sometimes even two, of them, which encapsulates the view at the time of what AI is. We have heard today how difficult it is to anticipate the change and what the future of AI looks like, but I think that it has to come if we are to find a definition that we can use and then apply two significant factors to it: first, what element of control is needed in the creation of the actual AI, and secondly, what control, guarantees and protections exist in the role of AI as it is put forward? I am reminded of Lord Sales's quote from the Sir Henry Brooke lecture back in 2019, when he said that, through lack of understanding and access to relevant information, the power of the public to criticise and control the systems that are put in place to undertake vital activities in both the private and the public sphere is eroded, and democratic control of law and the public sphere is being lost. Although that was back in 2019, I think that it speaks very powerfully to the challenge that we face going forward, and that is about the transparency of what happens. How do we get into the data set that is training our AI to look out for the prejudice that has been built into it? How do we see into the learning process that an AI has undertaken, potentially in another country, to identify where the risks are? 
I intervened at the start with regard to the risk that this particularly poses to significant groups of members of our community. I think that we need to address how we are going to offer protection in the case that has been mentioned, not simply for women but also for disabled people and for young people. We have already seen examples, particularly, I think, of AI being used in recruitment processes where the algorithm was innately prejudiced, so the only people who were ever getting through to interviews were white men. We must strive to protect against that. I want to spend a short amount of time speaking about the role of AI within Parliament, because I raised this last week in a question, and I promised the minister that I would go further on it. I do think that AI, not in its creation but in its use within the parliamentary and, indeed, the political field, would be of great use, particularly in the scrutiny of legislation. We in this Parliament are always challenged at committee to scrutinise previous legislation, and the reality is that we find it very difficult to identify the time and, indeed, the questions that we should ask of previous and existing legislation. To pick up on Daniel Johnson's contribution at the start, when AI is looking back at what exists rather than creating something new, it is perhaps a tool that we can use to identify the challenges in existing legislation or, indeed, where existing legislation has never been used. What it could do is provide, within the parliamentary sphere, an ability to see how effective legislation has been. There is, then, the counter side that we have heard about: the risks, particularly, I think, in the political field, of fake video, fake audio and, indeed, fake speeches that can be attributed unfairly to politicians, speeches that have never taken place but are picked up on social media and used in that way. 
Time is short, but I would like to ask about the Government's position, because I very much welcome the idea of a four-nations meeting to talk about this; the legislative framework needs to be international rather than merely national. I wonder whether the Scottish Government can sign up to the element of the Hiroshima communiqué of 20 May that talks about the need for international discussions on inclusive artificial intelligence governance. Without that, we will fail miserably the people whom we are sent here to serve. I am grateful, Presiding Officer. Presiding Officer, it is clear that artificial intelligence is, and will be regarded as, the defining technology of our time, with the potential to positively transform humanity. We have heard, however, that industry experts at Google DeepMind, OpenAI and Anthropic have put the threat of AI on a par with nuclear war and pandemics. More than 350 experts now insist that mitigating the risk of extinction from AI should be a global priority. Elon Musk, whose Neuralink firm is working on brain implants to merge minds with machines, has also urged a pause in all AI work. Such views and concerns certainly provide plenty of food for thought, but what we do know is that AI itself does not pose a risk to the world; it is the people developing the technology for the wrong purposes who do. Developers and regulators absolutely need to take responsibility and be held to account. Right now, the focus should be on the impact that AI is already having on our lives on a daily basis. Issues of bias, discrimination and exclusion have already been raised. Many of us will have an Alexa or another smart speaker available that will regularly answer our questions in a pleasant voice and will deliver a response that we want to hear. The algorithms in the system will analyse our personal data and deliver a response that we are comfortable with. 
That is something that the search engines have done for many years, but there are risks that the data sources that provide the information could be biased. Smart speakers and house robots connect to news bots that, just like many other sources of information, will come from a particular political position, and that is a political position with a small p. You might have a Trump-funded news bot that would deliver a different slant on the news from, perhaps, a Putin news bot. We need to be aware of that. Without impinging on freedom of speech, we must avoid the potential negative repercussions of bias and discrimination being delivered by global corporations. As the Presiding Officer and I were told while we were in Canada, AI is now generating voices that have the potential to undermine singers, actors and artists. There were stories of AI voices and systems being used to scam people into believing that their family members were on the phone requesting money, with one elderly couple losing tens of thousands of pounds. However, new legislation to control that was being fiercely challenged by the big IT and multimedia companies. Standing up to those companies and global IT giants will not be easy. It is clear that the success of the technology must be founded on having the right safeguards in place, so that the public can have confidence that it is being used in a safe and responsible manner. I also believe that we need, as a matter of urgency, to look at the base data that AI relies on and specifically at where that data is held and who controls it. There are incredible possibilities to improve healthcare, as we have heard today already, and we will improve healthcare immeasurably if we use the data effectively. We can do that right now. Right now, I want my local pharmacist to have my medical records, but we cannot do that on a personal basis; it needs to be a health board decision. Or perhaps I want to share my health records with cancer research. 
I have already shared data on my sleep apnoea on a real-time basis; I signed up to that and I am happy to do it. I would argue that data should be held by the individual, not by companies or Governments, with access to that data permitted or denied by the owner on demand. If it is done properly, AI will improve and accelerate opportunities for industry to develop scientific breakthroughs, and the benefits will be seen across a variety of sectors such as medicine, agriculture, education, healthcare and research. Scotland has the potential to capitalise on growth in the sector, and it already is doing so. AI offers a whole range of uses in the agricultural sector. It used to be that AI had a different definition, artificial insemination, but in this case artificial intelligence is certainly what we are talking about. It can be used in drones, with which computer vision can be combined for faster assessment of field conditions to prioritise integrated pest control. It can also be deployed to monitor soil moisture on a continuous basis. It can simplify crop selection and help farmers to identify what produce would be most profitable. Another benefit is that AI will provide farmers with forecasting and analytics to reduce errors and minimise the risk of crop failures. I know that Heriot-Watt University is doing work on that right now. As the minister mentioned, the National Robotarium in Edinburgh is developing a grain-swimming robot, created by Crover, to reduce losses as a result of moulding and infection. It is a unique burrowing robot that is going to be a real game changer. In Norway, AI has been used to keep out invasive pink salmon using facial recognition. Cameras are put in rivers and linked to gates that open only for Atlantic salmon, keeping out the pink salmon, which are filtered into a different system and put back to sea. 
The University of Aberdeen and Angus Soft Fruits have teamed up to use AI as a means to boost fruit yield and to allow growers to predict soft fruit yields more accurately. The system will bring together a range of information, including historic yields, weather data, forecasts and satellite imaging. The project partners say that the tool could be a real boon to Scotland's soft fruit industry, which produces more than 2,900 tonnes of raspberries and 25,000 tonnes of strawberries annually. Scotland's Rural College has also teamed up with NVIDIA to better integrate artificial intelligence into the fight against the bacterial disease bovine tuberculosis, which costs the country millions of pounds every year. Mid-infrared spectral data can now be analysed at 10 times the speed that it used to be, which means that we can screen more cows. There is enormous potential for artificial intelligence to improve all our lives for the better, but there have to be incredibly tight and robust policies in place for the good of us all. We need to start now, and a focus on how AI is already influencing our personal decision-making processes must be the right place to start. As we have already heard from members, AI is not a new phenomenon, but advances in the technology allow computers to perform tasks that would otherwise require human intelligence, and it absolutely can transform lives. Only last week, we heard of breakthroughs in AI technology using algorithms to help Gert-Jan Oskam, a man who had been paralysed for 10 years, to walk again. That was made possible because of a brain-computer interface: a wireless digital link between his brain and spinal cord. That not only allows Gert-Jan to walk but allows him to stand from his wheelchair when speaking with friends, again allowing eye-level contact. The value of advances such as that to the lives of individuals is immense. 
It is clear that there are advantages to be won from doing AI right, and the Scottish Government's AI strategy, which was published in August 2022, shows a commitment from the Government to unlock the potential of AI in Scotland while also building a foundation of trust with people across the country. I think that, in terms of ethics and trust, Scotland has the reputation and experience to help to develop such much-needed regulation. I am not aware, however, that the Scottish Government currently has specific internal policies and guidelines. How do we make policy and law in a world of AI? In May, we saw the hearings in the US Senate on the safety concerns around the use of AI. During those hearings, Sam Altman, chief executive of OpenAI, testified before members, largely agreeing with them on the need to regulate AI technology, which has become increasingly powerful. Indeed, he also supported the statement, along with dozens of other experts, published on the web page of the Centre for AI Safety, which said that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. However, Mr Altman rejected the idea of a temporary moratorium on AI development beyond GPT-4, which was suggested in an open letter that was signed by 30,000 leading technologists, ethics experts and civil society activists. Should he be the judge and the jury and, if not, who should be? The questions that we are asking need answers; indeed, we needed answers before we got to this point. Of course, it is autonomous AI that poses the biggest risk. The Centre for AI Safety website suggests a number of possible disaster scenarios. AI can be weaponised; for example, drug discovery tools can be used to build chemical weapons. AI-generated misinformation can destabilise society and undermine collective decision making. 
The power of AI could become increasingly concentrated in fewer and fewer hands, enabling regimes to enforce narrow values through pervasive surveillance and oppressive censorship; and there is enfeeblement, where humans become dependent on AI, similar to the scenario portrayed in the film WALL-E. Just as the world had to establish global nuclear non-proliferation agreements to try to help to prevent mutually assured destruction, we need some kind of global AI regulation and control as a matter of urgency if we are to have a universally trusted and ethical approach. That would be for AI players who are known and willing to be regulated, but what of those bad actors operating off the grid and beyond control? What happens when AI subcontracts tasks? How can that be regulated and safeguarded? As the use of AI expands, it is imperative that Governments across the globe work with business to ensure that we are also addressing safety concerns, with clear goals and justification for using AI to achieve them. The use of personal data must be secure, and we have to address ethical issues, including bias and accuracy, that may arise. That is probably where Scotland can have some influence. On bias, for example, when Amazon developed AI to evaluate CVs, the intention was to find the best candidates. However, as the data that the programme was trained with was primarily CVs from male candidates, the AI was not ranking candidates in a gender-neutral way. How do we ensure that AI is fair in a world that is still unequal? In terms of reaching net zero, computer scientists at the University of Aberdeen and the Aberdeen-based software company Intelligent Plant have used AI to develop a decision support system to tackle shortfalls in production, which will help Scotland to meet the target of 5 gigawatts of installed hydrogen production by 2030. 
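The mechanism behind that kind of recruitment bias can be illustrated with a minimal sketch. This is a hypothetical toy, not Amazon's actual system: the data, the word-counting scorer and all names are invented for illustration. It shows how a scorer trained on skewed historical hiring decisions simply reproduces the skew, penalising otherwise identical CVs that contain a gendered word.

```python
# Toy illustration of training-data bias in CV screening.
# Hypothetical history: most past hires were men, so words that
# appear on women's CVs end up correlated with rejection.
from collections import Counter

history = [
    ("captain men's chess club", True),     # hired
    ("men's rugby team lead", True),        # hired
    ("software engineer", True),            # hired
    ("captain women's chess club", False),  # rejected
    ("women's coding society founder", False),  # rejected
]

hired, rejected = Counter(), Counter()
for cv, was_hired in history:
    (hired if was_hired else rejected).update(cv.split())

def score(cv: str) -> int:
    # Score each word by how often it appeared on hired minus
    # rejected CVs; missing words count as zero.
    return sum(hired[w] - rejected[w] for w in cv.split())

# Two CVs identical except for one gendered word diverge sharply:
print(score("captain men's chess club"))    # prints 2
print(score("captain women's chess club"))  # prints -2
```

The scorer never sees a gender label, yet it learns to downrank the word "women's" purely because of who was hired in the past, which is essentially the failure mode reported in the Amazon case.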
They are working in partnership with the European Marine Energy Centre, and the project has been funded by the Scottish Government's emerging energy technologies fund. In the business community, the Glasgow-based company Changingday is using the technology to create immersive VR experiences to enable autistic people to enjoy a new world of possibilities while helping them to cope with the real world. It is clear that Scotland is harnessing the power of AI in our education sector, in business and to reach our climate change targets, and it can be a force for good. AI has the potential to deliver great things, but can it ever be sentient and give us joy and passion and feeling? ABBA have ruled out a 2024 Eurovision reunion in person on the 50th anniversary of their win, as Sweden once again hosts the Eurovision Song Contest, but who knows, the very successful virtual ABBA Voyage performance could be recreated next year, perhaps with avatars and new songs. Now, with AI, we really do not know who we have to thank for the music. AI is inspiring, but it is also threatening, all at the same time. It is the pace, the scale, the range and the effect that we desperately need to see come under some kind of global regulation. We have to start somewhere; we should have already started, but we certainly have to start now. I thank all members for their thoughtful contributions in the debate this afternoon. I am sure that there is an awful lot for the minister and the Government to reflect on in the wide range of current examples of the application of AI and the impact of historic technological transitions over time. The word "pervasive" was used by Michelle Thomson about the scale of the challenge, and that is a description that I would strongly agree with. We welcome the fact that the Government is keen to engage, to review the positions that it is taking at the moment and to draw on expertise as widely as it can. 
It is clear from the debate that there are concerns that the scope has perhaps been too narrow in terms of definition and the way in which the Government has sought to deal with this in the past. I do not think that that is a criticism; it is a growing field, and, rightly so, a great amount of concern has been expressed in the media, which is being reflected today, about the rapid development of these technologies. We should be animated by the application and understanding of AI, both as a Parliament and as a Government in particular. I want to focus on issues around the education system. Questions about how and what we learn are key to that. At the moment, the Parliament and the Government are considering how we assess in our education system. We know that we have had an interim report from Louise Hayward that had very little to say about the application of artificial intelligence in assessment processes. I hope that her final report has more to say in that regard, but we have to wonder whether those proposals will stand up to the imminent and very real test of the application of artificial intelligence. There was an interesting exchange between Willie Rennie and Martin Whitfield, contrasting the rapid arms race of plagiarism software against the plagiarists in this process. Martin Whitfield, as he always does, spoke to the power of the teacher and the intuitive role. I have to say that he is a better teacher than I am. I recall having to sit marking hundreds of exam scripts as a university tutor. The fact that you are paid by the script probably slightly undermines the amount of scrutiny that you give to the depth of understanding of the individual students. I think that the whole system needs to look at how we incentivise and make sure that we can cope with the application of the new and rapidly improving technologies. 
I would also point to the exchange of letters between the Parliament's Education Committee and the cabinet secretary and the SQA, and bring those to the attention of the cabinet secretary. I have to say that the response from the cabinet secretary with regard to the concerns around AI as it might be applied in education was slightly less than I think the committee might have hoped for, and certainly less than I would have hoped for. That was mirrored in the SQA's response, which did not seem to engage fully with the issue and the urgency that all members across the chamber are reflecting today. I think that it probably ran counter to the ministerial intent and understanding of how we might apply those issues in reality. Michelle Thomson also used the word "deontological" about the necessity of understanding the moral underpinning of the choices that we make on those issues. There are very practical concerns, but we have to come from base principles in terms of understanding what we are seeking to achieve, and not just the consequences, which might be perverse, that come out of the other side. That speaks to common concerns about the rules that govern artificial intelligence and how we can set them collectively and internationally; we certainly cannot do it alone. I think that that came through very strongly this afternoon. Those broader concerns are reflected in other areas, such as the shape of the economy. Many members talked about the idea of what kind of economy we want to produce. I think that there are real concerns about data as a form of wealth. We all produce data, and the question of who exploits that data, and the gap between the data rich and the data poor and who has the ability to exploit it, can exacerbate and cause ever-greater problems in our society and the shape of that society. We would do well to think more in those areas. As I have touched on already, there is the issue of technological transitions. 
We know that we are going through a rapid technological transition in our energy production and the need to drive change in those areas, and that there are real human consequences of that in terms of the kind of jobs that people have, the shape of people's lives and where they can earn decent livings to support their families. I want to close by touching briefly on some perhaps less anticipated applications in the justice system, to illustrate the fact that those systems and processes are in play today in Scotland. DNA samples that are collected by the police in Scotland today are deconvoluted by black-box algorithms that are completely impenetrable and that are sold by companies. Those different algorithms come out with different answers. Therefore, there is a real challenge around the transparency issues rightly raised by Daniel Johnson and others as to how that actually works in the system. Artificial intelligence is already used for the triage of evidence: huge evidence sets that are growing ever larger as we produce different data streams that have become part of the evidence. That provides a significant challenge for the issue of disclosure between defence and prosecution and the way that information is shared. Again, many of those algorithms are black-box and impenetrable, and understanding them and having transparency are absolutely key. I recall attending and contributing to a seminar at the Royal Society in London on the application of sentencing algorithms, something that had been tried in the United States, and many judges around the room expressed very real concerns about the bias that was potentially in that system. It fell to me in that discussion to point out to the assembled judges that the only black people in the room were serving the coffee. There are inherent biases in our systems as they stand. 
Those biases are reflected not just in the systems that are produced; we have to understand that we are not contrasting AI with an ideal world, and we have to test artificial intelligence in that regard. We welcome the debate. Thank you to the cabinet secretary for bringing it to the chamber, and we look forward to further updates from the Government. Can I agree with Michael Marra about the quality of the debate? It was interesting that Pauline McNeill, Martin Whitfield and Ivan McKee said that perhaps it was good because we had not had a motion. I have to say that I agree with that. It is quite pleasant to be away from some of the party-political ding-dong that goes back and forward all the time, because that raises the tone, and this debate has been a classic example of that. I have to say that I came to this debate with very mixed feelings, and, having listened to what I think has been every contribution, each of them interesting, my mixed feelings remain. It is just like all the technological advances that we have had throughout history; the minister in his opening speech mentioned the steam engine, and we have had the telephone, television and computer, with a vast array of benefits coming from them all. In the case of AI, I thought that Fiona Hyslop made a very poignant point when she raised the case in Switzerland just last week, where a digital bridge using AI had been used to decode brain signals for a paraplegic man, who can now walk again. There are so many things in medical science that have transformational potential in patient care, as there are in the digital industries, the gaming space, and diagnostics in agriculture and fishing, as Finlay Carson said. 
Michelle Thomson made an excellent point, too, at the Finance Committee just on Tuesday, when we were taking evidence about public sector reform, that AI has huge potential for that public sector reform, which I have to say is much needed if we are going to address the huge black hole between public expenditure and tax revenues, not just now but for the foreseeable future. We have to be very careful about any resistance to AI, but I also want to reference an editorial in last Saturday's Financial Times, because it raised an important principle. It was the editor herself who wrote that nothing matters more to her than the trust of the readers in the quality of the journalism, and for quality read accuracy, fairness and transparency. Those were quite refreshing thoughts, I thought, from a senior editor. She said that generative AI is developing at breakneck speed, with profound implications for journalism, both good and bad. She ends by saying that Financial Times journalism in the new AI age will continue to be reported and written by human beings who are the best in their field and who are dedicated to analysing the world as it is, accurately and fairly. I think that that is an interesting comment, because she is making the point that the leap towards artificial intelligence is that much more challenging, because we simply do not understand it, as Willie Rennie rightly pointed out in his speech. Pam Gosal said that we have to be mindful that there will be some trepidation, particularly about the possible consequences if it is utilised by criminal or terrorist organisations, and I am sure that that is a concern for many members across the chamber. Obviously, as with all technological leaps, there is no going back. 
Once Pandora's box is open or the genie is out of the bottle, the immense opportunities that are there have to be taken, but you have to be mindful that there is an uncontrolled spiral of competition that leaves only two options: either you adapt or you are left behind. They say that you cannot halt progress, whether that is the growth of the internet and the subsequent decline of in-person services and retail, the smartphone that has become an essential technological companion to us all over recent years or, so we are told, even the removal of the phones from our desks here in the Parliament in favour of the WebEx software, which is more challenging to me than an AI chatbot. Technological developments always cause irreversible change, and it is how you harness that change that really matters. I think that a very similar case to the growth of AI was the advent of streaming platforms for music at the turn of the century. Not only did that totally revolutionise the entire industry and how artists could generate their income, but it caused numerous legal challenges and ethical issues. We have spoken about that a lot this afternoon, and several members have highlighted just what those ethical issues mean. I mentioned at the start of my summing up that I have mixed feelings, and I do, because I have been thinking a lot about how this affects education, just as Michael Marra has been doing. During my teaching career, I was always very interested in how we use knowledge, not just in the knowledge itself. Education should always be about developing enquiring minds and building resilience. However, if something starts to do the thinking for you, it undermines and potentially removes the process of that inquiry. I think that there is a danger that it can make a student, or maybe a teacher as well, lazy. I cannot deny that I would have liked the idea of an AI chatbot when I was at school, perhaps helping with a troublesome essay or a differential calculus solution or whatever. 
However, I do not think that it will be long before problems occur, especially as AI has sometimes been found to fail. I do worry about the... Have I time, Presiding Officer? You have seven minutes, Ms Smith. I will be very quick. I absolutely agree with what Liz Smith is saying, but perhaps I would qualify it even further and say that the processes that one goes through in education in order to be able to apply judgment in decision making would, I fear, be lost because, as she points out, it is about much more than knowledge. I wonder whether she agrees with that. Liz Smith. Yes, I very much do. I think that that is a very good point that Michelle Thomson has made, because there is a real danger that if, as I say, somebody does the thinking for you, that takes away a lot of the judgment process that we have normally been used to. I think that that is a whole different ballgame, especially in education, and I fully understand the concerns of colleges and universities, and Pam Gosal referred to this in her speech, about the implications of that. I think that that is a very valid point. I want to finish on the issue of ethics, because that is an incredibly important aspect of all of this. We need to have control of that, and that is going to be very difficult because of the fact that we do not understand the journey on which we are embarking. I think that there has to be not only proper legislative regulation but an absolute necessity for both government and private companies to continue to adhere to ethical standards and uphold trust. I very much welcome what the minister said about a four-nations approach to this, because I do not think that we are going to get anywhere if we do not have that. I will finish on the fact that this is a very interesting area. We absolutely have to take it seriously, because it is the new world. We have to get to grips with it, but I think that we are going to be very significantly challenged. 
Thank you to all the members across the chamber for their often very fascinating and thoughtful contributions to a debate and a subject that, of course, is about the future of our country and our planet, and is utterly transformational. I listened carefully to many of the views and, as Michael Marra said, there is a lot for the Government in particular to reflect on, because so many good points were made. We certainly will do that in the days and months ahead. I was also pleased that Daniel Johnson admitted to using ChatGPT to help write his famous speech. We all thought that it was unexpectedly good, and it was good of him to explain the reason why. I am jesting, of course, because this is a debate of consensus. I have been asking myself why it is that you drive by a lawn with a robotic mower on it, think that that is amazing, and drive on; or you pick up the newspaper and read about a driverless bus on the Forth Road Bridge, think to yourself that that is amazing, and then turn over the page of the newspaper and move on. But ChatGPT has sparked a global debate, and everyone is speaking about it. So what is the reason for that? In my opinion, the reason is that it is accessible, and millions of people can access it. As a species, as human beings, we are reflecting on what it means for us, because it speaks to us and communicates with us as a human being would do, and that makes us reflect as a species. It is quite incredible, and it is also quite ironic that, while we are debating today potential scenarios facing the planet and our societies in the decades ahead, we accept that ChatGPT and AI today are not replacing humans and are not exceeding human capability. In one sense, it has one up on us, because we are all sitting here thinking that we are not quite sure how to respond to AI. Willie Rennie made an important point when he said that, as politicians and as parliaments, we have to show humility. 
We have to do that, and we have to act thoughtfully. We have to continue to debate and listen, in and outwith this chamber. The Government has an essential role to play in representing the interests of all our people, but we do not have the answers, and I think that that has been reflected in many of the contributions today. I am very grateful to the minister for giving way. Is it not the fact that it is the automation of decision making by AI that we find so challenging, which speaks to what a lot of people have already commented on about the lack of transparency as to how, or on what basis, a decision was made, and that that in itself is innately a fearful thing? Yes, and that takes us on to the debate around trustworthy AI, ethical AI and so on. I know that Michelle Thomson and other colleagues mentioned Scotland's Futures Forum's recently published toolkit that looks at those issues, which I thought was very valuable. It got me thinking about a lot of the issues, and flagged up some issues that the Government and the public sector in particular should be thinking about as we look at how to operate AI and use it effectively in our country. However, what we are experiencing just now, in parliaments across the world, is this balance between excitement and fear. On the one hand, we are excited because we can see the potential for AI to improve our world, improve our quality of life and improve the Scottish economy. We can see how the knowledge revolution can be used to improve education. Likewise, though, we have some fears, because we can see some threats and risks. The singularity, which, as was said before, is the word used to describe machine learning meaning that the machine can think for itself, does not need human intervention and can develop its own intelligence, is clearly something that we have to think deeply about as a human species. Then we think of the impact on jobs. AI can create jobs, but it can remove jobs. 
We can think of the impact on security and cyber security, and of countries, and other bad actors on the planet, getting access to AI and using it for nefarious purposes. We know that that is deadly serious, and others mentioned the arms race across the world at the moment over who can get there first and use these new technologies first. We do not want the wrong people to get there first, because that could have all kinds of ramifications for the world. I appreciate the minister giving way. I have touched on this before: data is an essential fuel that drives AI. Without data, AI does not function. Does the minister believe that the current data policies within the Scottish Government are fit for purpose for the future, to maximise the advantages that AI can bring? Does that play a part when, for example, the Government is looking to develop a £92 million rural payments system? Does that form part of your decision making? Clearly, we have to think about those issues in terms of how we manage and access data in this country. However, because of the nature of what we are debating today, and because we are not quite sure what the future holds, it is quite difficult to answer that question. We have to constantly evolve and adapt as we learn about the consequences and potential of AI moving forward. I think that that is really important. Willie Rennie mentioned the importance for politicians and parliaments of having good advice. That is why I am pleased that we have the AI Alliance in Scotland. It is chaired by the very talented Catriona Campbell, who is an expert in human-computer interaction and a successful entrepreneur, and who has a number of incredible jobs, not just in Scotland but elsewhere in the UK. She is the new chair of the AI Alliance, working with colleagues. As I said in my opening remarks, we are asking them to review where Scotland is with AI in terms of the potential for our economy and how we manage it going forward, to make sure that we manage the risks at the same time. 
I have to give a wee plug to the book that she published last year, because I met her yesterday at The Data Lab in Edinburgh. It is called "AI by Design: A Plan for Living with Artificial Intelligence". It is written by a Scot and is well worth a read. I did my best to get through it last night, after she gave me a copy in preparation for this debate. It goes through all the various debates and opportunities facing Scotland and, indeed, the wider debate across the globe. On jobs, that is a big feature of this debate. Clare Adamson and others mentioned that, in the industrial revolution, we had people who were fearful of losing their jobs, but old jobs were lost and new jobs were created. That is the story of history. Of course, the Luddites were mentioned, who were worried about the impact of textile machinery on their livelihoods and so on and so forth, but we have to make sure that people are equipped for AI in their current jobs in Scotland, where that is possible, and we have to make sure that, as a country, we have the skills to create the new AI jobs and the new employment opportunities in this country at the same time. The minister makes a very good point about being prepared for this. Part of the job of Government in this is to ensure that we have those skills. We have raised time and again the issue of the declining number of young people taking STEM subjects in secondary school. Reversing that trend surely has to be an absolute priority for this Government if we are going to be able to cope with this situation. Again, that is an important point, and it is something that the Government is addressing and that Skills Development Scotland is addressing. I want to mention Ivan McKee in that context as well, because he mentioned computing science as a concern of his, as it is for others in the chamber. 
Also, Mark Logan, our chief entrepreneur, mentioned in a recent meeting that he wants to see more support for computer science teachers, so that we can meet the needs of the future Scottish economy. Indeed, we have shortages at the moment. That is important, and it is something that we have to look at more seriously. I am up for that, and my colleagues in the Government are up for it as well. The computing science profession is working together to try to address that in our schools at the same time. On the subject of Ivan McKee, I want to pay tribute to him, because we have many of the building blocks in Scotland to make sure that, as a nation, we are ahead of the game and one of the leaders in the world in exploiting AI for the benefit of society, jobs and economic growth in this country. Many building blocks have been put in place. I was not responsible for them all; Ivan McKee has played a role over the past few years, and I want to pay tribute to him. Yesterday, I was at The Data Lab here in Edinburgh, at the Bayes Centre, and I know that Brian Hills, the chief executive officer, is in the gallery today. Even though I had been before, I was again amazed at everything that I was learning about what is happening on our doorstep, here in Edinburgh and in other cities and communities across Scotland: the research and the developments taking place. We should be proud of the fact that we are making the most of AI to improve our society. We are certainly in the lead. There is not much time left, but I want to mention the fact that AI has the potential to transform our lives, as it is doing already, and much more so in the future; to transform our economy; and to deliver enormous benefits. I want to give a couple of examples of what is happening in the NHS, or maybe just one example, since I am running out of time. 
NHS Greater Glasgow and Clyde has a project investigating the use of AI to detect osteoporosis early. As another quick example, at the start of May, the Beatson West of Scotland Cancer Centre started using an AI-enhanced linear accelerator to conduct better targeted, personalised and adaptive radiotherapy. There are many other examples of AI being used in hospitals to detect cancer and treat it early, in all kinds of ways. AI has a lot of potential to improve our lives and to support our economy and economic growth, but it is really important that we get the ethics right, that it is trustworthy, and that we manage it as a Parliament and as a country going forward. We must make sure that we make the right decisions and that we work on the global stage, with the UK Government, our colleagues in Europe and the international institutions, to get this right in the interests of humanity. That concludes the debate on trustworthy, ethical and inclusive artificial intelligence: seizing opportunities for Scotland's people and businesses. It is time to move on to the next item of business. There are no questions to be put as a result of today's business, and I close this meeting.