We're absolutely delighted to welcome Dr. Patricia Scanlon, Ireland's first Artificial Intelligence Ambassador, appointed by the Department of Enterprise, Trade and Employment in 2022. She was formerly the founder and executive chair of SoapBox Labs, a company that develops proprietary voice recognition technology for children, and she has worked for 20 years in the field of artificial intelligence, including, notably, with Bell Labs and with IBM. The way we're going to do the event tonight is that Patricia will give an address of roughly 15 minutes on the subject of artificial intelligence, and then we'll go to discussion and Q&A with the audience. So it's my great pleasure to welcome Patricia and hand over to her.

Thank you so much. Great, thank you, and thanks for coming out in the freezing cold. I hope you're all warmed up after a drink before we start. Look, it's very good to come out and talk to you, and I appreciate it, because it's really important that as many people as possible get to grapple with the idea of artificial intelligence. It's going to be so pervasive in our lives; it's not going anywhere, and it's only going to become more influential. So the more people who understand it the better, and, because I do a lot of work around policymaking as well, it's really important that the people making policy understand as much as they can about the technology. What we don't want is another crypto situation, where people don't really understand a technology that's exploding, and AI has the potential to be a little like that, because there's such depth and nuance to it. The thing about AI is that even six or twelve months ago a lot of people thought it was a bit of a fad: it was just going to be a productivity tool, you'd stop noticing it, it would fade into the background and drop out of our lives.
Other people likened it to the iPhone moment, the moment the internet went mobile, and others likened it to the internet itself. That gives you a sense of the gravitas of what AI is. AI is arguably the fourth industrial revolution, building on the previous industrial revolutions of the last decades and centuries. It's so transformative, and we're only at the tip of the iceberg. It's technology that's been in development for decades: the theory arguably goes back to the 50s and 60s, if not before, depending on who you talk to, and the first real implementations were in the 80s and around that time. Papers have been written about it; it's been building for decades and decades. The reason we're talking about it today is that about 10 years ago we first started dealing with GPUs. First we had good internet connectivity, then we had cloud computing at our fingertips, on demand. That was absolutely transformative, because prior to that you had to have clusters of servers, and that was the purview of very big technology companies or universities that invested deeply. Suddenly we all had access to cloud computing: that's the compute power. But it was really the GPUs. You might know CPUs, the central processing units within computers; GPUs were originally graphics processing units, and they allowed very large matrix multiplications to be done, which in turn allowed huge amounts of data to be crunched. When I started my PhD back in 2000 we didn't have the ability to crunch data like that; it just flat out wouldn't have worked. So with the advent of GPUs, fast connectivity and all that compute power on demand, suddenly the theory from the 80s came into play in neural networks, and then deep neural networks.
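The matrix multiplications mentioned above are the heart of it: a neural-network layer is essentially a matrix multiply, and that is the one operation GPUs can run massively in parallel. A deliberately tiny pure-Python sketch of the operation itself (the function is ours, purely for illustration; real systems hand this to optimised GPU kernels over millions of weights):

```python
# Toy illustration: the core operation a GPU accelerates is the
# matrix multiply at the heart of every neural-network layer.
# A GPU runs these multiply-adds in parallel; here we just loop.

def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p) with plain loops."""
    m, n, p = len(a), len(b), len(b[0])
    result = [[0.0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                result[i][j] += a[i][k] * b[k][j]
    return result

# A "layer": weight matrix W applied to an input vector x
# (written as a one-column matrix).
W = [[0.5, -1.0],
     [2.0,  1.0]]
x = [[3.0],
     [1.0]]
print(matmul(W, x))  # [[0.5], [7.0]]
```

The point of the sketch is only the shape of the work: stacks of independent multiply-adds, which is exactly what graphics hardware was already built to do.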
What you saw over the last 10 years is that it started getting things right most of the time, and that's the big thing. Think back to speech recognition: it was just frustrating. Microsoft had it in the 80s and 90s, but it didn't work very well, so nobody used it, because it made too many errors. Then gradually, over time, it got to the point where it makes very few errors and does excellent, sometimes surprisingly good, transcription. That's the power of deep neural networks and of the technology we have today versus what we had even eight or ten years ago. Then you have the advent of the LLMs and generative AI, but it's important to distinguish them. Expert systems are things like speech recognition, your Netflix recommender, your predictive text, your image recognition. They're trained on expert data and they do one very specific task. You can't ask a speech recognition system to suddenly start doing image recognition; it won't, because it hasn't been trained that way. Expert systems are sometimes called narrow AI versus strong AI, or weak AI versus strong AI. That's a really poor description, because what an expert system or narrow AI does is a really narrow range of tasks, but it does them so deeply that it can do better than humans. That's really important to think about: it can crunch data in more dimensions than humans can, analyse longitudinal data our brains just can't comprehend, and pick out nuances in data that our brains would never see. So "weak AI" or "narrow AI" really isn't very descriptive, but that's the AI we were talking about up until around December or January of last year, or probably September, really. Then it started being about generative AI. That's where we saw the image generation.
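The improvement in speech recognition described above has a standard metric behind it: word error rate (WER), the number of word substitutions, insertions and deletions needed to turn the recogniser's transcript into the reference, divided by the length of the reference. A minimal sketch (the function name is ours, for illustration):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference words,
    computed as a Levenshtein edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of six: WER of about 0.167
print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
```

The "frustrating" systems of the 90s sat at word error rates high enough that correcting the output cost more than typing; modern deep-network recognisers drove that number low enough for transcription to become genuinely useful.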
The first thing that really caught people's attention was the pope in the puffer coat; I think a lot of people remember that one. I've literally been in this space for years and I went: what? On the heels of that, two or three months later, along came ChatGPT. At first it was GPT-3, which was okay; it got a lot of things wrong and people laughed about the hallucinations. Then literally three months later, GPT-4, bang, it was absolutely amazing. And that generative AI is something really important to realise. I've been in this space nearly 25 years now, and we had the concept of generative AI, we knew it was possible, but most of us thought it was 10 or 20 years in the future. It took everybody by surprise, including the founders of OpenAI, including the godfathers of AI. All of us who'd been in the space for decades were blown away by how good it was. And the thing to note about the GPTs, the type of models sitting behind ChatGPT, is that they're only a couple of hundred lines of code. It was really the data, the ability to scrape the internet, and the ability to crunch the numbers. Nobody really knew what was going to happen, and one of the challenges around policy and regulation is that even the people who build these systems don't really know how a lot of it works. It just does. It's very much a black-box problem. So that's generative AI: it's creating something new from the underlying data the models were trained on. Those GPT models and their equivalents are called foundation models or frontier models, and you'll hear that the conversation has now shifted from generative AI to frontier models and foundation models. The UK safety summit was largely about frontier models. They're the models that sit behind ChatGPT: you can pay for ChatGPT, or you can pay for a licence for the GPT models themselves.
You could then take the GPT model and fine-tune it, and now it becomes something very bespoke to you, your data and the knowledge you tuned it to. It's a little more like the expert systems, which are still in existence and still very powerful; you can almost get back there, but these models can still do other things. So there are three different concepts in AI that everybody needs to understand, and they're all very different from AGI, artificial general intelligence. That's the scary one. That's the one where the AI is more intelligent than us: it learns by itself, it can strategise, it can take that intelligence and start expanding beyond what it's been told to do. The idea is that we need to put in guardrails, we need it to align with human values, and there's a lot of controversy about what a human value even is. But the key thing about AGI is that it hasn't been invented yet. There's a lot of controversy about this. A lot of people, including myself, don't think the GPT models are going to be capable of it, but the problem, and the reason everybody is jumping up and down trying to get attention on this, is that we didn't think the GPT models were going to be able to do as much as they did. That's put everything up in the air about what's possible, and that's the root of the fear, the anticipation and the caution we're urging. But remember: AGI is separate, and it hasn't been invented yet. If you want a silver lining for all the fear out there, it's that ChatGPT and Stability AI and the rest have made us aware of what's possible, possibly a lot sooner than we thought. That's a good thing, because it means we can actually regulate. The EU AI Act, as drafted last year, didn't contain the terms generative AI, foundation models or frontier models at all.
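Fine-tuning, as described above, is just continued training: you start from a model's pre-trained weights and nudge them with further gradient steps on your own data. A deliberately tiny illustration with a one-parameter model (everything here, the function, the data, the learning rate, is invented for illustration; real fine-tuning does the same thing over billions of weights):

```python
# Toy fine-tuning: start from a "pretrained" weight and keep training
# on a small bespoke dataset, rather than starting from scratch.

def train(weight, data, lr=0.1, steps=200):
    """Gradient descent on mean squared error for the model y = weight * x."""
    for _ in range(steps):
        grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
        weight -= lr * grad
    return weight

# "Pre-training": learn y = 2x from generic data, starting from 0.
pretrained = train(0.0, [(1.0, 2.0), (2.0, 4.0)])
# "Fine-tuning": adapt the pretrained weight to bespoke data where y = 3x.
finetuned = train(pretrained, [(1.0, 3.0), (2.0, 6.0)])
print(round(pretrained, 2), round(finetuned, 2))  # 2.0 3.0
```

The design point is that the second call to `train` does not start from zero: it starts from what the model already knows, which is why a fine-tuned foundation model keeps its general abilities while becoming bespoke to your data.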
Up until around Q1 of this year, that terminology wasn't even in the EU AI Act, and that's now all changed; we've had the time to do that because of the wake-up call. So there's a lot of good and bad here, challenges and opportunities, but we're now able to address what's coming down the line at us. It's been a wake-up call that this is extremely powerful technology, it's going to change society and economies as we know them, and we have the time and the knowledge now to deal with what may come quicker than we thought. But it's really important to distinguish all those different types of AI, because the media often conflates them, and when they're conflated it's confusing and creates a lot of fear. We have very real risks with the AI that's right in front of us now: the expert systems, the generative AI and the foundation models. That kind of AI poses risks around bias, and around IP ownership, infringement and copyright. The New York Times is taking OpenAI to court as we speak over reproductions that are going to affect its business model. It will be really interesting how that plays out, because it's going to speak to a lot about how these models are built on other people's data. Who owns the IP that's generated? Is it the person who types the prompt? The person whose data fed the models? The person who owns the model? Who owns the output? That's really unclear at the moment. There are also a lot of concerns around bias, and it's important to realise that bias is not just about a few outliers: bias can affect a very large percentage of the population. People say, oh, but we can't stifle innovation, we have to be really careful, but it's important to realise that after the UK safety summit, after the Biden administration's guidelines and then executive
orders, and the EU AI Act, we're all broadly in agreement about what needs to happen now. A lot of people say the EU is killing off innovation because of the EU AI Act, but if you actually look at Biden's executive order, which is very long, about 78 pages, it includes a lot about what the federal government is allowed to do, and in some ways it goes further than the EU AI Act, in others less far, but broadly they're trying to achieve the same thing. It will be up to Congress to actually legislate for private companies, but it's almost as if the framework is there now, and there isn't a lot of disagreement. The major disagreement we're seeing now is about how, or whether, we should regulate the frontier and foundation models: how does that affect open source, who regulates, how do we regulate, to what extent do we regulate? And we're not the only ones in the EU grappling with this; it's a global issue, even if some people would have you think the EU has gone too far and is sacrificing progress because it started earlier. So that's about 15 minutes, and there's actually a lot in that already, but I'm very happy to take questions. Before we start, I just want to say something. It's really important that we all start understanding AI, and sometimes, when a conversation has been going on for a year or longer, people don't want to ask questions because they feel they should know what things mean by this stage. So let me say: there are no stupid questions here. Everybody learns when you ask a question, and it's really helpful for getting us all on the same page, particularly for the work you're doing now or will be doing in the future, that everybody understands AI. So please fire away; I'm very open to answering any questions, and I'll tell you if I can't. Thank you.

Are you happy to stand at the
podium? Maybe just one from myself before we kick it off. You talked a little bit there about the issues around regulating AI and the different types of AI, but there's a broader issue around digital technologies generally. We saw it with social media and the Digital Markets Act and the Digital Services Act: quite often the technological advances are very far ahead before regulation has time to react. I wonder, do you have any views on how we can improve that, and on its importance particularly for AI, given how quickly the technologies are evolving?

Yeah. The idea behind the regulation is quite simple in some ways: in the first instance we're regulating the use cases. In some ways it doesn't matter that we've had a lot of different approaches to AI that got gradually better over time; the principles stay the same. You have to check for bias, you have to check for safety, you have to be transparent, you have to be explainable. Those will persist regardless of how good the technology gets or how it's implemented. It's more about keeping humans at the centre: if the decision of the AI is going to affect somebody's life, their employment, their finances, their health, their safety, that will always be the same. Arguably there are going to be a lot of controls around employment, and we actually don't have a huge amount of regulation about the training a human goes through, like anti-bias training, so arguably maybe we should have been doing for humans what we're now going to do with AI: if a human is making a decision, they should have some training around de-biasing that decision. In the fields of health and medicine and
finance, there's a huge amount of regulation already, so this is just going to be more regulation. A lot of the high-risk use cases that are going to be more regulated, and have to go through compliance, are actually in fields that are already quite heavily regulated anyway. So if you keep the regulation focused on the outcome, on safety, then technological advances won't change it that much, because what these systems are actually trying to do is replicate humans, or do better than humans; they're not trying to do something fundamentally different.

Yeah. And on what you're saying about human-centred AI and biases, you obviously have a lot of experience from your company. I was reading a good bit about it before the event, around voice recognition technology, particularly for children in schools. A lot of the discussion on AI can be very dystopian, but that's a very positive example of AI changing people's lives.

Yeah. Our company set out to develop speech recognition technology for children, to do assessments around reading and language learning, and we've done it for kids from age two or three plus, basically from the point where a human, not just the parent, can understand them. It's been helping kids learn to read, learn their sounds, and do automated assessments. A big focus of the company was data privacy, but also the equity side of it. When I founded the company back in 2013 I was in New York, and a friend of mine had a kid in a school there. I remember thinking that only about 30 per cent of the accents were New York accents; there were accents from Britain, India, Ireland, Canada, Mexico, California, and the accents were all different. I remember thinking: if we want this brought into schools, there's zero point in doing it if it
doesn't work for everybody equally, because teachers won't come on board, parents won't come on board, kids will get frustrated and demotivated, or they'll just get a bad education if we don't do it right. So we focused very early on figuring out how to collect the most diverse data, and how to make sure the data that goes into the models will work for everybody: all accents and dialects, all socioeconomic backgrounds. Most people honestly didn't believe that was possible, and it is, if you do it with thoughtful intent. One of the reasons I took the AI Ambassador role is that the strategy of the government, and of the EU, is for ethical AI, and we had actually built a commercially successful company that does ethical AI. I felt it was a good opportunity to bring in a real-world example of a company that has done it and done well, to help make the point, because some people really do believe the two are mutually exclusive: that you can't possibly be an ethical AI company and be profitable.
It's interesting: I was listening to The Economist, who have a new podcast called Boss Class, actually, which was very good. The co-founder of LinkedIn was on it, and he's a big proponent of AI. He was saying it could actually get to the stage where, if companies aren't bringing AI technologies into their processes, they risk lagging behind their competitors, particularly in the next number of years as AI becomes more mainstream. Would you share that view?

Yeah, I mean, that's both the opportunity and the problem. A lot of people are acting out of fear, and this is why it's really important to understand the different types of AI, and what's risky and what's not. I've been talking to a lot of boards where people say, oh, it's way too risky, we're not going to do anything, and you go: right, not the right approach. If you're not doing it, that's fine, but your competitors will, and you will lose out, because they'll be more innovative, and they'll reduce their prices because they've found efficiencies. You don't have to, but be very aware that somebody else who is using AI could innovate quicker than you and eat your lunch; you thought you had a lead in the market, and you could lose that lead. And we rarely compete just with people in Ireland; if you're selling globally, you're competing in a global market, and people will be selling into Ireland over the internet. It's so borderless when it comes to digital technology. So you have to be keenly aware, and I think every board in the country should be requiring their executives to be
aware of what's going on, and to be able to answer what AI could or couldn't do, or what a competitor could or couldn't do with AI.

Yeah, I know. One of our board members, who shall remain nameless, was actually chatting to me about this, and he was telling us we should be paying more attention to ChatGPT, because obviously, as a research institute, we're producing content that could potentially be produced by it. Maybe I'll be out of a job.

It just gets better, you know. I like to think of it as moving the starting line. The starting line was here; you're doing great work, whether it's research or some kind of innovation, and the starting line has just moved. You can choose to stay where you are, but you could use ChatGPT to help start your research; you're now starting from here, and you still have to do the work. I don't think anybody should believe everything that comes out of it, or just use it as is, but it can help prompt and give ideas, or find things that might have taken days of research, with caution, because of the hallucinations and the other problems with it as well.

Absolutely, absolutely. Okay, so questions, comments, let me throw it out. I see Paula's hand first, so we'll go to Paula.

Thanks. Maybe to park the risk idea for a second: what do you think Ireland can be doing at a policy level, and specifically wearing an Ireland Inc. type hat, to capitalise on the opportunities? Because, as you've kind of covered, there's a lot of scaremongering out there and not a lot of talk about what we can do to really seize the day.

That's true, good point. I think one of the problems we have in Ireland, and it's actually an EU problem, is that we're very good at funding university research in AI and all these wonderful technologies, and we're very good at FDI
investment, but we're not very good at investing in deep tech coming out of the university system; we're very risk-averse in our investment strategy. It's an EU-wide problem, and it's one of the reasons they came up with Horizon 2020 and the SME Instrument funding, EIC funding as they call it now, to try to address it. If you look at the funding that's gone into AI in the US versus the funding that's gone into AI in Europe, everybody asks, why couldn't we do an OpenAI? I'll give you a litany of reasons why we can't, and I really think we need to address it if we want to be serious. We talk a lot about our university system, our education, our tech, but we don't talk enough about why we're not investing in our own indigenous companies. There's probably amazing research coming out of our universities; we fund it really well, but we're not successful at getting it out and getting it well funded, and we are leagues behind the US because of that. France has been trying to deal with that with Mistral AI: they got a hundred million in funding after going for only about three months, but it was meant to be the French equivalent of OpenAI, so they are throwing money at it. There are some countries sitting up and saying: we need to be in this. So sometimes we lament that Ireland doesn't have enough unicorns, and I'm sitting here going, yeah, I can tell you why we don't have those companies.

I suppose one thing we're interested in here as well is digital skills, and equipping our graduates with the correct digital skills, not just for the private sector but for policymakers in the public sector as well. Where do you think we are, in terms of how the education system is doing at equipping our students and graduates with the necessary digital skills,
particularly for AI?

There's a lot of work going into it, but it's not happening quickly enough, and it never does when you're trying to change things like this. There's a great course in Limerick called Immersive Software Engineering. It's more equivalent to one of the Canadian models: a very condensed masters, but all about the practical, very little sitting in a class looking at the whiteboard, and doing internships every year. Waterloo, I think, is the university in Canada that's really well known for producing excellent students and interns, so it's built on that model. Really innovative; I really like what those guys are doing there. One of the opportunities I think we have is to just start doing an AI 101 in every course, giving people the opportunity, no matter what course they're doing, to take it. I think everybody having a basic understanding of AI would help, because it's going to influence every industry and most jobs, so people understanding it at some level will be very helpful.

Absolutely. Any more questions? Mark. And maybe just identify your name and affiliation as well; I should have at the beginning. Paula is with TMA.

Hi, thanks so much for the very insightful talk. My name is Mark, I'm a lecturer at the Institute of Public Administration, and my question relates to education as well. Putting aside the ethical debates about the use of AI in research and where it should or shouldn't exist, there are instances where material is generated by AI but actually used for plagiarism. There are a lot of detection technologies that exist now, and I'm interested, firstly, in how these technologies work, but also in your assessment of whether they're up to scratch at the moment,
because I'm thinking, beyond just education, there's a lot of discourse about falsified political information and being able to detect what is AI and what isn't.

Yeah. There's a lot of talk about detection technology, and I understand there are a lot of problems with it. I think there were students with English as a second language being flagged as having generated their work with ChatGPT and the like, which is a problem. It's one of those things where, at the pace of innovation, the counter-technology lags. I think it should be government policy in every country to fund the counter-technologies, because I don't think they're going to be as profitable as the technologies themselves, and therefore they're less likely to get investment. So I think it should be for the social good that we fund these things, because they won't make as much money, but they need to be there and they need to keep pace. I heard a great example: some artists, before they upload their art onto the internet, now apply some kind of tool that really messes with the scrapers, to the point where, once people started doing it, the model couldn't create a good picture of a dog any more; the dog came out all distorted. I think that's a good example of a counter-technology, and a lot of the university systems will probably end up doing a lot of that research. I don't know of any detector that's completely up to scratch for plagiarism right now, partly because, for the most part, it isn't actually plagiarism if you think about it. I know the New York Times has really good examples of passages that were completely lifted from its articles, but the idea of generative AI is that it's creating something new. I could say to it: write this paragraph in the style of this, or
write it in a more formal way, or write it in a more casual way. So it's getting hard: you can go to town on your description of how it should articulate something, so I find it hard to see how you'd have a technology that could robustly tell you something was made by AI, given the infinite number of ways the AI can generate it. I think we're going to have to be a bit more inventive about how we assess progress. It's almost like the calculator: it's here now, it's not going away. So should we have things like, for a PhD viva, you've got to stand up for your defence and answer questions on the fly? Maybe we move to that model. There are other technologies that lock down the screen so somebody can't copy and paste; you could use your phone, but you're not going to be able to type it in. And I know there are people working on technologies for telling whether you're using another device in the room, and so on. In some ways those are very short-term solutions. I think we need the longer-term solutions, to, God forbid, reinvent the education system. That's not an easy task, but in some people's opinion it's long overdue. We now recognise this exists, it's in our lives, it's not going away; therefore, how do we educate, and how do we assess?

Very good. This gentleman here.

Hi, thank you. Actually, it's a comment. Thank you for your interesting talk. My comment is about the education aspect and the importance of why we should teach AI. I'm Nikhil, I'm at Dublin City University, a final-year PhD candidate, and this semester I'll actually be teaching two modules at Trinity: one is Technology and International Politics, and the other is Politics of AI. It's coming from the
same inspiration that you've introduced: it's very necessary to have more clarity on what AI does, the distinction between different types of AI, and the regulatory aspects, who does what and why. Thank you.

Very good. Josh, I see you over there.

Yeah, Josh, with Vulcan Consulting. I was particularly interested in your observation that the EU and the US are more or less simpatico in terms of the thrust of AI regulation, and I was wondering whether you had any insights on moves towards globally harmonised rules on AI regulation, and whether there's potentially any risk of the US and Europe putting themselves at a competitive disadvantage relative to geopolitical competitors like China, particularly in terms of military applications, for instance.

Yeah. There are very specific exclusions in the EU AI Act for military and law enforcement; they are always treated differently and they will be, so I don't think the EU AI Act is necessarily going to restrict that. There are restrictions on using it for real-time monitoring with biometric systems, emotion detection, things like that, but there are always going to be exceptions, and things done in the military that we don't know about. There's a lot of work going on at the moment about the ethics of war. I actually heard a great talk about a weapon invented back in the last century, I think the latter half of it, where they were using lasers to blind opponents, very effectively blinding combatants on the field, and both sides decided not to use it. Really interesting. People have this idea that China is just going to fire ahead, let's say economically anyway, let loose and run with it. But it hasn't yet become very politicised, and a lot of people haven't yet decided whether it's good or bad for them. What if you were just to let
AI run rife I couldn't see China let an AI run rife they're trying to control the population not going to let any AI run rife like you know because they or anybody else is not we don't know what's going to happen if we do the regulation there is to put guard rails on it so that it benefits us and doesn't you know isn't detrimental when it comes to war I you know and weapons I will be of the mind that we don't know what the west is doing or not doing I don't think they're going to advertise what they are aren't researching on that whether it's at a disadvantage or not so you know let's say you know every other statistic on how much the U.S. has spent on the military does not tell me that they're going to hold back on if it's going to give the budget we don't know but I think economically people have I hear this over and over again that we shouldn't regulate because China Russia because China Russia and I'm sitting there going I don't know about that like I mean one of the biggest fears of most governments that will destabilize economies and then societies like you create civil unrest you know that doesn't exactly move anybody in government you know and I think a lot of governments right now if you think about it's become highly the discussions around regulation technology have become highly polarized they haven't come politicized yet you know there isn't necessary you couldn't think that the you know imagine the U.S. 
Republicans or Democrats have fallen down on one side or the other they haven't because nobody's really sure if it's going to help them or not you know you know what it will do so I think I think at the moment the main people you hear calling for no regulation are people in tech because you know it's expensive to do compliance and people would rather not a lot of industries have never had any compliance before so they'd rather not get into it you know I think the tights moving now and it's more or less accepted the most applications of AI will be regular only the high risk ones actually a good thing to notice a lot of medium to low risk stuff which is probably the majority but will not be regulated it's actually only the high risk stuff that largely has already been regulated will continue the difference now is the regulation of these frontier and foundation models and that's where a lot of lobbying and discussion and you know I think you heard like France and Italy and it wasn't Germany I think a couple of the countries including France anyway push back very strongly on the idea of regulating the foundation model largely because of minstrel AI and then and those types of companies that want to compete with the US so you know that's all to play for still at the moment okay very good I see Kira over here and then I see the lady at the back as well okay Kira Campbell thank you very much for being here following on from the example of GDPR which was intended to be a world's trendsetter and in many ways was but did contribute to the fragmentation of the internet in that many websites who do not adhere to European data privacy either do not run in Europe or have different versions will the impact of differing AI regimes or governance contribute to further fragmentation of the internet thank you yeah very good question I think over time I think in the beginning there was a lot of that GDPR I think it seems to have resolved as I don't come across very many websites that 
don't allow you in now; they just have that annoying pop-up you have to agree or disagree with. One of the big differences with the AI models is that they cost billions to run. Billions. They're making a lot of money, but not as much as it's costing them, so are they really going to refuse to commercialise in the EU at all? And then the question for companies is, like I was saying earlier: the US federal government has an executive order now, and there's a lot of red teaming and testing and compliance required to be used by the federal government, which is huge; that's a huge market for a lot of those tech companies, and arguably Congress could follow. We'll see what happens in the next year. The UK have really flagged a lot of this as well. It's all up in the air, so if you were a tech company betting on what's going to happen, would you bet on total free rein and no regulation, or would you start investing in some of the guardrails you'll likely need in the future? To me it seems unlikely, the way the conversations have gone, that there will be no regulation at all, and I think it's a very convenient argument for some people to tell the EU "you're going to lose business, you're going to stifle innovation". The biggest challenge for the EU, and anybody else who regulates, is to invest in, support, and resource the regulators, because one of the biggest problems with GDPR was, I mean, how many big cases actually came out of it? It was so slow, a bit fumbling; there wasn't a huge amount of enforcement. So one of the biggest challenges will be finding the expertise to staff the regulation, the compliance, and the monitoring, and without significant investment that will be a problem. So I think it's all well and good to write the regulation, but if you don't support the regulators and the implementation, that's where you create bigger problems. I think we're only halfway through this, and until I see how it's going to be regulated I'll reserve judgment on how good or bad it will be. We could do it really well; it's right there in front of us. I really hope we don't under-resource this across the EU, because you can have all the best intentions and still mess it up, and then you're almost handing a reason over to the people who don't want to be regulated, because they'll go, "see what happened? why don't you just let us self-regulate?" And then it's social media all over again; we still haven't got any decent regulation there, and we know the harms it does.

Just from your own point of view as a business person, and obviously there may not have been much regulation for AI when you were working with your company: what is it that you're looking for in terms of regulation? Would you prefer no regulation, or if there is regulation, what are the things you want to see?

Yeah, I'd like us to move quickly and help companies figure out what the regulation is going to be. There was regulation, but to me it was kind of common sense. Back in 2013 it was like the Wild West when it came to data and AI, but we were building technology for kids, and I have two kids, so I kind of felt it: number one, the data privacy aspects, and number two, the equity. To be honest it was common sense to me that if we were going to build a product that would be trusted and used by teachers, we had to be on the right side of the data privacy issue and on the right side of the equity discussion as well, because it was part of our USP in the market. We were competing with big tech, but we were the only company in the world who had actually received certification in AI equity, and that was front and centre of our marketing and our advertising. Our clients loved to use our logo, because it meant it was a credible piece of technology; it was really good for us in the market. And I kind of scratched my head going, well, if you're building AI and licensing it to somebody else who integrates it into their product, much like we did, why would you not want it to work for everybody? Because, to be frank, a lot of AI, particularly speech recognition, was primarily working for US white males of a certain age, and that was based on the fact that they were predominantly the early users of the internet in the US. They just happened to be the first movers, and that ended up creating this problem where, I think, there were something like 20 to 30 percent more errors for Black speakers than white speakers, and that was all big tech. Stanford did the study and it was published by the New York Times. And I go, well, this is a product. When somebody brings out a product that works for everybody, everybody naturally gravitates to the one that's less frustrating and works better. To me that's almost common sense. I understand we all want to rush products to market for competitive advantage, but it's not always the first mover who wins; if somebody brings out a better product, people eventually gravitate towards it. Now, there is the VHS-versus-Betamax discussion, where the better technology didn't win out, but when it comes to AI and it's making errors
and it's causing scandals and you're splashed all over the New York Times once again for messing up, while another company went a little slower but maybe built it more thoughtfully, and now it works for everybody, the better product builds more trust. Whether you're talking about finance, medicine, health, education, or employment: do you want the one that makes all the errors and causes class-action lawsuits, or the one that has a reputation for being really good? I have this idea that Ireland has an opportunity to take the mantle of ethical AI if we invest in it, because rushing a crappy product to market that works for 70 or 80 percent of the population is not really the best commercial decision. I just wonder about the logic of that.

Yeah, absolutely. The lady at the back, and then we have another lady here as well.

Hi, my name is Carmen, and I work as an IT compliance officer. My question relates to the topic we talked about before, innovation: how do you think we can include rural areas, so they aren't left behind?

Include rural areas, yeah. A lot of that is one of the issues around SMEs anyway. There's a government strategy to get 90 percent of SMEs digitised by 2030, and then maybe 70 percent using AI. One of the biggest problems is the digitisation; you've got to do that first, and then educate people. The nice thing post-COVID is the decentralisation of a lot of businesses: we don't all have to be located in Dublin anymore, and building companies outside of Silicon Valley is very achievable now. There's an opportunity for people to stay where they are, build commercial entities, and provide employment in rural areas. You need the digitisation, the connectivity, the broadband, and then you slowly start educating people on how AI can help them. There's a whole education programme and a lot of work going on. There are these things called innovation hubs, funded across Europe; CeADAR is one in Dublin, and there's another one in Ireland. They're going to be funded to help SMEs, and the idea eventually is that we should be giving grants to people, helping them take this step: first digitising, then introducing AI to improve their business, so they can compete in Ireland and compete globally. Because the problem is, back to what we were saying earlier: if you don't look at how AI can help, somebody else will, and then you lose your business. So for rural areas I would look at it first from supporting the SMEs and the businesses there, but you've got to start with digitisation. And a lot of it is not just helping them get started but also helping them scale up to become large companies.

Do you see in the government strategy that sufficient support is there for that, or could we be doing more?

I think there actually is quite a lot; it's a lot better than it used to be. You see these innovation hubs all over the country now, and you don't have to be located in the city anymore; you can participate in accelerators. There are a lot of different types of business, and sometimes we gravitate to the idea that everybody needs to scale up. They don't. Some people want to run a lifestyle business and employ a few people in a town, and we need to support those people too, because they can do very good business; they don't all need to scale. But when they do want to scale, there are accelerators, and the whole idea of allowing people to participate while located in rural areas, without having to be five days a week at the accelerator, is really good, and you definitely see that happening now. We can always do more, and a lot of that, for me, is about how and who we invest in.

We're getting slightly off-topic now, I think, but the lady here had a question, and then I see Jake as well.

Hi, Lydia Dutling from AXS partnership. Firstly, thank you for speaking with us today. I just wanted to pick up on the competition point raised earlier. The EU AI Act seeks not only to regulate AI but also tries to encourage competitive development in Europe, by introducing regulatory sandboxes and also exemptions for open source. Specifically on open source and the development of these communities and collaboration: do you think that's a viable solution for raising Ireland's and the EU's competitiveness against US big tech, or do you think that providing exemptions for this specific category of AI developers could lead to more risks than benefits?
That's so hard. It's a very open discussion right now, it's really current, and to be honest, when I listen to both sides I see valid points; I'm not someone who just falls down on one side. I really believe in regulation, but handing the market to the incumbents, the five big companies who can afford to run these models, doesn't really make sense either. I think a lot is going to change in the next two years. At the moment the LLMs that are really effective cost billions to run, and I don't think that's always going to be the way; I think you're going to be able to run them very effectively for a lot cheaper, and that's going to open the market. Even the way the EU AI Act is right now, it's over a certain amount of FLOPs and processing that you get regulated, and below that you don't, and they put it that way because that's how it currently works. But back to your point earlier: technology moves really quickly, so in about a year or two you're probably going to have models as effective as the LLMs of today running under that threshold. What are you going to do about that? So I think the open-sourcing issue is interesting. I think the regulation will still be on the use cases, so you will still have to maintain evidence about bias. One of the problems, again, comes back to how it's regulated: what are the steps somebody has to go through? Do you just have to show, using a certain test dataset, that it doesn't have bias, or do you have to have evidence from the underlying foundation model? Say you're using an open-source foundation model: will you get an exemption then, because it's open source, so that you don't have to say anything about the data it was trained on, or the copyright and IP issues behind it? Are you allowed to use it in finance if you used an open-source model that's fairly closed, a black box, but it's now used in finance? I haven't yet heard the details of how that's going to work, and I think it's going to be really important when it comes to open sourcing: are you getting complete exemptions? Can you use it in surgery, if it's an open-source model suddenly being used to do something that affects the outcome of somebody's life? That's where I'm personally unclear, because these discussions are only ongoing right now, about open sourcing and what's not going to be regulated, so in a very high-risk use case, can you do that? I'm sitting here waiting to hear myself. There's a lot that still has to be understood about how the regulation is going to work in practice, and that's ongoing as we speak.

Okay, very good. I'll come to Jake, and then I'll do a last call for questions, if that's okay. If you have any questions, please put up your hand now, and I'll take them maybe with Jake's as well. So we have Jake, Jack, and then we have Brian as well.

Jake Tran from the Department of Children. Just wondering: a lot of the conversation has been about what the government can do in terms of helping other people use AI, or regulating AI for other people. But in terms of policymaking on AI itself, do you think governments, in particular our government, are doing enough in thinking about how they can use AI themselves in their own processes, and putting the necessary resources and skills into that?

Maybe I'll just take Jack and Brian as well, if that's okay, just in the interest of time.

Jack from the Department of Finance. My question should be a lot
shorter than Jake's. Just wondering, from your perspective as the AI Ambassador for Ireland, where do you get your information from? What's your engagement with the industry like? You quoted the New York Times this evening, just as one example; I'm wondering where your reliable sources come from in this world of unreliability around AI. You say you're waiting for things to come out and that conversations are ongoing: are you in the room, or not? Where is the material coming from, because I can tell it's not just coming from the back of your mind, and where can others find it as well?

Okay, very good. I might take Brian then as well, if that's okay.

Thanks for the discussion. My question is more around the fact that, while we've discussed a lot around regulation, there are some great use cases coming out already, in terms of preventative health measures and also health screening and bits like that. I know in my own family, my grandmother nearly had a heart attack, but the Apple Watch was telling her, "this is coming up, you need to get your heart checked"; another two or three days and she'd have been in trouble. I'm just wondering, do you think the conversation will move on from regulation to "this is the benefit, this is the societal improvement"? And do you think more needs to be done to communicate better to older age groups, who might not be that confident with a smartphone, never mind AI? Thank you.

Very good. So, with regard to what the government is doing: I've been in a lot of meetings about trustworthy AI in the public sector, things like that, so there is a lot going on. Could they do more? Yeah, possibly. I often get the question, is Ireland leading, where are we? But it's all so new that I don't think it's a case of any one country leading. The US leads by virtue of the amount of money they invest, more than anything else. I've seen a lot of what the government are already looking to do, but there are concerns: concerns around data, concerns around the ethical nature of what you're going to use it for. Nobody wants to build or license technology and then fall foul of the upcoming EU AI Act, so there's a little bit of trepidation, but I've found people interested and curious enough, and there's enough investigation going on about what can happen. It's really new: prior to ChatGPT, a lot of AI technology was a little more niche, whereas now it's something that can help across the board. But because a lot of these models, OpenAI's and all the different kinds, are very black box, there's a bit of trepidation; nobody's rushing into it. And thoughtful implementation of AI is important, because there's a lot of risk. I think we're almost past the data problem, the permissions, the data privacy, the respecting of it, because a lot of work has gone into that already. Once everybody's happy with that, the question becomes what the data can be used for, and then whether the government is building their own AI or licensing from a third party. Licensing from a third party is where a whole host of problems comes in, because you have to know whether they're going to be transparent, and you're talking about black-box stuff again, because a lot of them don't want anything
coming out; they'll tell you they're 99 percent accurate, but on what data? So I think there are a lot of challenges around the licensing and the testing and the red teaming you need when you're using a third party, because a lot of the time a government is not going to build their own, they're going to license in, and you've got to be super careful about who you're licensing from. There are already examples of class-action lawsuits in the US where it was used in law enforcement, so we've got to be super careful about how we go forward with that.

The other question, yes, was where I get my information from. I'm 25 years in this space. I'm an engineer, I'm technical, I read research papers, I worked in the industry for years, and I understand how it works. I'm involved in lots of meetings; I liaise with policymakers who are at the EU and who are at the table, and I get feedback from them. It's kind of my life. Reading research papers helps, and a lot of the EU material, all those regulations, is public; there's nothing hard to find, you just have to be able to read it. I would highly recommend anybody read what the Biden administration released, the Blueprint for an AI Bill of Rights. Very readable. One of the most commendable things about it, I thought, was that while we did a great job in the EU producing the EU AI Act, the Blueprint is more readable for the general public. The details won't all be the same, but broadly, if you want to understand what it all means, you'd even be able to translate what the EU is trying to achieve from it. I just found it very readable.

And the last one, sorry, on the medical stuff. There have been decades and decades of work on medical applications, so I think that's separate to what's happened in the last year. A lot of the time, what you'll notice is that it just melts into the background. Do people really need to know it was AI, or was it just a trigger, the way those fall-detection features on a watch work? It's almost just an evolution. What you'll see now, more and more, is AI integrated into more and more products, making them more usable, probably more than anything else. The work on health and medical was happening anyway; the world has just woken up to AI in the last year, but they're almost different areas.

I just want to thank everybody for coming, thank you for your questions and your comments, and most importantly I want to say a huge thank you to Dr Patricia Scanlon for her time here this evening. I know I've learned a huge amount about AI that I didn't know previously, and I hope it will benefit you in your work and in your lives as well. In terms of our next event: you will all receive an invitation tomorrow to the next event on the 25th of January with Federica Mogherini, who was High Representative of the European Union from 2014 to 2019 and is now the rector, essentially the head, of the College of Europe. We're going to hold it in the Belgian ambassador's residence on Ailesbury Road, and she's going to talk to us about the challenges for the European Union's strategic agenda from 2024 to 2029, of which I'm sure AI and the whole digital space will feature quite prominently. So hopefully we'll see you there; keep an eye on your emails for the invitation, and thank you so much for coming.