[Audio unclear at the start of the recording.] ...Well, really about the bigger picture and the worries around AI, with all this stuff taking off with ChatGPT and Bard and the various others. So Tom's been in higher education for 10 years. He's been everything, basically: a teacher, a technologist, a designer, an academic developer. He's a Senior Fellow of Advance HE and his big interests are really ethics and responsible data use. As I said, he's currently at the ODI and he's helping all sorts of organisations. If you were here earlier, he was talking about the civil service, but I think it's everyone, big and small, to use data more effectively and, knowing Tom, probably more ethically as well. So Tom, over to you. Oh, thank you very much. Yeah, I'm Tom. I work at the ODI. I worked in higher ed before. That is pretty much the kind of deal. I suppose I've come here today because Lucinda invited me to come and speak and offer these kinds of thoughts. The subject area, then, is unpicking these concerns and, as Lucinda said, taking this kind of higher-level view. I'm not going to wax lyrical around everything that you could do at a granular level. We all work in these different practices and I'm not naive enough to think that there aren't different struggles that you have as academics teaching within these institutions with all of these systems in and around you.
As I say, 10 years in higher ed, I experienced it quite a lot myself. But I'm going to jump straight in, and there's a bit of a, you know, as all good teachers do who don't actually have an answer for the question being raised, I'm just going to give you a moment to think of the answers for yourselves. What are your concerns? Throw them in the chat. Just take a moment. I'm just going to hold here for a second. When it comes to generative AI, ChatGPT, we'll use them all as synonyms. We'll keep it simple. I don't particularly mind. We'll just take a broad level on it, but do you have concerns? What are they? Elements of trust. Trust in data is one of the big issues when it comes to data. Here's something that we'll pick through today. What if it gets too good? I suppose that depends on your definition of what too good looks like. How do we define good? How can we match the pace of change? How can we detect if it's used? How teaching will change? We're not teaching students how to use it properly. There's kind of issues within there that cover all these different elements around teaching and learning. Rubbish in, rubbish out, right, but just because you put good in doesn't mean you get good out. Enhancing learning. Equality, bias, yes. Critical thinking skills. Assistive technology. Yep, unfairly. There's loads of stuff. I'm not going to read them all out to you. I am feeling quite good because a lot of this does kind of come into what we're going to cover. I thought I'd just start here. So yes, we are in a crisis. This is an opinion piece published in Inside Higher Ed. So: generated text, growing apathy, posts as a catch-all for assignments, discussion forums, whatever kind of text-generated input students have. They sound suspiciously AI-generated. My challenge there is: just because they sound it, are they? Is that our own bias? Is that our own fears coming through? Are we being increasingly less trusting of our students?
When it comes to these broader practices, then, is higher ed truly going to look the other way as our students collectively stop engaging with our curricula? Well, who says they are? Is that again just another element of our fear coming through? Is generative AI really the reason why people would disengage with the curricula? There's a comment that's immediately come through from Mike: similar issue to Wikipedia before it became checked and moderated. There's all of these different conversations. I mean, there's lots of stuff here around, you know, we've always kind of been worried about the next best thing, the next technology. And with ChatGPT, the kind of issue we've got here is that it's just scaled up massively. The access to it is so much easier. But what are we actually looking at then? Generative AI, a definition. This is super simple. I'm keeping it quite high level. I'm not going to get into all the nuance of the technologies, but it's focusing on generating new content, the next image, sounds, whatever, or data, by learning patterns from existing data sets. Now there are several comments coming through in the chat about it reflecting the current world. And the problem here is: what does the current world look like? Who designed the current world? Because, shock horror, the data sets that exist reflect that current world. But whose world? So how did we get here, from point A to point GPT? I saw a presentation done by an old colleague of mine, Marey Pratch, who is now the chair in digital education at the University of Manchester. She did this talk as part of a set-up with Anthology. So Anthology, now the owners of Blackboard. And it came through these different waves, the five waves that come through in this innovation element. So we started off with water power, textiles and iron; we went steam, rail and steel; electricity, chemicals; petrol engines, petrochemicals and so on.
We're going through these things and, as you can see, each of those waves is getting shorter and shorter. The actual picking up of these practices, the evolution of these technologies, is happening at such a faster pace. Now, I put the YouTube link for Marey's talk in there so you can watch the whole thing. But her talk is very simply titled Riding the Tsunami: ChatGPT, Generative AI and Digital Education. And we touch on all these things. Now, these things are not new. They've been around; anybody who's got an interest in digital education will have seen these things. I'm not going to explain what they all are, but the one that we're majorly concerned about at the moment is this generative AI element, ChatGPT. What can it do? Now, riding the tsunami, keeping on this element of waves, well, it's getting faster and faster. So how fast is that wave? One million users in five days. So much quicker than any other technology that's come before it. And with that increase in use, it's had the increased media attention, and we've found that users across all these different sectors are picking it up. Lots of platforms have gone in with the race to the bottom of how they can embed ChatGPT into their systems, or other generative AI systems. I was recently working with a colleague who works in marketing using HubSpot. HubSpot is one of the big CRM systems. Now, great, they've thrown generative AI in there so you can start marketing your products, writing emails and communications out to your customers far easier. OK, but what happens to the creative aspects of marketing? And the same premise applies across all of these different areas. So we're already seeing it changing employment, we're seeing it change the role of professionals across these different sectors. But what are institutions doing about it? What about the kind of university approach? We've got responses to rapid change. What was it, the first of November last year?
It was sometime in November, wasn't it? ChatGPT got released. We're in May now and it's seemingly the only thing that anybody's really talking about. But what are institutions doing about it? We've had less than a single academic year, and mostly they're either banning it or they're using it. There is a nod then to the third option, which is to ask students not to use things that give them an unfair advantage. But we don't have the detail of what that means. Now, I'd like to share this example with you. This is one from UCL. So UCL went forward and co-designed principles for using generative AI, and they co-designed them with their students. But ultimately I felt, when I was reading this, that you could take the letters AI out of it and it would still, by and large, sit pretty cleanly in any assessment principles that we might have. You might have to do a few little changes, I appreciate that. But ultimately: relevance, teaching and assessment are designed to prepare students for life and work. Literacy, students understand what the purpose of assessment is. Okay, cool. Tools that can enhance their learning practice. Okay, cool. This is all stuff that has been said before. AI is not really that different in the grand scheme of things; the practicalities of where it is are a little bit different. We've got academic integrity and scholarship. Great. Well, okay, cool, we're just asking students not to cheat. I'm being a little bit facetious there, but we're asking students not to cheat. Apply their own criticality. Then clarity: actively engage with students, communicate policies. Okay, cool. Well, tell them what they can and can't do. Great. Fairness. We've been discussing fairness in academia for many, many moons. Data sets have never really been that fair. You only have to look at cardiology research and understand that the data sets that exist in cardiology are massively skewed. They're massively skewed because the data is more about men than it is women.
Women are failing to get the right diagnoses and the right treatments. Throwing AI on top of that issue is not going to solve the problem; it's going to perpetuate it even further. But then you take that step back and you go, okay, who's doing the research? Who's peer reviewing the research? Who's getting published? Fairness has always been a problem. So within all of these things, okay, well, what are we actually trying to say here? What are the principles telling us that we need to approach with our students? Redesign your assessment. Cool, there we go, problem solved. Redesign your assessment. Stop doing things where students can just generate it in an essay format. Pretty ropy suggestion on how to fix the issue, because ultimately this is what we're already doing. I appreciate there's a lot of essay-based assessments knocking around, and there's a lot of things; there's a good place and a bad place. Again, I've worked in academic development, I've worked in learning design. I know the complexities of these things and I appreciate there are good reasons to use the different modalities of assessment. But, by and large, is an essay always appropriate? No. Does an essay open you up for it to be generated using AI? Yes. Is that always a problem? No. It's more about how we facilitate that practice. So we get into this discussion again around authentic assessment. Okay, well, what is authentic assessment? How to think, not what to think. We give students the subject matter and they construct their ideas in and around that. We go beyond the didactic here: here you go, this is what you need to know, go away and learn it. We get those cognitive processes and we hit that golden stage, that constructivism piece. It's the word that, I don't know, ed techies like to throw onto stuff, certainly salespeople in ed tech, and go, we've got a discussion forum, therefore we're doing constructivism. Great. But that's not what I'm talking about.
You know what I'm talking about when it comes to authentic assessment. It's going to be different depending on what your subject area is. So, providing opportunities for students to engage with workplace skills. It's not just about the subject matter; it's understanding how it applies in context beyond their academic studies. It's balancing the two. There are benefits and transferable skills in academia, and also the application to the practices they will employ in life. Cool, great. Skills, competencies, building those metacognitive skills, great. Lifelong learning, going beyond university. What happens when they get outside of that formal structure? Where do they go? How do they continue to develop and make sure they're not left behind? There's fluidity to the learning; it happens constructively over time. Great. Students seek out new information. They engage more with their peers, who come with different perspectives. You've got this construction then of unique responses, complex problems, higher-order reasoning. It's all the buzzwords that we've seen in research for many, many moons. Reflection on actions, self-assessment, building self-efficacy. Again, all of those lovely buzzwords. I've just seen that Manish has put in a link on authentic assessment. Go away, watch it, try and contextualise it in and around what I'm talking about here. So, when you're thinking about the authenticity of your assessments, we need to support the building of experiences. We need to support criticality. What do I mean by that? We start getting into the realms of data skills. When we think about generative AI, when you start looking at what we're asking students and what we're advising them to do, we go, well, you can't just put a question in. Throw it into ChatGPT; it doesn't really work like that. That doesn't build the experiences.
Ultimately, then, we get down to the realms of going, cool, well, how do we frame that question so I can craft something with ChatGPT rather than asking it to do it for me? How to think. How to think about data. I've got a graph on the screen here. It's got 11 points on it. I'd love for you to draw me a line of best fit in your mind. Draw a line of best fit. Where does that line of best fit end up: A, B, C or D? Pop your answers in the chat for me. So, starting on the far left, moving over to the far right in your mind, where does the line of best fit go? Lots of Cs, lots of Ds. I think we've got Manish out on the edge with a B, and Carrith with a very honest "I have no idea how to begin." I'll ask you another question. What shape is the line? What shape is your line of best fit? Simplest is linear. I've got some zigzags. Mostly we've got straight, but we do have that comment in there from Mike: simplest is linear. Now, there you go, there's four trend lines. We'll ignore the fact that quadratic and quartic are a little bit crudely drawn; I was trying to find the original source data for this activity, and I've misplaced it, so I've used these. To Manish's point, we've got the average trend line: y equals c. Y is a constant; y does not change. Therefore, as the points start to predict the future, they will broadly hit across that straight line. As we go over to linear, now, this is where data science, ugh, it's all just a bit of a mess with data science, because we get into this realm of fancy words that don't really mean anything. Linear, linear regression. What is linear? Linear is a straight line. What is regression? It's a relationship between two variables. So it's a straight line describing the relationship between two variables. If you do any data skills course, lots of organisations are starting to put data academies together, and when they go through these data academies, by and large, you're probably going to be offered an apprenticeship.
Now, there's nothing wrong with an apprenticeship, but the apprenticeship standards look at this level of statistics, this level of maths and data manipulation, and they by and large tell you to use linear. But then we've got quadratic. We go back a slide and actually look at this data. On the y axis, we've got yield. On the x axis, we've got salt concentration. This is a data set I took from DEFRA. Now, when the salt concentration in soil increases, the yield drops away sharply. The answer is D, because it's a curved line moving through those data points. And when we think about this, and we think about data and the criticality of how we apply data, every single one of these trend lines is correct. Mathematically, they are correct. They are applied in the correct way. Sandra's point: context is important. Yes, context is important. As the salt concentration in the soil increases, the yield falls away. If we go with average, then we assume there will never be a drop in the yield over time. If we go with linear, then we're probably waiting until the salt concentration hits about 35 until we get zero. If we go with quadratic, the more appropriate trend line, both accurate and appropriate, we actually see that the yield hits zero when the salt concentration hits about 20. Then we've got quartic, and if we go a little bit further than that, we get things like higher-order polynomials. These are dreadful, really; it all gets a little bit difficult. I'm not saying that they're wrong, but what we run into is the keyword that Mike has put in the chat for us: it's overfitting. It stops becoming a line of best fit; it becomes a line of absolute fit. To illustrate that point a little bit further, I'm going to ask you a very simple question and, again, put your answers in the chat for me. What does a cow look like? Farmyard animal, a cow. What does it look like? We've got some of the broader ones with black and white patches.
We've got some of the contextual questions here. Just remember that cows come in different colours. Like a bigger dog. Yes. There's all these different views on what a cow is. Now, I don't know enough about cows to really go through all of the complexities of breeds and so on. Small circle face, larger oval body, four legs extending down from its body. Here's my data set of cows. I'm going to go with Lucy's: fundamentally, a large mammal with four legs. Here's my data set of cows. Is this not a large mammal with four legs? I don't know whether I have a bull in it, I'll be honest. I've been assured that I don't. I did run this past somebody, but I'm not an expert in cows. Inherent bias: not a black and white cow to be seen. So, if we start making decisions based on this data set, well, great, but who benefits? Brown cows. So, what do we do? We grow the data set. We increase the data set. We add more representation into the data set. Cool, great. Here are some more cows. The problem is, now I have some cow that produces strawberry milk hiding in the middle there. I've got a cartoon; I've got a sign. I've added uncertainty into the data. What I'm trying to pull out here is this issue around these large language models, the issues around data. Because if you start making... sorry, I'm just going to pause for a moment so everyone can take in the jokes in the chat. When we've got these data sets and we start making decisions, I'm illustrating it facetiously here with pictures of cows, but it's this same practice. If you get into the realm of quartics and polynomials and you have this wiggly line that accounts for every single data point in your data set, then you're being led by outliers in the data. And there's always outliers in the data, but what we don't have is the criticality to identify those outliers. Big data is not better data. How do we define big data?
We use words that only begin with the letter V: volume, how much of it there is; variety, the representation in the data; veracity, the uncertainty in the data; velocity, how fast we can analyse and act upon that data. There's a wonderful link here to a short lecture segment, only five minutes or so, from the people at the University of Washington. It's from a course called Calling Bullshit. It's also published as a book, and there's this lecture series on YouTube. It's well worth a watch, but the example they use is from 2009, so this stuff isn't new. 2009 isn't that long ago, but we're still having the problems. And what Google looked to do, so this was detecting influenza epidemics using search engine query data, this was Google piling in a shedload of cash and building Google Flu Trends. Now, they claimed to a level of 97% accuracy that they could predict a flu outbreak across the US. The problem is, they overfit the data. They were led by the outliers, because what happens when you use search terms? Well, you search for a headache, you search for "I'm feeling a little bit snotty", you search for "I'm a little bit achy", but they are symptoms of so many other things, and just because you're searching those symptoms doesn't mean you've got them. They included all of the outliers. They included all of the data. In actual fact, they predicted the past, not the future. And the problem with that: they still only predicted the past to a claimed accuracy level of 97%, so they didn't even predict the past correctly. In terms of the evaluation rate of their system, the confidence was 97, but the evaluation rate was actually somewhere around the 40s. It was massively lower. They were worse than a guess.
The one key insight that they took away from the data was that most flu outbreaks happen in September and October, which is all good, but essentially Google spent a shedload of money producing this algorithm to detect flu outbreaks, or to predict flu outbreaks, sorry, and they predicted the start of the school year. Mike, a question in the chat: why is the future the same as the past? Well, it isn't, and that's the problem, and this is where having 100% confidence is not actually the right thing. You want to lower that confidence, mainly so you can avoid being led by outliers in the data set. This is where criticality in the data comes through. We can have data skills, but we also need data literacies. So what happens when humans learn bad habits? We used to say that the danger existed between hacking skills (and we don't mean computer hacking, we're talking about problem solving here) and substantive expertise. But it doesn't, because at least if you've got the substantive expertise in the domain, then you know roughly what should be what and what good looks like; you can define good much more accurately than other people. But if you have somebody with really strong maths and statistics knowledge and the ability to solve a problem, but without that substantive expertise (crop yield, salt concentration; I was hoping nobody in this room would have a strong understanding of farming, because otherwise the example would have been a bit useless), without the substantive expertise in agriculture, then you apply linear to it. There's nothing wrong with that, but it's not necessarily appropriate.
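To make the trend-line point concrete, here's a minimal sketch in Python. The numbers are invented stand-ins for the misplaced DEFRA-style figures (yield falling away as soil salt concentration rises, following roughly a quadratic plus a little noise), and NumPy's polyfit stands in for whatever tooling you prefer. It shows the two halves of the argument: the in-sample fit error always shrinks as the trend line gets fancier, while the practical prediction (where does the yield hit zero?) differs wildly between linear and quadratic.

```python
import numpy as np

# Invented stand-in data: crop yield (%) versus soil salt concentration,
# shaped like 100 - 0.25 * salt**2 plus small fixed "measurement" noise.
salt = np.arange(0, 11, dtype=float)  # 11 data points, like the slide
noise = np.array([0.5, -1.0, 1.5, -0.5, 2.0, -1.5, 1.0, -2.0, 0.5, 1.5, -1.0])
yld = 100 - 0.25 * salt**2 + noise

# Fit ever-fancier trend lines; the in-sample error can only go down.
rmses = []
for degree, label in [(0, "average"), (1, "linear"), (2, "quadratic"), (5, "quintic (overfit)")]:
    coeffs = np.polyfit(salt, yld, degree)
    fitted = np.polyval(coeffs, salt)
    rmse = float(np.sqrt(np.mean((fitted - yld) ** 2)))
    rmses.append(rmse)
    print(f"{label:18s} fit error (RMSE) = {rmse:.2f}")

# But a smaller fit error is not a better prediction.
# Where does each trend line say the yield hits zero?
lin = np.polyfit(salt, yld, 1)   # [slope, intercept]
quad = np.polyfit(salt, yld, 2)  # [a, b, c]
zero_lin = -lin[1] / lin[0]      # solve slope*x + intercept = 0
zero_quad = max(np.roots(quad).real)  # largest real root of the quadratic
print(f"linear trend:    yield = 0 at salt ~ {zero_lin:.0f}")
print(f"quadratic trend: yield = 0 at salt ~ {zero_quad:.0f}")
```

With these made-up points, the linear trend waits until a salt concentration somewhere in the 30s or 40s before the yield hits zero, while the quadratic, the shape the data actually follows, puts it near 20; the quintic fits the training points best of all, which is exactly the "line of absolute fit" being warned about.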
This is what we're seeing applied in the generative AI systems. Hacking skills, maths and statistics knowledge, substantive expertise: the skills that make up a data scientist. A priori, a posteriori: reasoning by logic, reasoning by experience. These generative AI systems are reasoning by logic; we're trying to build our students to reason by experience, and this is where universities come in, to build the substantive expertise, to develop that critical thought. So let's look at the broader examples then. AI: really good at designing knitwear. This was a fashion show, all of the outfits designed using AI. Don't look too closely, though, because the belt doesn't actually function. What we've actually seen here is not just a fashion belt that hangs upon your waist; the, what do you call it, the buckle isn't attached to one half. That belt is hanging in midair. It's approximated all the things that a belt is meant to be, but it doesn't even function as a piece of material. Go a bit further then, more terrible examples. Can AI support accurate diagnoses? So when it comes to skin cancer, individuals with non-white skin are less likely to develop skin cancer but have lower survival rates. Can AI solve the problem?
No, AI cannot solve the problem, because when we start using these generative models and using all of these images in the training set, we're perpetuating the bias that exists. In this particular example, of the 63,000 images in the training set, only 5-10% of them are from non-white patients. Things like ChatGPT, it's exactly the same: it's the generalised view of the world. Who's produced that view of the world? Generative AI democratises access, not representation. Picture of the British Museum there. The example that I would use here is: great, you can get into the British Museum for free. You can get into many museums in London for free, many museums across the UK for free. Great, we've democratised the access, but what about what's inside? Horrible to see all the loot. That's exactly what it is. It's Britain's colonial past, Britain's colonial present. But it's okay, because at least we're letting people see it. Let me illustrate it in a different way. What does your network look like, A or B? Both largely look the same on the surface of it; that's because they are. But what happens when we start breaking down the connections in the networks? Think of all the people that you can speak to, the expertise you can pull upon, the acquaintances you can pull upon, the people you have met at a conference once or twice, the people you've seen speak that you can go and reach out to. You are able to apply your critical skills, your knowledge of context, your understanding of context, and explore that in more depth. We've got reasoning by logic and reasoning by experience. Your experience is what drives you; the reasoning by logic is more akin to what we're seeing in B as we go down the lines. When it comes to learning, then, learning happens in a space between people. It's the importance of these networks that we build. This was from a talk I was at, the Learning Technologies conference at the start of May; this was Dr Nigel Paine, a quote from one of his talks. Now, from an ODI perspective, we have the data skills
framework. Now, it's a skills framework, not a competency framework. We don't necessarily have the component skills and knowledge areas that underpin every single one of these hexes. We do have courses that fit across some of these, but again, not all, so it's not entirely comprehensive; it's there to be adapted for your purposes. But what we're seeing down the central column, reducing data down to leading change, is what we call the core: having some understanding of data, being knowledgeable in how data creates value and how to create value with data, finding insights, developing strategy, leading change, taking those insights, developing new knowledge, making decisions. On the right-hand side, in the green and the purple, we have our data practitioner skills: people who are working directly with the data, slightly more technical people, we might say. On the left-hand side, in the pink, the orange and the yellow, we have our more strategist area, more of the soft skills: understanding the processes, the operations of organisations, and how these things move forward. Now, what we advocate for is a balance. How can you make decisions if you can't be critical of the data? How can you be critical of the data if you don't understand some of the skills that are applied? How can you apply the skills if you can't be critical of what you're outputting? It works both ways. So, with all this in mind, should we stop using generative AI? I don't think so. We have this schools-based mindset: we are the teachers, we need to impart knowledge, don't go and get it from somewhere else because that other source isn't as good as me. That then assumes that learning is purely content consumption, that didactic approach: here is information, read it, internalise it, remember it. But if we rely too heavily on ChatGPT, rely too heavily on this democratisation of access without applying the appropriate level of criticality, we run the risk of creating a society of functional illiterates. So, democratisation of access, but it's missing that
democratisation of representation. And if we don't start looking for that, and we don't start being critical of it, it's just kind of plugging in the numbers and hoping for the best. It's a bit like going into a maths exam without a knowledge of how a calculator works; it's that same kind of thing. We complain a lot about students using calculators, but in reality you've still got to understand how to use a calculator effectively. The difference between literacy and skills. There's a question for you: what's the average salary in the UK? If you think you've got an answer, pop it in the chat now. I will apologise, because I am a disgraceful pedant; it's a bit of a trick question. What's the average salary in the UK? Mike, you've hit the nail on the head: the mean, the median or the mode? Generally speaking, none of you are that far off, but the problem with the average is there isn't a single average; there's three, the mean, the median and the mode. In any given population, salaries are positively skewed, so the mode, the median and the mean are all different. The modal salary in the UK is about 24; the median salary in the UK is about 32; the mean salary in the UK is about 38. In order to use the word average, the mean, the median and the mode need to be the same; we need the normal distribution. So if your data aren't normally distributed, the word average doesn't carry any weight, because it's wrong. There are three measures of average. Salaries: modal salary in the UK, 24; median salary in the UK, 32; mean salary in the UK, 38. The BBC are usually pretty good when they report stuff; they do report the median. When they're covering the RMT strikes, they're talking about the median. There was a point at the start of the strikes in the summer of last year where the RMT started reporting the average salaries of RMT members. Slightly problematic, because there's three averages, and they declined to comment on which average they were using. Now, in fairness to them, they changed their messaging; they started going with the median. I believe there was an
audience member on Question Time a few weeks ago who said that nurses earn on average 33,000 whilst they were disagreeing with strike action. Well, you know what, if you feel hard done by because nurses on average are earning more than you, then, you know, that's fair; I'm not going to tell you that you're wrong to feel the way that you feel. But the average isn't 33. The mode is the most common, in which case most people in the UK are earning 24. And it's that level of criticality: it's all well and good understanding the maths, but it's understanding how to interrogate the messages that are coming through. The technical skills that apply to data are just as important as the ability to communicate it appropriately. But when we focus on data skills, we tend to think about the input. Not the data input, the human input: the rise of the prompt engineer and why it matters. This is an article on the World Economic Forum: Albert Phelps, who is a prompt engineer at Accenture. Fastest growing jobs. And when we're talking about all these things, should we stop using generative AI? No, I don't think so. If it's going to be one of those things that stick around, and I'm not saying it is, the hype I'm sure will drop at some point, but it's probably going to stick around in one form or another, in which case we're better off making use of it. If these are the fastest growing jobs, and there's a lot in there around AI and machine learning specialists, similar kinds of themes going down the line, we've got to understand how to be critical. We've got to understand literacies. We've got to go beyond the skills. We can build the data skills, but what about the data literacies? So, our key questions then: where is learning accessed? Which content is privileged? What systems and support mechanisms exist for the application? So what do you do about this? Action one: get on board. The cat's out of the bag. This is ChatGPT's world now, and the best thing you can do is get on board. Use it in your learning. Set tasks. Use the outputs from
ChatGPT. Work with students to understand how to put messages in, how to put prompts in, how to be critical of the output. Further, leverage it for peer marking practice between students; leverage it for your broader learning activities. With prompt engineering, yes, okay, fine, cool, it's great, it makes the output much better, but we also need to be critical of that output. Just because we're putting in better prompts doesn't mean we're getting better outputs. We need that balance of both. Relating it back to employability, this was a piece of research by Doteveryone. Unfortunately, Doteveryone closed their doors in 2020, but before they did, they put out this research: nearly two-thirds, 63%, of tech workers want more opportunity to understand the impacts of their products. Changing values in the workforce. Workforce insights: this was a report put out by Randstad earlier this year. "I wouldn't accept a job with a business that doesn't align with my values on social and environmental issues." Well, okay. Generative AI: massive social issues within there; it's just a perpetuation of the social issues that already exist. We're seeing more than 50% of young people, and actually still a huge proportion of different age groups, all agreeing with this statement. Action two: add noise. Explore the complex problems, interrogate the domain. Generative AI relies on those existing data sets; they describe the world as it has been designed, and if we're not careful, we perpetuate that. Do your research, explore new topics, be critical, grow the representation, make it more democratic. Narrow data sets are not new: "it is power that has allowed unequal appropriations of knowledge and marginalisation of other knowledge formations". This article was not written about generative AI. Action three, the final action: embed data ethics. Now, I've seen lots of discussions around data ethics, and it's an area that I've become very, very interested in; it's something where I spend a lot of my time exploring now. Now, if
we took a view on ethics, then yes, we could get into all the philosophical discussion; we could take it at that high level and interrogate all the different avenues of investigation. But I'm taking it much more simply than that: what is data ethics as a set of skills? We've got the ODI's definition of data ethics on the right-hand side there: "a branch of ethics that evaluates data practices with the potential to adversely impact people and society, in data collection, sharing and use". What are the skills we're applying? What are the literacies we're applying? The use of generative AI brings an opportunity to learn data ethics in practice, to learn the underlying skills of data ethics. It's the opportunity to apply new tools to understand the impact of your work.

That's everything from me. I just want to go back through some of the chat, scroll back up to the top, and look at some of those concerns you raised right at the start. How can we match the pace of change? Get involved in it. Detecting if it's used: well, what kind of use are you looking for? If you're going down the route of the design of an essay, then yeah, cool, great. Teaching students to use it properly, teaching students to be critical of it — all of these things. I'm not going to sit here and say I've got all the answers, but there are lots of things we can do to start smoothing off that wheel. Thank you very much.

Thanks ever so much, Tom, that was really interesting, and huge amounts to think about, certainly for me and I'm sure for others. Does anybody have any questions for Tom? You can come on the mic and talk to us; we've got about 10 minutes left.

Yeah, I've got a question. The people who are developing this generative AI — Greg Brockman from OpenAI, and there's an interesting interview with Stuart Russell, who was the Reith Lecturer — they're saying that the emergent properties of this generative AI, text AI particularly but all of them, are shocking and
surprising the creators. They had no idea that it could do what it's doing, and therefore people are sort of reverse-engineering it, trying to work out what's going on. It's not as if there's an instruction booklet.

No, I think that's a really good point. I work at the Open Data Institute, and our entire world is open data. Now you've got OpenAI, where "open" is applied in a slightly different way. What's actually happening under the hood? Well, we don't know; we can only trust the messages that we're told. As for trying to reverse-engineer it to explore even deeper: how much are people going to find out? I'm sure at some point we'll have a much greater working knowledge, but at the moment we're just firing in the dark a little bit.

What about bias? Do you want to tell us more? My sort of question is: a lot of the data it's picking up from is what's available on the internet, and there's going to be a bias in that it's the population that has access to the internet, not the population that doesn't. So obviously, you know, it's data in, data out; it's going to have bias, isn't it? Or am I just trying to get my head around what it is?

Yeah, and the thing is, it's all well and good to have me sitting here going "it's got bias, isn't it?" — that doesn't really explain the scale of the issue we've got within this. It's understanding what the system is doing and working towards, whilst also trying to connect that with the bigger picture: what is happening around the world? We have this incredible growth in the number of people who have access to the internet, but many have not had it yet. So what are the stories, what's the data that exists? Even the anthropological view of going "well, this is what I've observed of these people" — that's your story of those people, not their own story. So the bias runs so much deeper than what I've explained or covered here. I think you're right: I don't think we can
ignore its fantastic use cases; it's not just about exploring content areas.

Yes, of course I'll share the slides; I'll share those via you, if that's all right, Lucinda. Yep, that's grand, send them over; we'll pop them up, and the recording will go up on the ALT YouTube pages if anybody wants to go back and have a rewatch. Wonderful.

It's an interesting point that if you use the engine to remove some of the bias, it can learn. I don't suppose you saw the article in the Times about OpenAI paying a group of people in Kenya an absolute pittance to start removing all of the horrible things that ChatGPT was generating? With all of the discussion around OpenAI applying ethics, I'm not so sure. Oh yeah, that doesn't sit well, does it? No — the more you look at data, and the more you look at data ethics, the sadder you get. It's a wonderful view of the world.

So, anybody else? Any more questions or comments before we wrap it up? Well, once again, thank you ever so much, Tom, that was really good; we really appreciate your time as well. For everybody else, do keep having a look: we run talks normally about one a month, although we may slow down a bit for the summer, and we're currently looking at what we'd like to offer for next year as well. So if there's anything that you would really like to see or get involved with, then please do get in touch with ALT South and we will see what we can do. You can sign up to the group as well, if you haven't, on the ALT website; we've got a mailing list that will keep you up to date with everything we've got going on. So thanks ever so much, everybody. Thank you. I'm just watching everybody slowly leave — I remember when I first used this, I couldn't work out how to get out. Let me just stop the recording as well; that would be useful, wouldn't it?