[The opening of the recording is garbled in the auto-captioning.] ...Sheffield, and lovely sunny weather, which is what we've got today. So you are seeing what I can see outside of my office door. So this is what I'm going to do today. I'm just going to go through a few things. I'll talk about who I am and why you might want to listen to the things that I'm thinking about, then go through a bit about the challenges that I think we're probably all facing at the moment, thinking about AI and machine learning and the large language models that have accelerated so much over the past few [garbled captioning]. I am Bella Abrams. I am the IT director at the University of Sheffield. I've been here for four years, having an extremely wonderful time, but this is the first university that I've worked at. Before that I was at Sheffield College, before that I was at Hull College Group, and before that I worked at Learn Direct doing digital maths and English, which was kind of how I learned my trade and ended up in technology.
Further education, apprenticeships, lifelong learning, literacy, numeracy and digital dexterity are my passion. One of the things that I miss a bit about working at the University of Sheffield is the vocational teaching space, seeing that impact on students and working with vocational staff. It's part of my slight sadness about working at the University of Sheffield, but I've still got contacts in FE, which is one of the reasons I'm here. So, my background: other than primarily working in education, I've always worked in digital teaching and learning because of my background at Learn Direct, where we had a very significant platform which gave us a huge amount of data and knowledge about our students and helped us design and improve the courses and the offering that we had there. What I've found as I've moved into the more face-to-face space is how different parts of education are, and part of the conversation that I'll have with you later is actually how much further on, in some ways, FE is than HE, because of the level of diversity that happens, particularly in a research-intensive university: what we know about our students, and how we can help them with their teaching and learning, is often a lot less here than what I found we knew in FE. So, moving on from talking about myself, where I wanted to start this conversation was a reflection on the challenges that we are all facing, whichever bit of education we're working in. This is my summary, but you may have other things that you are facing, and I'd be really interested to hear in the chat whether I've missed anything. So, my view, having worked in education for as long as I have, and now at the more highly funded end at the University of Sheffield: I think all of us, particularly in FE and particularly in apprenticeships, are facing severe funding and financial pressures, and that's been exacerbated; the pressure was already there before Russia invaded Ukraine and the situation went wild.
And I think the fixed funding models that we all work within, whether they're ESFA funding models or student fees, are causing us to have to behave in ways that are not conducive to the best student experience. Especially within technology, prices are spiralling for products, services and people, and for an educational institution to meet those spiralling prices whilst dealing with fixed funding models is a near-impossible task. I think, broadly across the sector, access and participation is an issue for us all. Particularly post-pandemic, we've seen a real shift in how students want to engage in their education and how they want to consume their teaching and their learning. What we found at the University, particularly here in Sheffield, was that our students really asked to come back to face-to-face teaching and learning, but haven't come back in necessarily the same way. So we've seen students sitting outside our in-person lectures, using our streaming technology, because they didn't feel like they could cross the threshold, which has been an interesting quirk. But more broadly, the cost of living crisis, how that's impacting students and families, how people from different backgrounds are accessing education and using technology, and thinking about how we can overcome barriers to access, is something that we're all facing. In Sheffield, curriculum relevance seems to be as much of an issue as it was when I was in FE as well.
So, particularly with the high-speed changes that we're inevitably going to see over the next few years from machine learning and language models, creating curriculum, in any area of the university or the college, that is relevant to employers and to students is a really big deal, and so is filling the skills gaps in how we teach it: our own skills gaps around how we can allow our staff to comfortably and confidently use technology, and to use digital skills to meet that need from a student's point of view. For us at the university, employability is one of the key measures that we focus on. It's one of our key strategic objectives, and having students leave the university ready to go into the workplace, especially when the workplace is changing rapidly, is a real challenge that we're facing. One of the things that I found in FE, and less so in the university, was how you work with employers to engage them in curriculum, portfolio and the use of technology, and I'm interested to hear whether that's changed in the time that I've been away. What I know hasn't changed, and I think has become a more pressing problem for us all, is adapting to the increasing mental health and wellbeing needs of both our students and our staff. The pandemic has exacerbated challenges that were already there for the entire UK student population, and each new cohort seems to bring new challenges around how they want to be supported, and how we as institutions can effectively support them whilst also ensuring that they have high levels of engagement with their learning. What we're also seeing in Sheffield is workload, workload, workload really creating a challenge from a staff point of view, and being able to think about how to use technology to improve workload issues is something we're really considering.
I suppose the question today is whether AI, language models and automation can reasonably help us with some of these challenges, and then, if they can, how we design and implement them in a way that is sustainable for our institutions and ethical for ourselves, our staff and our students, because the speed at which the technology is being launched upon the world and the considerations that we have to make institutionally don't necessarily meet. So that's one of the things that I was going to talk about at length today. I don't know whether there are any other challenges that I've missed that you are facing in your institutions, so if you want to pop those in the chat that'd be really handy, but I'll move on to AI and machine learning and the large language models: all of the things that we're all having a really exciting time playing with at the moment, but that probably all of us are also slightly terrified of as well. So the first question is: will AI change everything? I think the potential of the number of different models that are available, and the speed at which those models seem to be improving, has been eye-watering, from a personal point of view and doubtless for all of you. It's always something that I've been interested in, but the adoption curve and the improvement, particularly in the language models, has been so fast that I think even the most interested people have found it difficult to keep up. Where we as a society, we as institutions, we as educational practitioners have got to be really careful is in considering the impact of advancements in technology in a way that allows us to get the best effect for our students and our staff, and that also takes into account that doing things quickly doesn't necessarily mean that you get it right, and so having an approach that allows things to be tested at speed, but not necessarily then launched upon everyone.
So I should probably declare that not only am I an IT director in a university, but my partner is an ethicist, so this is our dinner party conversation: the rapid change in tools, and the way that ethics and practice can keep up. One of the things I think all institutions need to consider is, given that the tools exist and people are going to be using them, because that genie is well out of the bottle, how that can be done at an institutional level in a way that takes into account ethical principles and considers the level of harm, or harms, that could possibly be inflicted on various people across the institution and beyond by poor use of that technology. I think that's an ongoing conversation, because it's not obvious early on what that level of harm, or the level of benefit, could be, and I think there's a really interesting challenge for institutions to have broad conversations, not just technological ones, about the use of technology and its impact on people. That's where I think the ethics is a critical part of the conversation. One of the things we're really considering in Sheffield is the skills that staff and students need in order to use the technology effectively. That's critical skills for analysing outputs and other things, but it's also the sense of feeling confident in the use of the technology, and confidence and digital dexterity aren't just something that we're facing around AI and large language models; it's a general sense that there are so many technical skills that one needs in order to work in a modern workplace. Investing in that, and building confidence in staff and students, feels absolutely critical in this, but so does thinking about the level of disruption that might be, well, that is inevitably going to be inflicted on the modern workplace as new tools are rolled out.
We use Google Workspace at the University of Sheffield, and we know that Bard is imminent, if not already there in the console for us to use, and there is a question of: do I turn that on for 8,000 people without mentioning it, or do we have a rollout programme that allows people to start building up their confidence? The speed of being able to enable tools across massively used systems within large institutions would previously have been handled much more cautiously, but post-pandemic we're probably a bit more confident in people's ability to adapt to change. One of the things that's really interesting about all of the different AI models available at the moment is the genuine potential to democratise access to technology, particularly from an equality, diversity and inclusion point of view, but with reference to the ethics and the potential for harms as well. I'm sure all of you have had a play around with the technology and seen the potential for levelling the playing field for access, but again, that's all rooted in giving people the right skills and the confidence to use the tools to best meet their needs. You'll all be thinking really carefully about how to use the technology in terms of teaching, learning and assessment, and I can see from peers and from reading around how people are experiencing things at the moment: the confidence issues clearly manifest the most in assessment, wanting to preserve the integrity of assessment whilst also wanting to understand the impact and the potential of AI for students in how they want to be assessed as well. So there's a huge range of things here, and institutionally I'm seeing a spectrum: people wanting to really embrace the technology but then lock down assessment, or not wanting to bring it into the teaching space at all until it's been really carefully looked at, or wanting to just let it all rip, so there are lots of people doing different things.
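As a concrete illustration of the rollout-programme option just mentioned, here is a minimal sketch of a staged, cohort-based enablement. Everything in it is an illustrative assumption, not a real Google Workspace admin API: the function names, the four-wave split, and the idea of hashing user IDs into waves are simply one common way to stage a feature across 8,000 people without keeping per-user state.

```python
import hashlib


def rollout_wave(user_id: str, n_waves: int = 4) -> int:
    """Deterministically assign a user to one of n_waves rollout waves.

    Hashing the ID (rather than choosing randomly) means a user always
    lands in the same wave, so no per-user state needs to be stored.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % n_waves


def is_enabled(user_id: str, waves_live: int, n_waves: int = 4) -> bool:
    """The feature is on for a user once their wave number is below waves_live.

    Raising waves_live from 1 up to n_waves over several weeks gives a
    staged rollout: a pilot cohort first, then progressively everyone.
    """
    return rollout_wave(user_id, n_waves) < waves_live
```

Raising `waves_live` week by week, alongside communications and training for each new cohort, is one way to give staff time to build confidence before a tool reaches everyone.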
Back to validity and reliability: AI is not infallible. That's back to the ethics, and back to the data that it's been trained on, and having a sense, particularly for students, of not believing everything that you see or read is going to be a huge part, I think, of how we adapt to the technology as well. Critical thought, which is a key part of what we teach anyway in any part of education, is going to be one of the huge challenges we face: having students that can engage with and use technology effectively, but also challenge the validity and the reliability of what they see and hear. Finally, for us as a sector, the intellectual property of what we produce from a teaching, learning and assessment point of view, and that being available across open AI models and other large models, I think is going to be a really interesting challenge, not least because a lot of the advances in this space have been driven by Silicon Valley firms that want to commoditise it and make money out of it. I think there is going to be a real challenge, linked again to the ethics and the strategic principles that I talked about earlier, around how we as educators make available what is essentially the product that comes from our brains and is part of what we offer in our institutions to our students. Underneath all of that are all the privacy concerns about the data, how it's used, and whether people can consent to all of this, so that's a really important part for us to consider. So, these are the things that we're thinking about in Sheffield that you might want to think about yourselves. There is an element of the genie being out of the bottle: our students have been using these models longer than we've been using them, but rushing to adopt them in any part of our institutions has the potential to cause more problems.
I already talked about what things might be like in assessment, and I think there is an inevitability of needing to be really careful and sensitive about how we consider use in assessment. I've seen everywhere, on Twitter and other fora that I use, some incredibly creative pedagogical uses; the technology, the images and all the other things hold a huge amount of potential and obvious pedagogical uses, but that's going to develop and evolve over time. Our real worry in Sheffield is that our staff skills are, by definition, going to lag behind student skills, just because of the technology adoption curve. Sorry, that's my computer beeping a bit. The thing that we're really, really focusing on here is this digital dexterity: creating a staff and student base that allows people to move flexibly with changes in the technology. A lot of that is around communications, myth-busting, availability, licensing, all of this type of stuff, so that our staff body are as informed as they can be, allowing them to creatively adapt their practice to meet this. I've talked about the clear accessibility advantages, and our accessibility services are really looking at that. The ethics, the bias, the data privacy and the security, from a governance point of view, from a harms point of view, from the potential for things to go very wrong: that's where I'm really worried, and that is where I think really good conversations at extremely senior levels of universities and colleges need to be had, because these are not technological issues. These are institutional issues; these are about the relationships that we have with our students; these are about the consent that people can give. It's also about being able to describe that data sets are not without bias and therefore come with a health warning, and being able to teach that to students is really important as well.
One of the big concerns that I've got is that, in the UK, the legal and regulatory landscape will lag behind by definition, because the technology appears and then the landscape has to catch up. But one of the things we're thinking about, as we adopt the technology, is what that might mean for the statutory environment in which we exist, so thinking about how that might need to be either reverse-engineered or thought about as we build and buy. As for how we're thinking about overcoming the challenges and the considerations: things are moving so fast that continuous monitoring of use and impacts across our whole institution is going to be absolutely vital to this, along with having really healthy, useful conversations with staff and students, so it's not just something that happens behind closed doors. We're all going to be doing this together; it's not going to be something that's mediated in a technology team or a digital learning team. Feedback and ownership, particularly from students and from staff in their teaching practice, will, we assume, create ownership and collaboration between all of the areas that are going to be impacted by this technology. Where I think we are going to heavily invest is in an ongoing skills refresh and in creating an open learning environment where our staff share where they've done really good work and can show the technology to best effect, but also talk about when it has gone wrong and when things need to be adjusted, because those feedback loops are going to be absolutely critical to doing the technology well, rather than doing it badly and only realising later. So those tight feedback loops are really important. And finally, data and beyond: one of my observations since working at the university is that the data that I had available at Sheffield College was way more structured than what I have available to myself at the University of Sheffield, so actually I think FE's got a huge
amount of potential to achieve more economic use of AI in learning and in back-office processes, but I think that will in turn be constrained by funding and the time capacity to make that significant change, because there are smaller teams, less time and less money, obviously. But all the usual things are in play that I think we've been talking about for a really long time, and I think these are probably going to be accelerated. There is, as ever, huge potential for personalised learning, but the upfront investment in allowing that to happen for students needs to be well designed, and that's always been the case regardless of access to AI and large language models. There is clearly potential for more targeted tutoring, but even then with human intervention; leaving tutoring to an AI model feels too risky to me at the moment, though there are some specific needs where I think it could work as well. Predictive analytics has always been a thing: making predictions about students and wanting to make interventions, and I think that's always been driven by massive ethical concerns about whether the technology was driving behaviours or not. I think there is still some use to it, but a lot of the use cases that the tech companies are putting in here are still underpinned by frailties around how that would actually work in an educational institution. And then there are all the other things that we've been talking about for a long time: adaptive assessment, whether we can automate content creation; chatbots are obviously going to be completely brilliant. But the key thing for us in Sheffield, and for institutions generally, is having those healthy conversations with senior leadership, with pedagogical leadership, with staff on the ground, with students, and not just turning this into something that becomes a managerial view of how we can do things better: suddenly the technology is there, so we should start using it to save money, all of those things. This is a really, really key point for us: to start thinking about technology
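To make the predictive-analytics point concrete, here is a deliberately transparent sketch of the kind of early-warning rule an institution might prototype. All the field names and thresholds are illustrative assumptions, not a real institutional schema, and, in keeping with the ethical concern above, the flag only ever prompts a human conversation, never an automated intervention.

```python
from dataclasses import dataclass


@dataclass
class Engagement:
    """Weekly engagement signals for one student.

    The fields and thresholds below are illustrative, not a real
    institutional data model.
    """
    vle_logins: int          # visits to the virtual learning environment
    attendance_pct: float    # timetabled sessions attended, 0-100
    submissions_missed: int  # assessed work not handed in this term


def at_risk(student: Engagement) -> bool:
    """Flag a student for a follow-up conversation with a tutor.

    A transparent rule rather than a black-box model: each signal is
    inspectable, so staff can see and challenge why a student was
    flagged, which a learned model makes much harder.
    """
    signals = [
        student.vle_logins == 0,
        student.attendance_pct < 50.0,
        student.submissions_missed >= 2,
    ]
    return sum(signals) >= 2  # two or more warning signs together
```

The design choice of requiring two concurrent signals, rather than any one, is one way to reduce false positives; the right combination would need to be validated against an institution's own data and reviewed with the ethical questions discussed above in mind.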
in a really holistic way, and being able to challenge how things work on the ground, how things work in the classroom using this technology, versus a sense of imposing a way of working from a managerial point of view. So that's it from me. I hope it's been interesting; this is what I'm thinking about at the moment, so it is fairly hot off the press, and I don't know what that means for any of you, but I'm really happy to take any questions, any thoughts about whether what I've said has been handy or not.

Hello. Oh, thanks Steph, I hope it was useful. It'll probably be different in about four weeks as well. I think that's the thing I'm finding both completely amazing about this process and also utterly terrifying: every conversation that I have at the university changes my review of what we need to do and how we need to think about things. I think, hands up, the rapidity is something that all of us are going to struggle with as well. In some ways one of my mottos is fail often and fail fast, and mistakes are a huge part of learning, but the potential for mistakes to cause harm to our staff and students in this space I think is very high, which is why I do think we need to be fairly cautious as well.

So, Ian asks: how do you think strategically we can slow down the process without stifling innovation? Interestingly, I don't; I don't think the process can be slowed down, but I think it has to be out in the open and it needs to be spoken about as much as possible, because the innovation is just going to happen. As I said earlier, the genie is out of the bottle, but it needs to be done in an open environment where people then get the opportunity to iteratively challenge how people are adopting and using the technology. One of the spaces that I didn't mention, actually, where I am really worried about this, is that a lot of the products that all of us use on the open market are going to be AI
enabled, and the open dialogue with the suppliers about what that might mean and how that could work I think is going to be a really interesting risk as well: having those relationships with our suppliers as a sector, where we're not just beholden to their roadmap of "if the technology could do it, why not?" I think that's why the conversation about ethics and harms is really important.

Right: I think the learning from each other in HE and FE is really interesting, and I could probably go on for hours about the differences between HE and FE. One of the things that I have found is that the potential for innovation in FE is much higher. My institution in particular is like a large tanker, and moving it takes a lot longer than you would think. Previously, working in FE and then working in a digital environment, it was a much faster-moving environment, and the potential for innovation was higher, but the need for innovation was higher as well, because of the more constrained environment. So yeah, that was something that I found.

Caroline asks: how can innovators and developers outside of FE help with this? That is something where we do cross over at the University of Sheffield, because we've got a huge computer science department and machine learning is a big strategic priority for them. Where I think we've got some interesting conversations as a sector is actually in engaging with companies like OpenAI and the big ed tech firms, and I do think that UCISA and JISC should be brokering conversations between institutions and some of the larger tech providers, because the use cases that we have in education are really interesting and unique, even though we're not the most commercial market and the amount of money that they'll make out of us is not necessarily huge. So I do think those are conversations to have, and with the UK government as well, actually. One of the things that's really interesting is how we can influence things in the UK and how we influence
things in the US. Our potential to influence Google, Microsoft and OpenAI is probably low, but again, as a sector, if we all come together I think we've got much more heft. So I think that's it for now. I hope it was really useful. I do feel like FE is my spiritual home, so I'm really happy to come and talk to you guys, and it's nice