I wanted to start today, as we always do when we begin discussions and presentations in Australia, with an acknowledgement of country. An acknowledgement of country recognises that the lands on which we stand in Australia are the lands of the oldest living civilisation in the world. Australia's Indigenous populations go back some 65,000 to 75,000 years, and the land on which we sit has never been ceded. We always begin by acknowledging the particular country we are on, because there are hundreds of language groups across Australia. I am speaking to you from the land of the Ngunnawal and Ngambri people here in Canberra, and I pay my respects to their elders past and present. I wanted to start with this image too, because it helps articulate some of the concepts we are going to talk about in this discussion. The image you are looking at was taken here in Canberra, looking back over our National Arboretum, and it is one of those images where the longer you look, the more you pick out. Using the chat function, could you pick out anything you can spot as you study it on your screens? Right in the foreground, for example, I can see tree plantings, hundreds of them; I couldn't count them. The longer you look, the more there is. Can anyone see any man-made structures in the image, perhaps, or other signs of environmental life? If you can't find the chat function, I'll start to unpick it as we talk. As you look, you might spot a highway in the background, electrical infrastructure running across the hills, power lines, pylons, trees, and fire trails.
You might not be familiar with fire trails, but those markings across the hillside that look like orange roads allow quick access across the reserve. It is a man-made tree plantation, and the story behind it, the further you unpack it, is that you are actually looking at a landscape that was totally destroyed by bushfires in Canberra nearly 20 years ago. The National Arboretum is part of the regeneration of that landscape, so what you are looking at is part of the effort to restore it over time. What do you think the photographer is trying to capture in the photo? What are they focused on? Any guesses? The new planting? I was actually standing next to the photographer when they took this, and I know they were trying to capture the light; that is what they were interested in, dusk coming across the plantation. Just in unpacking that image, we are starting to touch on some of the concepts I'll talk about throughout: that any system comprises many intersecting systems, with technical elements, human elements, and environmental elements interweaving; that there is a sense of time passing, not in a linear fashion, but as something always being acted on; that there are regenerative elements, efforts to create more sustainable approaches to managing a landscape using a combination of human, technical, and environmental considerations; and that there is a sense of perspective, of stance, of what it is we are looking at when we set out to explore a system. We'll unpack some of those in this discussion. But first, who am I? You've heard my introduction.
I wanted to quickly express my fondness for the library sector in the UK, because in a past life, a decade ago, I worked for the International Federation of Library Associations and Institutions as their manager of digital projects. I've spent a lot of time working with libraries both in the UK and around the world, and it's really great to be back. I also wrote the book that Kersti mentioned, and I designed the board game you can see with Jeni Tennison, who is an incredible data leader in the UK. And I'm also incredibly clumsy, because if you look very closely at that image of me holding the poster for the board game, you'll see a tiny plaster cast on a finger of each hand. That is from a particular moment in the UK when I was playing netball, broke one finger, kept playing, and then broke another one, so I comically walked around for about three months with a broken finger on each hand. I really shouldn't go any further without talking a little about cybernetics. At the School of Cybernetics, we refer to a new cybernetics, and core to the way we approach cybernetics is reclaiming human and ecological considerations alongside our technological considerations. Before I even unpack that, I have to tell you a little about what cybernetics is, because if you're like me, you may never have heard the term: I had worked in AI and technology my entire career and had never encountered it before coming to the School of Cybernetics. I've found myself fascinated with its legacy, its stories, and its relevance to the way we think about AI today. The person you are looking at in this image is MIT professor Norbert Wiener. He was a mathematician and a scientist, and he is the founder of cybernetics. Cybernetics originally came out of World War II.
It was a science associated with the development of self-guided systems: missile systems, torpedoes. It was deeply interested in the science of communication and control between machines and humans: how do you deploy a machine in an environment that is itself dynamic, and how does it achieve its goals? Cybernetics predates the rise of computing and of artificial intelligence, but a number of the people associated with cybernetics at the time were the founders of artificial intelligence, and many of those ideas infused computer science. Over time, however, cybernetics ended up more focused on the technical system than on the broader human and environmental considerations surrounding it, even though a number of scholars, including noted anthropologists, were very interested in how systems-based approaches could inform the design of technologies. In the 1950s and 60s, cybernetics was, I would say, more popular than artificial intelligence. It was very much of the zeitgeist: there were international art exhibitions, technology companies, and scholars across the world who identified with the movement. Artificial intelligence eventually overtook it, and now it is starting to make a resurgence. Cybernetics helps you really look at the systems you are interested in. Does anyone recognise the image in the background here? I'm going to take the silence in the chat, because usually someone jumps in. Is it a chip? It is a kind of chip. This is a memory core, an early form of read-only memory called rope memory, and it was used in the Apollo guidance computers. So you are looking at a piece of computing infrastructure, but it's also one of those wonderful examples of how, when you zoom in on the technical system, you miss the human and environmental context that surrounds it.
Because these were the programmers of rope memory. Rope memory was primarily programmed by women; women were the original computers for our space missions and for most of our early complex operations involving computers. They translated specifications into instructions that could be read by a computer, and there is a culture, a language, and a set of terms that come out of this origin story of computing. So cybernetics gives you a different way of looking at the technical system. I should also briefly pause here and define artificial intelligence, because I'm going to talk about cybernetics and artificial intelligence throughout, and AI is another one of those terms that can mean many different things. The definition I quite often use, and like, comes from the AI Now Institute in New York: a constellation of technologies, spanning sensors, machine learning, massive quantities of data, and networked technologies, that help us undertake different kinds of tasks. What I like about that definition is that it gets you away from thinking about AI as one neat type of computing system; it is many different kinds of components put together to automate tasks or processes. Another way we often talk about AI is as processes or tasks undertaken by computers that previously only humans could undertake. I like this language too, because again it isn't trying to draw a very clear boundary around AI; it is a way of describing lots of different kinds of technologies that fit under this umbrella. And another definition someone once offered me, cheekily, is that AI is whatever a computer can't do yet, because when it can, we give it a different name, like search engine or chatbot, and it no longer sounds as mysterious as artificial intelligence.
We are really living through a period of rediscovery of ethical AI, and I say rediscovery because discussions around ethical technology have been going on for decades; in fact, Norbert Wiener is often talked about as one of the pioneers of information ethics back in the 1940s and 50s. But in the last five years we have seen the most recent waves of ethical AI, and a beautiful way of characterising them comes from Carly Kind, the director of the Ada Lovelace Institute in the UK, who describes three waves of ethical AI over the last five years. First came a wave focused on ethical principles, setting expectations of practice at a high, principles level. Then we moved into technical solutions: how could we make the outputs of a system ethical, using fair machine learning, for example? And now the conversation is moving into discussions of structural barriers to implementing ethical AI; we are starting to have conversations around justice and inequality as part of trying to address the harms and inequalities that can be caused by AI. These are all really relevant to thinking about responsible technology. What you are looking at on this slide is Wave Rock, from my home state of Western Australia, a distinctive rock formation about three hours out of Perth. The conversations we are having around ethical AI at the moment still quite often inadvertently focus in on the AI system itself: how can we make a technology ethical, either in its outcomes or in our approach to designing it? The focus is still that system. A cybernetic approach encourages you to zoom out and think of it as part of a system. You can still see Wave Rock up there in the top corner, but now you are thinking about what surrounds it and what it intersects with.
There is a lovely quote from Norbert Wiener from the early days of cybernetics, where he says that when we try to understand something, we are not trying to specify it bit by bit; rather, we are trying to answer certain questions about it which reveal its pattern. This is a sentence I want to take some time to unpack with you, because one of the tendencies we have when we talk about artificial intelligence is to exclude people from the conversation based on whether they have the technical prowess to unpack the AI system in front of them. We use language that excludes; we presume that unless you have programmed a system yourself, this isn't a conversation you can really be part of; and we reach for solutions that are technical, as if an unfair AI system can only have a technical fix. However, when we think about other kinds of engineered structures in our lives, we intuitively grasp that you can't ever know and specify every part of a system, let alone build it yourself. I'm going to use the house to help us unpack this. I've been watching a lot of Grand Designs during the last two years of various lockdowns, and it has really helped me think about the house as a way of understanding intersecting systems, intersecting kinds of expertise and people, and diverse approaches to constructing a place that people will inhabit. You have your architects, who design the proposed dwelling. You have your structural engineers, who investigate cracks and assess whether the structure is safe given the environment it sits in: what is the composition of the soil, is it clay, what is the climate like? We have a number of different kinds of experts who are part of constructing that. We have electricians, who themselves sit within a really interesting system of rules, standards, and certifications that both constrain how energy is provided to a house and are built around what people want from their energy needs. And when you think about an AI system, you have these kinds of roles starting to develop as part of the way we design it: perhaps the equivalent of your electrician, for a machine learning recommendation system, is your implementation engineer, the person who helps you plug it in, or your deployment team. You've also got, increasingly, homes overlaid with lots of other networked infrastructure, smart home devices: not just one technology, but lots of different technologies talking to each other. In many cases, what we call AI systems are themselves networks of computers talking to each other and learning from each other to infer things about their environment. We also intuitively know that in our homes we live with wildlife, and that a real-world environment surrounds us. You are currently looking at an image of the possums who live in the breeze blocks in my laundry, because the downside of having breeze blocks, which are open bricks, in Canberra is that possums will come into your home and make a bed for themselves, and there are very strict rules about how, and how far from your house, you can release a possum, so they are there for life. We live with, and our structures have to account for, these other environmental considerations. We've also got, in how we construct these dwellings, ideas of people, but people changing over time: the dwelling you construct for right now is not necessarily the dwelling its occupants will need in two, five, or ten years as they grow and evolve. And we talk about the difference between a house and a home, a structure and a shelter: a structure just placed in a context, like a dropped object, doesn't think about the intersecting systems it relates to, the dynamics of the people inside and around it, or the environment it sits within. Approaches to building homes that don't take account of these things produce the bad homes, the homes that break over time, the homes that are not built to last. And so we intrinsically get, as people who occupy homes, that we don't need to know everything about how a home is put together. We need to know the connections, the intersections of these kinds of expertise: when you have an electrical problem, you call an electrician, but you shouldn't really be building a house without the structural engineer, whose job is safety, looking at it. We don't need to understand everything about how it works; we can look for patterns, and those patterns help us make sense of whether it is safe for us, whether it is sustainable, whether it can be trusted. This is something we intuitively grasp about the way we construct our dwellings, yet we still don't necessarily apply it in our approach to AI design. We typically still focus on the system itself, with the dynamic environment within which it sits as a secondary consideration. This revealing of pattern is really core to cybernetics. There are three concepts I'll briefly take you through before we conclude that help you grasp how cybernetics would approach responsible technology design. The first is feedback. Cybernetics is very interested in communication and feedback between different kinds of systems: feedback between a recommendation algorithm and the humans acting on those recommendations, for example, and what that tells you about how the system works.
I love this quote from our founder at the School of Cybernetics, Distinguished Professor Genevieve Bell: the output of a system is its future input. Just stop and think about that for a moment. The output of a system is its future input. It really changes the way you think about, in this case, an AI product. An algorithm that predicts your likelihood of getting a job is no longer just an object out in the world making decisions; suddenly it has a role in the decisions being made. It is not just ingesting information, it is also producing information that it will then rely on to make future decisions. It is acting. We also talk in cybernetics about causality of a systemic character, moving away from a simple idea of linear causality. The image on the slide is probably very familiar to you, because it is from the protests by students, in what I think was 2020, against the automation of A-level results (the final exams you sit prior to entering university, if I have that right), where, because of the disruption of the pandemic, a predictive algorithm was used to try to predict the results a student would get. A linear view of causality would say: the past results at your particular school suggest a linear causal relationship to your future results, so if we use your past results we can predict your future results, and automating that process is therefore relatively straightforward. Cybernetics would say: actually, in a context like this, where there are so many intersecting systems with interdependencies and interrelationships, and these are real people in schools of many different characters and contexts, you are forced to accept that there are a number of causal influences on the kinds of results we get in our individual schools: where those schools are, how much funding they receive, your own socioeconomic background, and the way those results are perceived and scored. All of these need to be taken into account before you even try to predict or automate A-level results. Cybernetics would say: hang on, you shouldn't be approaching the problem this way; maybe automation of results is not going to get you the outcome you want. The third concept I wanted to briefly talk about is perspective. Another key transition point in cybernetics was the move from viewing systems from a place of objectivity to recognising that whenever we look at something, we are also part of what we are observing; we bring our own values and our own perspectives. That was the crucial difference between what were called first-order and second-order cybernetics: the move from "I can view these systems objectively because I am not part of them" to "we are always part of the systems we observe". And there is a really lovely quote, not from Norbert Wiener but from Heinz von Foerster, who was a physicist and a philosopher and one of the most prominent cyberneticians for much of the 20th century. He describes objectivity as a popular device for avoiding responsibility: when the essence of observing has been removed, the observer is reduced to a copying machine, with the notion of responsibility, as part of that observing, successfully juggled away. Second-order cybernetics not only encourages you to see that you are part of the thing you are observing, but also that there are many different perspectives, other ways of viewing the system that are not the way you see it. It also helps you think about things
like the boundaries we set around the problems we encounter with AI. Boundaries, and how we decide which information is relevant to the design process, are a really rich area of debate and reflection in cybernetics, trying to get us to think consciously about what we consider important in designing an AI system. A really lovely illustration of what happens when you don't do this, when you don't critically consider which information is pertinent to the success of an AI product and which seems irrelevant, is the story of a Google AI product that tried to predict eye disease from retinal scans. It worked really well in the lab, when it was just a computer processing large quantities of images of eye disease. When it was deployed into the real world, in a number of hospitals in Thailand to support diagnostic processes, it had a myriad of problems. Images were much more varied in the real world than in a lab setting. Network connections varied between hospitals, so in some cases nurses had to wait several minutes, sometimes hours, for a result from the diagnostic AI. The AI was learning to infer things from the images that weren't actually there. Essentially, it had been designed without any consideration of the real-world context, those dynamic considerations. If you push the boundaries of how you design such a product outward, so that you think not just about how to get really good quality data, really good compute power, and a clean, accurate algorithmic output, but also about what will happen when you put it in the world, what the effects will be, and what the dependencies are, you start to see a more sustainable and durable approach to AI design. At the School of Cybernetics, these are the kinds of challenges we grapple with every day, and we
have an incredibly diverse group of people around us to help us do that, because part of the core philosophy of cybernetics is that there is not one way of viewing a system; you need to take account of different perspectives and approaches. At the School of Cybernetics we have everyone from theatre makers to roboticists to nuclear physicists. My own background was originally in law, although I've always worked in data, standards, policy, and engineering. These varied perspectives help us consider AI design from a very grounded, dynamic vantage point, recognising that our view of the AI we are designing is not stable, and that the world it sits in is changing all the time. I wanted to leave you with a sudden thought bubble I had tonight while watching over my two-year-old in the bath, because tonight she invented a game in which she was making pancakes with a couple of bowls and a whisk, and she created a really elaborate way of guiding me through the pancake process. It reminded me of another quote from Heinz von Foerster, where he talks about two different ways of seeing the world: discoverers and inventors. Discoverers see themselves as citizens of an independent universe, something separate from themselves, whose rules, regulations, and customs they will eventually discover. An inventor, on the other hand, sees themselves as a participant in a conspiracy whose customs, rules, and regulations we are inventing. He goes on to say that when you encounter discoverers and inventors, they don't necessarily even identify themselves as such, but the distinction really helps explain the difference in how they see the world. And he points out that you don't want one or the other; both give you a richness in the way you build systems and the way you experience the world. The discoverers, he says, ultimately become your astronomers, physicists, and engineers; the inventors become your poets, your biologists, your cyberneticians. And living together isn't a problem, as long as the discoverers discover the inventors, and the inventors invent the discoverers; as long as they recognise the value in each other's approaches. So I wanted to leave you with this question: when you hear this way of viewing the world articulated through cybernetics, as a discoverer or an inventor, which category do you put yourself into, and how does that shape the way you see the AI systems you are part of, that you might design, that you purchase, or that you interact with? And with that, I just wanted to say thank you, and I hope we can move to questions.

Well, thank you so much for that. You left me pondering what I am in terms of discoverer or inventor, which is a really good question; I think I know which way I'm leaning, but I will ponder it a little further. You talked a lot there, Ellen, about systems, and what was really fascinating is that you broadened out our perspective on what that system relates to. I've often fallen into the trap of thinking of quite a small, narrow system, and I love that description of a constellation and that broader perspective. So I wonder, how is that going to move forward in the future? Where do you think we are going to end up with our systems and structures for AI? What can you see in the future?

So, at the moment we still really talk about AI as though it's one thing. Even when we talk about regulating AI, we say we are going to regulate AI, and we are going to create an omnibus piece of legislation that will regulate AI, rather than thinking of it as a constellation of different kinds of things that will be regulated at different points and for different kinds of purposes. If you use the house analogy from the presentation: we don't have a housing act, and we don't have one way of viewing a house. We have standards around energy provision and energy pricing; we have building regulations and certification; we have consumer codes and product expectations around household appliances; we have cultures and customs that shape how we use the space. And we get that; we accept that it's a complex network of things. I think ultimately we are going to head there with AI. Within five years, I'm sure we'll still be talking about AI as one omnibus object, because we'll have a few cracks at trying to contain it to something that feels discrete, but in 50 years we'll have a much more sophisticated set of bodies, professional bodies, regulatory bodies, educators; we won't be limiting AI education to computer science faculties; it'll be a much more mature and sophisticated web of expertise. If someone building your house today said, "oh, I can do it all, I'll design your house, wire it, cement it," you'd think, this is probably not going to be the best house. Yet at the moment, in engineering teams, we quite often do exactly that. There's a famous essay in computing called Programming Sucks which describes it: "I can do this role, and the security, and the programming, and the deployment, and the user research." I think we'll move away from that, or it will become a sign of a bad smell.

That's really fascinating, and I think you're right. I think it was also interesting what you were talking about there,
around those time scales, and where we are at this particular moment in time. We've just reached the point where we are really grappling with the concepts and trying to understand them, and we are all trying to make sense of it, but we haven't got enough of the picture yet, so it's going to take us that 30 to 50 years before we have a really sophisticated ecosystem. I'm not seeing any questions pop up yet; please do pop anything you want to ask in the Q&A, or if you want to join us on the panel, just pop your hand up and Matt and Melanie will move you in to ask your question. While nothing else has popped up, I was thinking about what we discussed there, about getting to this broader system with a much wider perspective. I'm curious: how do we get there? What is it going to take to deliver that, or to move in that direction, over the next 30 to 50 years?

So Genevieve, the founder of our school, has this lovely term: productive discomfort. We are going to have to get much better at coping with productive discomfort, because quite often when we talk about diverse viewpoints and perspectives in technology design, we mean "we'll get a woman engineer, or a person of colour, or someone with a disability to be an engineer," and not "we'll bring together very different disciplinary backgrounds, with different ways of valuing what's important, to collaborate on building a technology." When you all share the same mindset, even if you look quite different, you are not actually in a place of discomfort: you all share the same language and you are going to work the same way. One of the real challenges in our approaches to education and workforce planning is that it is really hard to work with people who don't think the same way you do. It's not fun all the time; in a lot of ways there is far more friction, more arguments, and things take longer to reach consensus on, because you start out not even necessarily understanding what each other is saying. And it's not the case that you can just drop very different people together and say, "now the magic will happen; I've got my physicist, my theatre maker, and my roboticist, this is going to work out well." There is a lot of culture building, expectation setting, and team process involved in building respect for these different viewpoints and getting comfortable with not necessarily understanding what is happening. That's a real cultural shift, not only in the way we educate, because we educate for specialisations, but also in the way we build our teams, where we again break them into specialisations. So I think it's part of what we all want, but it's really hard, and there's a reason we shy away from it.

No, I agree, it is really challenging to work with people from very different disciplines, where you almost have to establish a vocabulary and a baseline understanding before you can come together as a team. It reminds me of an analogy someone recently offered me, about the grit in the oyster that creates the pearl, that creates something really special. An interesting thing to think about is libraries as well. In libraries, we like being organised; we like being disciplined; we like our categorisation; we like everything in particular areas and categories, and we have particular disciplines within the library sector. So how do we, within libraries, start engaging and working with AI and machine learning? What skills do we already have that we can bring to it, and what skills do we really need to foster, so that we can be
part of this kind of wider discussion that's happening?

Yeah, that's a really interesting question, and before I answer, I wanted to pick up on the point you just made about libraries liking to categorise information and put things in particular places. Something I always say to machine learning researchers is: please go and look at libraries, because libraries have had to develop, over hundreds of years, very sophisticated ways of thinking about how we manage information: the consequences of classification, how we choose what is important and what isn't, the processes of deaccessioning. It's not the case that you just go through a library and remove things; there are very sophisticated processes. When I was at IFLA, I was blown away by the sheer volume of committees and governance mechanisms dedicated to the curation, management and preservation of information, things we really don't consider at all in the context of data management for machine learning, where it's just a big set of stuff. We're only now starting to talk about how the data we have is not necessarily representative, or is classified in particular ways that don't reflect the context in which we want it to be used. There are all of these lessons from the library sector that I really wish we would learn from in machine learning, so I feel like I'm constantly telling people to talk to libraries about information science.

But okay, the skills and expertise that are essential to starting out in AI and machine learning. I really don't think it's about doing a bit of machine learning or going off to do a course in programming; I think it's learning to identify patterns in data. Maybe this is my bias, because it's my background, but I always found data a very accessible way into machine learning and AI, and by data I really just mean the classification and curation of information. When you start to unpick what is collected, how it is collected, what formats it's in, where it has been collected and for what purposes it's being used, you start to unravel a number of practical questions about the implementation of your product, and also a lot of consequential questions about what effect it's going to have, and you start to see forward to how you're going to have to improve it over time. So I find informatics a really interesting and accessible way into AI and machine learning; it gives you the clues and the things to look for. We talk a lot about how you should learn machine learning, but basic principles of statistics are really useful too, so I would say some basic understanding of statistics. I don't mean being able to do Bayesian inference, just knowing what causality is as a concept, and the difference between causality and correlation, that kind of thing.

So Jenny's mother is asking: do you feel a focus on explainability, explainable AI, indicates a broadening of perspective, to at least include the human element? Is explainability a manifestation of the feedback we're talking about?

Yes and no. Yes to the first part of the question: I do think it demonstrates a broadening of perspective; we're starting to say we need to articulate what it is a system does. But most of the time when I see explainability being talked about, people don't think of it in a feedback sense. It's: we will be transparent about what a system does, we will disclose information. Not: what do people think our system is doing, and how will the difference between what we say our system is doing and what people are actually doing with it change our own response? I'm not seeing a lot of listening, or attempts at listening, based on those explanations, and that's ultimately what you really care about. It's one thing to say, well, if we
just tell people how an algorithm made a decision about how their exam results were calculated, that will make sense to them and it will be an explanation they instantly recognise. It's another to look at the reaction to that explanation, and at the difference between what you thought you were explaining and what they actually understood, and then go back and change your approach. So I do think it's a broadening of perspective, but I don't think we've actually got to that dynamic relationship of feedback, which is going to be really important. It's not just more to think about: a system predicting someone's exam results is not working if it is wrong and making poor calculations of results, and part of improving its performance is listening to, and receiving information from, the contexts it's operating in.

No, thanks for that. And Paul has joined us to ask a question. Hi Paul.

Hi Kersti, how are you doing? Hi Ellen. I'm from the University of Surrey in the UK, and I've been finding this really fascinating. I've sort of got a half question. We've recently launched a pan-university Institute for People-Centred Artificial Intelligence, which touches a little on what you've been speaking about, Ellen; I'll put the link in the chat if I can grab it. We've identified, all across the university, across all the different faculties, this interest in AI from different perspectives. It's very recent, it's just launched, and it's really going to be one of our main areas of research in the coming years. But I suppose my unanswered question, which is a bit like what Kersti asked earlier, is: it's all launched, it's new, we're setting it all up, but how do we as a library get involved in it? If you had seen that and were thinking, right, we've got this People-Centred Institute now, what, in terms of the library and its services, can I do to get involved? Because at the moment I'm excited by it, but I don't really know what to do about it, if you see what I mean.

Yeah. Do you have any engagement with them at the moment?

Not really, no, because it's only just been set up. I've met the director once, and I've been involved in the bigger conversations because I'm part of the research leadership group across the university, but not individually. So that's what I'm thinking about now it's launched: how do I get involved? So not directly, but indirectly, through the research leadership side.

So I would think about it in a couple of ways, and this is just me thinking through some of the challenges we have as a school: we don't use the same language, but it sounds very similar, trying to move to this human-centred approach. There are two things. One is, and I'm very passionate about libraries, as you can tell, that there is so much in library science, in the curation and collection of information, that is actually central to these conversations. It's not just the library as service provision, but the history and the culture and the lessons that our first information processes and providers can give us. So I would really see if there's a way to present to the human-centred AI institute on what it is a library does, because most people just don't understand it. They think it's a place that organises all of our licences for subscriptions, and that sometimes you might go in there for a quiet place to study; they don't quite understand this legacy and this history. The second thing is that I think libraries are really valuable, in a service-provision context, for helping navigate some of the
epistemological challenges we have in bringing diverse researchers together: they don't necessarily realise that they're not using the same terminology, or sometimes you're not even aware of what sociology is as a discipline, and what its most-used resources are within a particular collection. Even in my own context, I'd love to be able to say, okay, I work with an anthropologist; is there someone who can help me make sense of who the seminal, I shouldn't use that term, we hate using it here because it's a very gendered term, who the influential scholars in this space are, so that if I've done a little reading of them, I'll understand what my colleague means when they use a term like "sociotechnical imaginary"?

Brilliant, thank you very much, that's really helpful.

No, and someone has just highlighted that this has been one of the clearest talks explaining what AI and cybernetics are. One of the things that's really struck me is that broader system, that constant... I'm going to use that constellation phrase if nothing else, and I think we need to change the way we talk about AI. But one of the other things we have within our sector is our knowledge and understanding of data. I was very interested when you were describing the library there: you talked about us organising licences for our content, and you've already moved and shifted away from the building with the books where there is a quiet study space, which is an interesting shift. I think you're also highlighting that we need to change further, and do more digital shifting, for us to be part of these conversations, and for people to naturally start engaging with the library as a place that can help facilitate what happens in AI in its broadest sense. So you've given us a few thoughts and suggestions; if there were any other pointers you would give us, to enable us as libraries to continue engaging in this debate and conversation, what would you say?

So I would definitely tell you to check out the work of scholars like Safiya Noble, the author of Algorithms of Oppression. She has just been awarded a MacArthur grant in the United States for her contributions to AI, and the reason I immediately thought of her is that she also has a background in information science, so she has this really wonderful way of positioning the challenges we have with algorithmic design alongside some of the lessons we've learned from information science. She's fantastic. There's also a documentary on Netflix that Joy Buolamwini and Safiya Noble and a few other leading intellectuals in this space are part of; I'm forgetting the name, but I'll give it to you afterwards, or someone in the chat might drop it in. It's a great documentary and a really accessible way into some of these topics. The other person I'd recommend, just because they're so accessible and engaging, more specifically on cybernetics, is Professor Genevieve Bell, her writing and speeches on this issue. She just gave the Garran Oration in Australia this year, which places cybernetics in the very changed context we're in now with the pandemic, and she's also written some lovely essays for the MIT Technology Review and for Griffith Review exploring the new cybernetics. So they would be my two.

Excellent. And if we were thinking about doing something as libraries, what would you get us to do to take us forward in this space?

So, I know I've said this already, I'm really passionate about libraries, but I really think you should be approaching your centres for AI and computer science and saying: can we do a guest lecture or a keynote on what a library is? And I don't mean a presentation of here's where you can find the library and here are the services we offer, but: what is your role, history and function? Because that's when the light-bulb moments go off. We're going to end up reinventing a lot of processes and lessons that have already been learned in information science, and I would just love us not to do that. When we talk about data trusts, I say, that sounds like we're talking about a library, we're just giving it a different name.

Yeah, and I think we as libraries need to shout more about the things we do these days, and maybe overcome the stereotypes people still have in their minds when you say the word library: buildings, books, study spaces. Maybe licensed content is beginning to merge into that, but we need to shout about all the other things we do: how we deal with research data, long-term preservation, accessibility, discoverability, deaccessioning, all of that. So I think that's a call for us all to shout more about what we do, in the broadest terms, and make everyone more aware of it.