[Welsh-language introduction, not fully captured in the transcript.] Caerys is also a keen scuba diver with experience in military demolitions, and I'm hoping, though probably in vain, that she'll weave these into her talk today. Caerys will speak on the paradox of emerging tech. Welcome, Caerys.

So thank you for that warm welcome. As Robin said, I work with a team at a company called Postshift. We're a bespoke digital consultancy, and much as my day-to-day work as head of consulting is interesting, actually my favourite part of that job is the sometimes slightly crazy questions I get asked in the course of consulting on all kinds of emerging technologies. I'll try to share some of those stories with you today. But my interest in emerging tech comes from years of standing between IT teams and the business, trying to help them speak each other's languages. And I think that the digital age we are experiencing at the moment makes that translation job all the more interesting, because of course digital moves so quickly. It really brings into sharp relief the level of change that we need across our organisations to make the most of these technologies if we are to not only survive but thrive in this new and crazy way of working that we find ourselves living through. The thing about emerging technology is that it's something we can't ignore, and in fact what is emerging technology for many organisations is actually run-of-the-mill for others.
And so while we might not be at this level of robot butlers with knives standing at our door quite yet, we are in a place where at work emerging technology is changing everything about how we do our jobs. Everything from the employee experience all the way through to the customer experience, from AI sorting CVs during the hiring process to the algorithmic sorting and filtering of content in the digital workplace. We know that the introduction of this technology causes all sorts of knock-on effects. But in most organisations emerging technology comes with an additional focus, whether that's digital transformation, increasing levels of agility, or just a clear future-focused strategy oriented around technology. And for some people that can create anxiety, because not everybody is comfortable in that emerging tech environment. What I want to do today is share some lessons that I've learned through my consulting with firms about how you, even if you are not completely conversant in all of the emerging technology ins and outs, can still be part of that process, and how you can help your organisation become ready for the kind of changes that we're talking about. First of all, do you remember your first encounter with computers? I certainly do. In fact, the computer on the right there was my first computer. It was a BBC Micro. I still have it. It's upstairs and it still works, believe it or not. I maintain it will survive an apocalypse. But what's really important about this is some research that Microsoft did two years ago, which showed that those formative experiences that we have with technology inform the way we still approach technology now. Whether that was a fear-based approach to tech, or it was pure excitement at being able to be hands-on with this new exciting box that we found in our classrooms or in our workplaces, everything about those interactions really does shape the way we approach technology now.
It's also true to say that during those early days, the frequency with which you could interact with technology was also important. My father bought me this BBC Micro on my fourth birthday, and by the age of seven I was able to programme it. To me, as a slightly socially anxious child, it was an amazing thing to interact with something I could direct to do certain things and that would react in certain ways with predictability. I loved it, it was amazing, and it really gave me a way to engage with this new and exciting technology, and that's carried over into the way I view technology now. But I think for many, it's actually more of a challenge in the organisation, where we see this culture of learned helplessness: at home we might be perfectly happy to update apps on our phones, we might be happy to do some basic troubleshooting of our laptops, but as soon as we come into the organisation, we find ourselves in a position where we become hands-off. Digital is not our job; IT is not our job. And so this years-long struggle really needs to change if we are ever to truly embrace the future of digital and emerging technologies within our business. For me, one of the most challenging things about where we find ourselves as organisations at the moment is not simply the kind of market forces we've been used to in prior years. It's no longer just about a new market composition or new products and services launching in our spaces. It's actually about the combinatorial effect of all of these things hitting at once. Take this last year, for example: it hasn't just been that COVID has been a big market force changing the way that we work. We've also seen, as a result of that, changes in our behaviour as customers, changes in our behaviour as employees, and changes in the market composition. We've seen many organisations fall foul of the latest market forces, unable to survive, and therefore disappear.
So all of these things coming together, creating these really strong combinatorial market dynamics, means that everything is moving much faster than ever before, and not always in predictable ways. This makes it hard for us to keep track of what's going on in our own sector, but also across industry boundaries. That cross-industry view is really important, because we don't always know where the threat or the opportunity is coming from. It can come from many different places all at once. Of course, there's no shortage of technology finding its way to our doors, whether it's in the consumer world or within the enterprise. Gartner, every year, very helpfully tracks for us what's up and coming, what's going through its trough of disillusionment, and what's moving on to the plateau of productivity. But the discussion around all of these types of technologies tends to happen only with a small group of technical experts. So we talk about AI with a small team of those who understand what AI can do for the organisation, without bringing to the table everybody else who could also make a contribution. That's driven by two things. The first is that interest in emerging tech tends to bubble up in one part of the organisation that has a need to address a problem, so they start looking at it and talking about emerging technology. The other reason, and this comes back to the discussion earlier around learned helplessness, is fear. We don't know what we could contribute, and so we don't try. But one of the most important things for emerging technologies is that they come into our organisations in a way which encourages open and transparent conversation. Without those open and transparent conversations, we could end up with implementations which do not serve our business in the long term. And we all know that the more technology you bring in, the more change you bring in, the more fatigued we can get.
So it's important that these are the right decisions for our organisation, and so everybody needs a voice. These kinds of isolated responses to technical opportunities appear in all kinds of organisations which are already heavily siloed. Siloed by their organisational structure, siloed by their ways of working, and in this year actually siloed because it is much easier for them to respond to local circumstances than to try to take a global view. In times of stress and crisis, we tend to draw ourselves in, to limit how far we're reaching out, and to work only with the teams we are affected by the most. So in an organisation already siloed, introducing lots of new technologies in different parts of the business makes it even harder to communicate across those silos. If multiple teams are looking at the same kind of tools, maybe we end up with two or three of the same type of thing, and if they have slightly different requirements, at no point does everybody come together to hash those out and figure out the priorities. To give you an example: at one organisation I've been working with recently, an accounting firm, a quick audit found that seven departments had each gone out and purchased subscriptions to social media monitoring platforms. Why seven? Who could tell? But what we had to do was rationalise those, and it was a very quick £500,000 saving for that organisation to rationalise their IT base down to a single tool. So this kind of cross-silo communication is incredibly important as we start dealing with more and more emerging technologies. In addition, of course, as Joe Bonamassa would remind us, just because you can doesn't mean you should. And this is incredibly important when it comes to emerging technologies.
One of the biggest problems we see, and we see it most clearly out in the consumer marketplace, of course, is that software developers don't always know what's best for us as a whole organisation, as a team, or as a society. And so it's really important that the people who are coming to the table are not only concerned with what you can do with this emerging technology, but are equally focused on what you should do. For me it's a crucial question that every team should ask of every piece of technology brought into the organisation. It's also important that we try to project that conversation into the future a little, because just because it's the right thing now doesn't mean it will be the right thing later. To give you an example of the kind of challenges you might face, I mentioned earlier the application of AI to the initial sorting of CVs that happens to get to a shortlist. This is something readily available on the market, anybody can purchase it, and many HR departments have indeed moved to this kind of AI sorting of applicants at the early stages. But one of the challenges we've seen in some of those organisations is that they're no longer hiring for diversity; they are hiring based only on what they know they need. And what that means is that some of their most talented applicants are getting removed from the process early on in the sorting, without any explanation. That's largely because the data sets the AI is being trained on have a number of biases or flaws in them, or we're just not looking at a broad enough pool of talent. So as you can see, it's not always as straightforward as finding a good application for an emerging piece of technology. It's also a question of making sure that the outcome is actually what you intended when you started. And so for me, this is a key role that the whole organisation has to play.
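To make that failure mode concrete, here is a minimal sketch, with entirely invented attributes and applicants, of how a naive screener that scores CVs by similarity to past hires can quietly filter out strong candidates from unfamiliar backgrounds:

```python
from collections import Counter

def train_profile(past_hires):
    """Build a naive 'ideal candidate' profile: the attributes
    most common among historical hires."""
    counts = Counter()
    for hire in past_hires:
        counts.update(hire["attributes"])
    # Keep any attribute shared by at least half of past hires.
    threshold = len(past_hires) / 2
    return {attr for attr, n in counts.items() if n >= threshold}

def screen(applicants, profile, cutoff=0.5):
    """Shortlist applicants whose attributes overlap the learned profile."""
    shortlist = []
    for a in applicants:
        overlap = len(profile & a["attributes"]) / len(profile)
        if overlap >= cutoff:
            shortlist.append(a["name"])
    return shortlist

# Historical hires all share a narrow background (hypothetical data).
past_hires = [
    {"attributes": {"cs_degree", "big_firm", "capital_city"}},
    {"attributes": {"cs_degree", "big_firm", "capital_city"}},
    {"attributes": {"cs_degree", "big_firm"}},
]

applicants = [
    {"name": "A", "attributes": {"cs_degree", "big_firm", "capital_city"}},
    # Strong candidate with a different background: silently rejected.
    {"name": "B", "attributes": {"physics_degree", "startup", "open_source"}},
]

profile = train_profile(past_hires)
print(screen(applicants, profile))  # candidate B never reaches a human
```

Nothing here is malicious; the bias is inherited entirely from the narrowness of the training data, which is exactly the point made above.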
We can't, as a small group of like-minded emerging technologists, always come up with the right questions to ask. We're quite biased already: we love technology, we love to play with it, we love to put it to new uses and make sure we're getting the maximum value. But that doesn't always create the best outcomes for the rest of the organisation. One of the biggest conversations that I encourage everybody to have around emerging technology, whether it's as simple as some new digital workplace technology such as Office 365, or as complex as that AI categorising new job applicants, is to think about two things. First of all, any ethical issues. This is particularly important if the decisions that come out of using that emerging technology are going to affect people's chances at anything. So in that job hiring area, for me there's a real ethical issue around using a black-box AI to do that sorting, because the outcome is so important to real humans who have applied for your jobs and deserve every chance to prove their worth to you. But it's not only ethical issues that should concern us. We also need to think about what the emerging technology is going to do to our employee experience. Is it going to change the way people are working? Is it going to change the way people interact with each other, or with products and services, and do we therefore need to design for that? So it's not purely ethics; it's also about designing in the right things. If you think about nudge theory, the idea that you can use communications and systems to nudge people towards better behaviour, remember this can also be used in reverse: you can nudge people towards a particular outcome that might not be beneficial. So always think about those kinds of things as well. Bad design decisions can be every bit as damaging for an organisation as the ethical issues around emerging technology.
Finally, the one thing that really bothers me in many digital transformation conversations is the focus on everything needing to be an app. For many organisations, replacing some of their high-value products and services with a simple app is actually incredibly dangerous. Now, I understand the drive to do this. Everybody loves an app. Everybody loves something they can download and use without human intervention. However, what this can do is really devalue the product and service that you offer. And so sometimes what's most important is to provide a good user experience and a learning experience whilst keeping our human-delivered services in the high-value areas, without hiding them behind an app. So it doesn't always have to be as clear-cut as creating that app, and I think that's another thing to keep in mind when we're talking about emerging tech. And for me, the biggest challenge that organisations face right now is not in emerging technology, but actually in the business processes and the management capabilities of our organisations, which are unable to cope with those 21st-century internet technologies. As probably one of my favourite management gurus, Gary Hamel, said: right now your company is using 21st-century internet technology and business processes with 20th-century management, all built atop 19th-century management principles. And so for those of us who are not necessarily embedded in emerging tech but are interested in being involved, one of the best ways we can do that is to help our organisation rethink its approach to management, leadership and ways of working. I'll talk a little about how you can get involved in that later on. But I wanted to give you an example first of where I've seen this in action. I'm very lucky to get to work with one of the biggest companies in the world, the Robert Bosch Group.
They're 480,000 people across the world, and they allow me to run some pretty crazy experiments with their ways of working and emerging technologies with different teams. One of the first times that I got to really dig in deep on a project, they'd been asked by Tesla to pitch for a particular component of the S3. And Bosch had done a great job, as they always do, of creating a really amazing engineering solution to the problem that Tesla had posed. They pitched it, and Tesla loved it. And when it came to signing the contract, Bosch said, okay, that's fine, we'll deliver that in three years for you. And the guys from Tesla kind of laughed and went, well, actually, we meant six months. This led to a lot of soul-searching in Bosch, because one of the things Bosch is known for is high levels of quality and safety, and being a very German company that follows the process. And so the question they asked me was: how can we enable this team to deliver the same amount of work in six months as they would normally do in three years? What is it that's causing the barrier? And the truth of the matter was, none of it was to do with technology; all of it was to do with the organisational structure, ways of working and the culture that they had at the time. So the leadership team at Bosch allowed us to take the team entirely out of the normal structure of the organisation and surround them with what we called concierge services, which meant that if they needed to access a service at Bosch such as HR, finance or procurement, they would have a single named, designated person, and that person would help them through the process of accessing that service. They would have access to whatever workplace technology they needed to collaborate and communicate well. And they were given full control over how to spend the budget allocated to this work. Over the first three months, the team made radical changes to the way that they worked.
Every week, they were changing and tweaking based on the data they could see of how good their collaboration was, how much deep-focus time each team member was getting, how the communication was flowing and where decisions were being made. And by having that level of control over how they were working and what tools they were using, they were actually able to meet Tesla's deadline with the same levels of quality and safety as they normally would. That was really my first big proof that by changing the conditions, the structure, the ways of working and the culture, this company could really do some pretty amazing things. And to me, it just goes to show that tech is the easy part of this equation. Implementing technology is not difficult. Getting it adopted, getting it embedded, getting it to work really well with your ways of working and leadership and culture: that's the hard work of digital transformation, of any kind of digital transformation at all. This was further proved to me when I got to run yet another wonderful project with four German manufacturing giants: Continental, Daimler, Bosch and Mahle. Now, these guys are notorious, well, notorious is the wrong word, they're famous, for sharing learnings, and they get together regularly, at least once every month, to share some of the challenges that they are facing. Each of these companies at the time was implementing some decision-making AI into some of its teams. These teams were chosen because they were all high-performing and were really creating incredible work prior to the project. Unfortunately, each of the teams experienced a real drop-off in productivity after the introduction of this decision-making AI, and none of the teams could really put their finger on why. So during a six-month-long research project, I got to work with and interview each of these teams on a fairly regular basis to uncover some of the reasons why they were struggling.
And they were struggling because of a collapse in psychological safety. They no longer understood how the decisions were being made. They could no longer examine how those decisions were being made in the first place, or how they themselves were being affected by them. And it made me realise that having a black-box AI make decisions for a high-performing team was never really going to work for these guys. They are, after all, engineers. They like to know how; the 'how' is most important to them. And so what we were able to do was explore some new emerging technology that was coming out of DARPA in the US at the time, called explainable AI. It allowed you to open the black box of artificial intelligence and see how the decisions were being made. Now, don't get me wrong, it wasn't an immediate fix, but over time the engineers could get much more comfortable, because if they disagreed with the way the AI was making a decision, they were able to challenge it. And to me this is a much healthier way for us to interact with emerging technology. This isn't a replacement for the human work that we do. It's an augmentation, an enhancement of what we do at work. And for me, that started to create the right kind of culture for these people. So where do we begin? As I said, my projects tend to put me into teams who are very comfortable with technology but not necessarily so comfortable with some of the human factors. So that, to be honest, is where I often start. I have to ask the team: what is the actual capability that you are gaining by using this emerging technology? Not the bells and whistles, not the exciting stuff that you get to do, but the actual organisational capability that you are building. That's particularly important when it comes to big things like AI, blockchain and others. What is the actual capability you wish to give to the organisation? Because it's not just AI; AI isn't enough.
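The contrast between the two modes can be sketched in a few lines. Assuming a simple weighted-factor decision (all factor names and weights here are invented for illustration), the black-box view returns only a verdict, while the explainable view surfaces each factor's contribution, so an engineer who disagrees has something concrete to challenge:

```python
def decide(features, weights, threshold=1.0):
    """Black-box view: score the weighted factors, return only a verdict."""
    score = sum(weights[name] * value for name, value in features.items())
    return score >= threshold

def explain(features, weights):
    """Glass-box view: expose each factor's contribution, largest first,
    so the reasoning behind the verdict can be inspected and challenged."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Hypothetical decision: should this component design be flagged for review?
weights = {"novel_material": 0.9, "supplier_delay": 0.4, "prior_failures": 1.5}
features = {"novel_material": 1, "supplier_delay": 1, "prior_failures": 0}

verdict = decide(features, weights)   # just True/False: hard to trust
reasons = explain(features, weights)  # ranked contributions: can be debated
print(verdict, reasons)
```

This is of course far simpler than the DARPA work mentioned above, but the principle is the same: the decision stays the same, and what changes is whether the people affected by it can see and question its inputs.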
And what I often encourage them to do is to define that capability need in terms of what the software world calls an agile user story. I do this because agile user stories, as used in agile software development, are a really neat way to wrap up a requirement. So, as you can see here, you might describe it as: as a sales organisation, we need to offer an integrated experience, not front ends to a set of separate silos working apart, in order to grow key account value. In other words, as a sales organisation we need one face to the customer, so that every customer knows where to come when they want to request more services. It's a really simple way to describe some very complicated technological needs, but it's easy to understand. It's clear business language, not technology language, and everybody around the table is able to understand what you're trying to achieve. It's also easy to judge whether or not you're going to achieve it, because you've got something in very plain business language to measure it against. The second thing I love to do is to pull out what we call the value proposition canvas. This is a Creative Commons canvas, so everybody can use it, and it was created by the guys at Strategyzer. It's a really neat way to start unpacking the true value of what is being provided. On the left-hand side we unpack the product itself: what does this piece of technology give us in terms of things you can do, the gains it creates and the pains it relieves? On the right-hand side we unpack what the customer is trying to do at this point: what are the customer jobs they're trying to get done, what are the gains they're looking for, and what pains are they experiencing? And what we're looking for is whether these things match, or whether we actually have a mismatch and are therefore not looking at the right product or service. Again, this is not a technical exercise.
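That matching exercise can be reduced, very roughly, to a set comparison. Here is a small sketch (the product and customer entries are invented for illustration) scoring how much of what the customer actually needs the product addresses:

```python
def fit(product, customer):
    """Compare a product's pain relievers / gain creators against the
    customer's actual pains and gains, value-proposition-canvas style."""
    matched_pains = product["pain_relievers"] & customer["pains"]
    matched_gains = product["gain_creators"] & customer["gains"]
    addressed = len(matched_pains) + len(matched_gains)
    total = len(customer["pains"]) + len(customer["gains"])
    return addressed / total  # share of customer needs the product meets

product = {
    "pain_relievers": {"manual data entry", "duplicate records"},
    "gain_creators": {"single customer view"},
}
customer = {
    "pains": {"manual data entry", "slow reporting"},
    "gains": {"single customer view", "faster onboarding"},
}

print(f"fit score: {fit(product, customer):.2f}")  # 0.50: a partial match
```

The real canvas is richer than a score, of course; the point of the sketch is only that the comparison runs on plain business language, not on technical features.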
This is all about knowing your customer, knowing the person who is going to be using this technology. And that is crucial when trying to look at the future of emerging technology in our organisations. When I'm unpacking a capability, I like to split it down into how we create it, the real nuts and bolts. Because building a capability isn't just about the technology that you're putting in place. It's about the data that you need to make that capability work. It's about the services or processes that you need to wrap around that technology to add value to your organisation. It's about the skills and the people you need to make it work. And so for me, by thinking further than the technology, we're able to bring many more people to the table. And that, to me, is the aim of this game. We need as many people as possible to be discussing the impacts of emerging technology. We need true diversity of thinking to be able to make emerging technology work well for our organisations. And as I've said many times so far, this is not about the technology. This is about the structure, the practices, the ways of working, the culture and the leadership. This table comes from some research that I was able to do with some of the most high-performing digital-first organisations around: guys like Google, Spotify, Automattic and others. And these are some of the attributes that they were looking to build and to sustain in each of these areas. So, for example, they were looking for their organisations to be very networked, to be laterally connected rather than relying on the hierarchy to cascade information, and to be very resilient. They were looking for ways of working that encompass data-driven decision making and data-driven innovation. They were looking for organisations that are highly collaborative. They were looking to be agile and iterative.
So these kinds of attributes are ones that we can build even if we're not working right now on emerging technology. And so one of the imperatives that I feel for each one of us is to help our organisation become ready for emerging technologies and to be ready to join the conversation. One of the ways you can do that is to start to practise some of these attributes in your day-to-day work. To be passionate and purposeful about the work that you do, which I know librarians and information professionals absolutely are, and to really show that to others as you start engaging in the conversation around emerging tech. To be innovative. To be customer-driven, to really know who your customer is and to be able to advocate on their behalf. To be curious, to always be looking for ways that you can try out emerging technologies, to maybe, for example, download a personal AI bot and have some interactions with it. To be truly inclusive in the way that you're thinking about your teams and about leadership. Leadership is no longer a hierarchical position with a capital L that you get awarded on a certain day, but instead about building influence throughout the organisation. All of these things will help you be prepared for the future discussions around emerging technologies. And as I say, this is a really good place to start learning to work agile. Not Agile with a big A, the methodology dictated to us by the software community, but just adopting some of those iterative ways of working that can help you move faster. I also like to think in three levels of emerging technologies, because, as I say, emerging tech doesn't mean the same thing to all parts of the organisation. So first of all, we might think about technology that optimises, that makes today's ways of working a little bit better.
It removes some of the duplication, it makes the processes flow a bit more easily, but it's just a basic upgrade on the way that we're working. I might consider, for example, social media monitoring platforms to be optimisation, whereas others might consider them to be more innovative; it all depends on your point of view. But for me, optimisation is something that we should all be doing every day. We should all be looking to optimise the way that we work, remove the waste, remove the duplication, and make more of the investments we've already made. We then have transformation. Transformation is where we use technology to do something fundamentally differently. This can be a really exciting area to get involved in, but I would argue that you shouldn't spend as much time in transformation as you do in optimisation. You want to spend a fair amount of your time optimising what you've got, and then some of your time transforming some of those products and services using technology. The final level, of course, is innovation: mixing, remixing and combining capabilities to create brand-new inventions. We really do not get to do this very often. It's a really small part of the transformation puzzle, but it's an important one. If we can come up with brand-new services by combining and recombining our capabilities, then we should absolutely take the opportunity to do that. But for me, optimisation and transformation are the first two steps on the ladder, so don't feel the need to jump straight in at that innovative invention level to get involved in emerging technologies. Work to get to a place where change becomes routine. This is going to make everything very much easier going forward, because I can promise you that the rate of change is not going to slow down. That's almost a cliché to say, but nonetheless it's worth stating. Change needs to become part of our routine. I like to take some of my inspiration from the DevOps community.
DevOps is a relatively new discipline from the IT world which brings together both development and operations into a single infinity loop, as you can see here. It's not so much DevOps as a practice, it's actually DevOps as a culture that I think is really interesting. Always looking for the 1% improvement. Always looking to automate a small part of the job. Always looking to improve a small part of the job. It's almost a lean methodology running in the background, that kind of continuous improvement, but really working end to end rather than in mini silos inside your own ways of working. So really working on change becoming part of your everyday routine will help with much of the emerging technology discussion. And then finally, because the agile manifesto was written very much for software, we have to rethink a little what it is we're looking for from our leadership and management. I think it's really important that as we move into an emerging technology space, we focus on these five things. We focus on the people over the process. We focus on the dynamics over the documents: the interactions, the conversations and the flows of information over the static documentation. We focus on collaboration across silos rather than cascading information up and down the hierarchy. We focus on being adaptive over prescriptive. This is very important when working with tech. We can constantly make small adaptations to our own ways of working. We don't need to prescribe to our colleagues how they do their work; we each know best how to get our own work done. And then finally, the most important for me: leadership over management. We really need people who are willing to influence and to lead in these complicated times. Not because they are technology experts, but because they understand their organisation, and they understand a bit about the structures and the ways of working, the culture and the customers that they're trying to serve.
Those people are more valuable to me than the technical experts in the discussions of emerging technologies. Agile transformation cannot be an island. It must be something the whole organisation engages in and continues to help move forward. Having conversations with every single department really helps bubble up some of those broken processes that need fixing, some of those daily bugbears that we can help with. And some of the ideas coming out of other parts of your organisation will help you put together new ideas around some of those capabilities. So really, we have to demand more from our leaders, unfortunately: to start joining up some of those emerging technology efforts, focusing on what we should do, not what we can do, and always bearing in mind that the structure, the culture and the ways of working of our organisation are as important to get right as the actual emerging technology that you choose. Thank you very much.

Well, thank you. That was a really fascinating introduction. Absolutely fascinating. It was really interesting to have that balance: I've seen a model where AI and tech replaces human beings, and you've given a completely different perspective on that. It's about how we employ, exploit and use it to best advantage, and the human aspect is absolutely critical to it. Really interesting. Thank you very much. It's great context for the rest of the conference, but also for the next presentation, I think, where Massoud will focus down a little more on areas familiar to us. Thank you, and do keep the questions and comments coming in the chat function, and we'll move straight to Massoud now. So our second keynote is Massoud Cokar, who will talk on living in the age of invisible intelligence.
Many in the audience will know Masud is currently Director of Library and Archives at the University of York, but will shortly take up post as University Librarian and Keeper of the Brotherton Collection at the University of Leeds. He's currently board lead for RLUK's Digital Scholarship Network, where he's applied his interest in, and experience of, digital leadership and innovation to great effect. He also becomes RLUK Vice-Chair later this week. So over to you, Masud. What a fantastic introduction to the conference by Caerys, who's really set the scene for this presentation, and touched on something that I am passionate about as well. Before delving into the presentation itself, I really want to take a minute to say a massive thank you to the RLUK community. Six years ago, I was standing at my very first RLUK conference doing a lightning talk, feeling a mixture of excitement and anxiety, if I'm honest, and the community was extremely welcoming, really supportive, and really allowed me to flourish. And I hope that people who are attending the RLUK conference for the first time, or have been here before, feel the same way. So thank you hugely to the community itself. So today I'm going to talk about living in the age of invisible intelligence, and it's something that is going to happen. As Caerys was also mentioning, the change is not going to stop. In fact, it's just going to grow at a very significant pace. And over time, technology in particular is going to become more and more invisible. And that has a vital impact on how we perceive this world, how we work in this world, but also how we live in this world. And I hope that by the end of the session, you get some food for thought on what that means for academic libraries and archives and special collections as well. So before we look to the future, let's look at the past, and at what it means in terms of the industrial revolutions. 
The main thing that I want to highlight here is that the pace at which technology and industry is moving is significant. If you look at the first, second and third industrial revolutions, roughly speaking there have been about 100 years in between each major industrial shift. But when you start thinking about the shift from Industry 3.0 to Industry 4.0, you recognise that that gap shrank by half: it's taken only about 50 years for Industry 4.0 to kick off. And many experts at the moment say that Industry 5.0, which would be primarily about hyper-automation, or a complete seeping of artificial intelligence into our daily lives, is no further than about 30 to 35 years in the future. So the pace is significant, and the change within that is significant. But before we move on to that, let's look at what Industry 3.0 has done to us. The way I like to phrase it is that Industry 3.0 has made us technology-augmented humans. There's a lot of technology in our lives now. We use it a lot. It really helps us. But fundamentally, if you ignore the last five or so years, the technology has been pretty non-intelligent. It's been there to support us in achieving things that we are not good at. It computes large numbers. It allows us to understand where to go next in terms of directions. It's fundamentally there for doing things that we as humans can't keep in our brains. So it allows us to visualise large amounts of data. It allows us to navigate through a GPS device in our car. But what it can't do really well is this: if someone dug a hole in the road two hours before you're driving, it won't actually be able to tell you that that's the case. There is a level of intelligence that's always needed on top of that. But moving forward, particularly in Industry 4.0, we are moving into a domain of AI-augmented humans. 
And what AI-augmented humans means is basically the thing that Caerys was talking about: that artificial intelligence is going to augment the things that we do. In 2019, Gartner released a trend insight report where they say the way we experience a digitally enabled world is changing: through 2024, AI will seep into every software, service and physical asset, driving a new level of automation. And that automation will free us up for more creative activities, thus making us AI-augmented humans. There are a couple of things I would like to pick up in there. One, it's not significantly changing the digital world itself. It's changing the way we experience that digital world. So we are already living in a digital world, and it's the change within that digital world that's going to automate and carry us to the next industrial revolution. And it links back to RLUK's digital shift strand as well, which is both about moving from analogue to digital, but also understanding the digital shifts within the digital domain itself. The other thing I would like to pick up is the timeline. The timeline is 2024, which I don't think is going to hold true. I think it's going to be about 2028, but that's not so far off in the future either. And I also would like to quote from a fantastic book called Superminds: How Hyperconnectivity is Changing the Way We Solve Problems, by Thomas Malone. In this book, Thomas says that over time the boundaries between what people do and what machines do will keep changing, but at any given time humans will do the things the machines can't. The reason I find this fascinating is because I believe in it. But I also find that the boundaries between what humans can do and what machines can do are increasingly getting blurred. Still very wide apart, but the pace at which they are blurring is quite exponential and interesting. 
Before we delve too much into what this means, let's talk about what Gartner is referring to when they say AI will seep into every software, service and physical asset by 2024. That sounds rather scary. What Gartner is really talking about is artificial narrow intelligence, which is the ability to achieve specific goals effectively in a given environment. Pretty much 99% of the AI that you'll see around you at this time, machine learning algorithms included, is artificial narrow intelligence. These systems are there to do one task, or a set of related tasks, really well within a given environment. And that's the form of AI that we deal with at this time. But there are other forms of AI. If you really think about what most humans actually want to see in the future, what they are interested in is artificial general intelligence, which is the ability to achieve a wide range of different goals in different environments and contexts. And that is what humans are very good at. We can be put in a new environment and we can deal with that environment, with different tasks, in different ways. And that's what artificial general intelligence aims for. Now, most experts think that this is multiple decades away. But what's very interesting is that since the 1950s, when the artificial intelligence domain was kicking off, experts have always believed that artificial general intelligence is about 20 to 30 years in the future. So the actual achievement of this may remain a long way in the future as well. However, if you speak to Elon Musk about this, he would say it's about 10 years in the future. So I think there are far, far different views from industry practitioners versus experts in academia on this. And of course, the third form is artificial super intelligence, which is the ability to greatly exceed the cognitive performance of humans in virtually all domains of interest. 
These are the doomsday scenarios, or the kind of movies and TV shows that depict this: Person of Interest, Minority Report, The Matrix, et cetera. And the question I'll ask is: do we really need this? Right. Caerys also mentioned this: that the most important thing at this time is not necessarily artificial intelligence, but augmented intelligence, which is the human-centred partnership model of people and AI working together to enhance cognitive performance, including learning, decision making and new experiences. And this is crucial, because it allows a level of trust to be established between humans and artificial intelligence: when they can work together, when things are not entirely automated, and when humans can see a role for themselves in that discussion and in that process. And there are a multitude of examples of this in industry at the moment. All the way from predictive trends, such as which movie should I watch next on Netflix, or show me a random TV show that I would like based on my preferences, to the Amazon book recommendation system or the TikTok video recommendation system, which I think is one of the most fascinating black-box algorithms out there, because it's extremely addictive as well. Moving from trends to politics, elections and customer services: generally, who are the undecided voters and what can we do about them? And a whole new range of AI is kicking in here, called emotional AI or emotion AI, which is really about detecting human emotions, and the difference between what people would say in a group setting because of peer pressure and what they would actually feel in those situations, with huge concerns around privacy, security, feelings and all the morals that go with that. 
There are lots of things in gaming about artificially recreating the environments that you want to be in, to give you as real an experience of gaming as possible; then self-driving cars, autopiloting systems, and the financial and biotechnology sectors as well, in terms of fraud prevention. In biotechnology, in fact, there's a fascinating project running at the moment using machine learning algorithms to predict where the next COVID strain might emerge from, and what the factors in that are. But the one I would like to pick up on is medical analysis, or the medical domain in general, where artificial intelligence has significantly transformed the field. The example I would like to pick up on is a 2020 Nature article where a machine learning algorithm was shown to be as effective as human radiologists in detecting breast cancer from X-ray images. It was a joint project between Google Health, DeepMind, Imperial College, the NHS and Northwestern University in the US, and what they were able to do was design and train an artificial intelligence model on mammogram images from about 29,000 women. What's fascinating about this is that the algorithm was as accurate as an expert human radiologist, and that has huge implications: for worker productivity, for intent and causation models, and for what happens if something goes wrong. It also has implications for how the NHS, for example, might work in the future, because at the moment the NHS has a two-check mechanism for cancer detection: if one radiologist detects cancer, another radiologist has to verify it. But could this model look like the following: a human radiologist does the first check, artificial intelligence does the second check, and the second radiologist is only involved if there is a dispute between the two? A fascinating question of what it might mean for the health industry in the future as well. 
And this then leads us to the trust question, and there is still a long way to go to establish trust between humans and artificial intelligence. This is primarily what Caerys was also referring to: the black-box nature of machine learning algorithms. I think it's really important to highlight here that the black-box nature may not be by design. In some cases the algorithms are commercially sensitive, so they are black boxes by design; but in other cases they are black boxes because they are so complex in the way they calculate things that humans might not be able to decipher them. The advantage of machine learning or artificial intelligence algorithms is that they can compute vast amounts of data in a very short amount of time, working through a huge number of iterations to reach an outcome, and humans can't do that. So what does it mean to open up this black box? And not only that, but it has significant implications for legislation and lawmaking: what does it mean if an artificial intelligence algorithm causes harm? Who is responsible for the intent? What are some of the causal relations you can establish? What is the legislation and governance on it? It's an increasingly challenging domain. And this then leads to the concept of explainable AI: can we open up these black boxes to a point where some level of interpretation can be done on why the algorithm has reached a given outcome? There are a couple of examples I'll give on this. The first one is quite a favourite of mine, which is a paper that was written in 2020 but looking at a challenge that was set in 2007, which I know is a long time ago. In 2007 there was a challenge set for machine learning algorithms to detect particular objects in an image data set. And there was a machine learning algorithm which was detecting horse images. 
And what was really interesting was that when they looked at how it was able to detect those images, they realised that in the training data, most of the horse images had a watermark in them. And while they were really trying to train the machine learning algorithm to detect horse shapes, the algorithm decided that the best way to detect a horse in that particular image set was to look for that watermark. Why wouldn't it? It's an optimised way of checking something. And it was only possible to see why this was the case because the black-box algorithm could be opened up to reveal how it was calculating whether something was a horse. The first sign was when a car with the same watermark in it was detected as a horse as well. And then the other one, which is a complete black-box algorithm, is the YouTube recommendation system for next videos. There's a fascinating article on this, and I've got links to all of these in the notes and in the references section. It's a fantastic piece which argues that the YouTube algorithm radicalises people. The example the author gives is that she's a reporter who was looking at a couple of Donald Trump videos after the 2016 election, I think, and realised that increasingly she was being recommended more and more alt-right video content on that basis, which caught her eye. So she said, what's happening here? She created a new YouTube account and started watching Bernie Sanders and Clinton videos, and realised that she was now being recommended more and more far-left videos on that basis. And after more analysis, what was revealed is that the algorithm plays on human psychology, always hinting at what's behind the curtain and upping the ante to keep you engaged on the platform for longer. And it's not just relevant to politics. 
She also did some searches on jogging, and the recommendation system took her to content about ultramarathons; also on vegetarianism, which took her to veganism. Not that there's anything wrong with veganism or running ultramarathons, but the algorithm is itself upping the ante to really engage you in those topics. And is that the right thing to do? Does that create a more polarised world in the long run? And there are a lot of artificial unintelligence examples out there as well. I think Caerys mentioned recruitment systems: back in 2016, Amazon scrapped its own recruitment tool, which was showing bias against women, because all the training data it was using came from resumes from the previous 10 years, which were primarily from men, primarily for software coding roles. The other thing I would really like to highlight is security concerns, which are really, really interesting. I don't know whether any of you have seen the Tom Cruise deepfake discussion, which happened in March 2021 on TikTok, where a Tom Cruise impersonator was augmented by a deepfake version of Tom Cruise, replicating the gestures, the movements, the voice, the facial expressions, to the level that you can't tell the difference. And what does that mean for the future? Can we still watch a video of someone, or listen to someone, and treat that as a good way to determine whether it really is them or not? That again raises so many concerns. A good example is that someone committed the very first deepfake cybercrime last year, mimicking a German CEO's voice to ask the UK CEO of the same company group to transfer, I think it was about 450,000 pounds, to an account, using a deepfake of the voice. And there are just so many examples of racial bias and everything else. 
But one that I would really like to highlight, and I do want you to go and have a look at it, is the GPT-3 example: an article generated on OpenAI's GPT-3 platform, called "A robot wrote this entire article. Are you scared yet, human?" Now, I'm not scared, but I'm really interested in what implications it has, because the language that that algorithm can generate is almost human-like; in some cases it's indistinguishable. So do read that article if you get a chance. Now, all of this is coming to higher education's interest as well. We've seen lots of examples of how it's impacting on higher education. There are so many of these, so I won't go into all the details, but I just want to highlight a few: the Institute for Ethics in AI at Oxford, the Alan Turing Institute, the Leverhulme Centre for the Future of Intelligence between Cambridge, Oxford, Imperial and UC Berkeley. I do want to mention MIT, because they are just so, so advanced in this, in terms of their Quest for Intelligence. The Centre for Technomoral Futures at Edinburgh, the Institute for Ethical AI at Oxford Brookes, the Institute for Ethical AI in Education at Buckingham. There are just so many examples. It's having a really interesting impact on higher education. But apart from industry, the other main area is society, and at this time the global challenges do still remain in front of us. And despite all these problems and all these concerns with the algorithms, many think artificial intelligence is the answer. And it does, to some degree, have a role to play in solving global challenges. McKinsey in late 2018 looked at all of the artificial intelligence use cases they have and mapped them against the United Nations Sustainable Development Goals. And they realised that quite a strong mapping exists already, and with a growing number of artificial intelligence use cases, they can see more and more of these algorithms supporting societal good as well. 
And I would again recommend you to go and see that article, because there are six real-life use cases given in there. The one that sticks in my mind is about using a combination of drones, video photography and machine learning algorithms to detect poachers and alert law enforcement agencies in Africa, which was quite an interesting use case. And when global challenges are there, that has an impact on local country policies as well. If you look at UK government policy on artificial intelligence: back in late 2017, the industrial strategy highlighted a grand challenge on growing the AI and data-driven economy as a core objective. The UK government research and development roadmap of July 2020 very clearly highlighted that the long-term ambition includes artificial intelligence, quantum technologies, robotics, and combinations of these technologies. We've all heard, well, I hope we've all heard, the news about the launch of the Advanced Research and Invention Agency, ARIA, in February as well. And we've also seen the Skills for Jobs paper that has come out from the government about the future of jobs and the skills that are needed. The reason why I've added it here is because digital is now spoken of in the same breath as maths and English. And that is fundamentally going to change the future of society in the skills domain. But it's not without its issues. We've also seen that there's a growing dichotomy between what's considered strategic subjects and non-strategic subjects. And we've seen what Gavin Williamson, the Secretary of State for Education, said to the House of Commons: our proposed reform to the teaching grant for the academic year 2021-2022 will allocate funding to deliver value for money for students and the taxpayer. It would increase support for strategic subjects such as engineering and medicine, while slashing the taxpayer subsidy for such subjects as media studies. 
Now, I personally think this is extremely short-sighted. Media studies is so crucial: information literacy, media studies and critical thinking are some of the most crucial subjects. And the conversation continued on to performing arts, music, archaeology and other subjects as well. This really frustrates me, but it also makes me think this is the wrong way to approach it, because we know that technology is not the answer on its own. This is very aptly illustrated by Safiya Noble in her book Algorithms of Oppression: How Search Engines Reinforce Racism, where she observes that we have more data and technology than ever in our lives, and more social, political and economic inequality and injustice to go with it. It's absolutely crucial that we look at the whole domain of technology as a whole: the human, the social and the technological elements coming together to form the whole of the outcome. So what does this mean for us as academic libraries, as academic archives, as special collections, as museums? The one thing I would like to highlight is this fantastic report from OCLC called Responsible Operations: Data Science, Machine Learning, and AI in Libraries, released back in 2019. I feel it probably got a bit lost because of the pandemic arriving around the same time. But there are two paragraphs in that report that really resonate with me. The first one is: advances in description and discovery, shared methods and data, and machine-actionable collections simply do not make sense without engaging workforce development, data science services, and interprofessional and interdisciplinary collaboration. All of the above has no foundation without committing to responsible operations. And what's really interesting is that you can almost see this as two separate layers: the service layer, in which we are looking at description and discovery, shared methods and data, and machine-actionable collections, collections as data. 
But what we are trying to do is concentrate on the service layer without fundamentally changing the operating system that we have in our organisations. We are not really changing the way we do workforce development. We are not really changing the way we introduce data science services. And we are not yet there in terms of the interprofessional and interdisciplinary collaboration that we absolutely need in this domain. On that point, the second paragraph that I wanted to highlight was: no single country, association or organisation can meet the challenges that lie ahead. Progress will benefit from diverse collaborations forged among librarians, archivists, museum professionals, computer scientists, data scientists, HCI specialists, historians and sociologists, et cetera. And again, that is so crucial. So this then took me to think about this conference theme: are we transforming, or do we need further transformation, from the library transforming to transforming the library? And this is a really critical question, because we've been transforming for pretty much as long as I've been in this profession, and longer. The reason behind that is that we've always tried to face what's new and ask how we stay relevant in that domain. It's a constant evolution for us. And in some ways that's also created an identity crisis: what does it mean to be a library anymore? What does it mean to be a librarian anymore? And if you really want to do that interdisciplinary, interprofessional collaboration, what does that look like in the future? I think we still do have a role in transforming the library. And for me personally, that's about looking at libraries and archives as an infrastructure. We already do that; this is not a new concept. But I think what we need to do is really start honing in on where our strengths are and where our gaps are, and how we fill those gaps. So, for example, we are very good at libraries as a place. 
We look at societal implications. We look at our own institutions. We want to be a force for public good. And we want to be a service for all, regardless of where they are: open access, open knowledge, something I'm very passionate about myself. So that's fantastic. On ethical infrastructure, I think we still have a long way to go. It's something that sits close to our hearts, so convincing people is not going to be a problem, but actually making it a practice in reality, we are still far, far away from that: in terms of supporting ethics, supporting fairness, equity and diversity, there are huge, huge gaps in that area, and also in bias reduction, not bias elimination. Because, as Caerys said, we are all biased in our own ways. So I think it's about bias reduction to the best of our abilities, but also about acting as a source of truth in a turbulent, fake-news world. And on digital, I would like to say that we are exemplars of good information management, and increasingly of good digital practices as well, along with good leadership. We have a strong role to play in reducing information and digital poverty. But we can adopt technology in different ways; we can be more at the forefront of this as well. Which actually takes me to: what is it that we really need? What can our identity be in the future? Are we societal? Are we a place? Are we a service provider? Are we a digital entity? What are we? Actually, in my view, I think we need to be of the persuasion that we are an interdisciplinary infrastructure, that we are at the forefront of interdisciplinarity, as hubs that bring digital, technological, social and human ideas, skills and people together to enable new discoveries. And that's what fascinates me. And that's where I think the breadth of that definition allows us to do all of these things with a level of ownership and a level of grasp that no other entity can bring. 
And if I have to think about what that means as a definition, and I'm definitely not good with words, I was thinking: the academic libraries and archives of the future are interdisciplinary hubs of curiosity, discovery, challenge and innovation. They're both inside-out and outside-in. They're looking institutionally, regionally and globally, solving challenges through diverse agile collaborations and reshaping their identity to strengthen their overall value and position. There's a lot in there, which I can elaborate on in any of the post-presentation discussions as well. But we also have an opportunity at this time, and I'd just like to highlight this report from the Alan Turing Institute called Understanding Artificial Intelligence Ethics and Safety. The reason I want to highlight it is that a key component of their ethical platform for the responsible delivery of an AI project, one of its three components, is the FAST Track Principles, where FAST stands for fairness, accountability, sustainability and transparency. Libraries and archives have a unique opportunity here to become integral to ethical AI: to support bias mitigation, to support non-discrimination and fairness, and to safeguard the public interest in safe and reliable AI delivery. The eagle-eyed among you might have picked up on the fact that we've done something similar with research data, where we've really championed the FAIR elements of research data; there's a long way to go there, but it's happening. And we can do something very similar with ethical AI as well. So what does this mean for us? And this is my last slide, because I really do want to leave you with this note. I was walking the other day and listening to the book Think Again by Adam Grant, and something really stuck with me, something really resonated with me. 
And that was: a hallmark of wisdom is knowing when it's time to abandon some of your most treasured tools and some of the most cherished parts of your identity. And I think it's time for us to rethink what that means for us. Are we going to let go of certain parts of our identity to elaborate and expand a new identity, to develop something that allows us to be absolutely relevant for the future, regardless of what it looks like? I will leave you with that thought: is it time for us to think again as well? Thank you so much for your time. Thank you for listening. And I hope you really enjoy the conference as well. Thank you, Masud. I remember the first time you spoke at an RLUK conference, when you stood in front of the library directors and others and issued a challenge, and you've done it again, haven't you? So thank you. That was a great complement to Caerys's paper. It almost feels to me as if you've colluded, but I know you haven't. It's really worked well. So thank you both. And we should move to a conversation around questions and answers. You both talked about leadership, so I guess we should start with that. Caerys, you ended with the importance of leadership and leaders, and earlier you'd mentioned the importance of crossing boundaries. You talked, effectively, about leaving things behind, and it will take leadership to manage that. So what's the leadership type needed for the future? Is it spanning boundaries? Is that going to be increasingly important for resilience and adaptability? Certainly for me, I think a lot of the interesting, innovative work is done at the crossroads between disciplines, which is why, if leaders can't cross the barriers in their own organisations, how can they span the barriers between markets and industry groups and all those other things that we have to do? No industry is an island anymore. No team is an island, no business is an island in and of itself. 
So for me, learning to collaborate across boundaries is absolutely essential to the future of any organisation. I wholeheartedly agree with that. And I think the other thing that's crucial is to enable that in the whole of the organisation as well. So leaders have a crucial role in staying connected, in crossing those boundaries, bringing that information in and showing the possibilities; but a successful organisation is one where that culture starts embedding itself in each and every part and each and every component of the organisation. That is a leadership quality that we need. It's not new, but it needs to come with a different lens on. I think a secondary part of that, Masud, is the idea of servant leadership. So a leader not being out at the front of their team waving the sword of progress, but instead standing behind their team and opening up the organisation to them, to allow them to do their best work. And that's hard for leaders, I get it. There's a whole ego thing around leadership there which, you know, makes it difficult to not be the person at the front, but it's essential, as you say, to enable everybody. You actually have to stand behind them sometimes and allow them to do their best work. Just to build on that slightly: you're effectively talking about interdisciplinarity, crossing boundaries and so on, and moving away from historic discipline-based structures. In the last week I've heard both arts and humanities and sciences bodies talking about the need to exist within the gaps in the future, and about changing university identities, as it were. We're obviously starting to see that in universities, but is that the way of the world now? I think so. I think, as I say, in the areas between disciplines you have more freedom; you're not constrained by just being a scientist or just being an engineer. You are a human who wants to attack a problem, to help solve it, to create value. 
For me, there's great value for the organization in allowing us to step out of the boxes it has created for us and to really flourish, wherever that happens to be. But also, we know that products and services cross boundaries. There's no single service or single product that is purely X or only Y; we create valuable products and services through that process of combining things.

And the implications for libraries in this? A lot of our structures are set up around disciplines, and yet interdisciplinarity is the way forward, and it almost seems to me that we need to think about reorganizing our places of work.

Yeah, and it's amazing how many places I've been to where the common challenge, even within small libraries or large libraries, is the siloed structure of the teams and their not being able to work with each other. Part of the reason is legacy structures, but part of the reason is also that in some cases we do need transformation over evolution. And actually, I say this quite a bit: one thing the pandemic has really allowed us to understand is how an adaptive workforce and adaptive teams can work in the future, and how we can bring people towards common goals rather than KPIs. Because over time, the way we have devised our working practices, the way we've devised everything, is that we have KPIs which drive performance based on a team's outcomes rather than the organization's outcomes. What we really need to do is move away from KPI structures towards objectives-and-key-results structures, where people have to come together and work together on those things. That's the only way we can break these silos. That's the only way we can bring people into that feeling of we're all in it together, and this is our overall objective as a unit or as an organization.
There is nothing more limiting than a KPI structure for a team, because you have to achieve those KPIs, which means your leader has to focus on them first and everything else second. And for me, Masoud is quite right: the objectives and key results framework, which came out of Google, enabled them to build things like Google X, a moonshot factory of almost constant innovation. That's because they're working to really big objectives, and nobody has to reach 100% of their objectives. If you reach 70% at Google, you're doing really, really well. So that kind of stretch mentality, along with that idea of crossing boundaries, can create quite a lot of impetus for change.

You're both very positive about the future, providing we reach it in the right way, though obviously a great deal of awareness came through as well. I would mention the model I'd seen in an ENA exhibition which predicted which jobs would be replaced by what date, by tech basically, and librarians were included in that. I stormed out of that, of course. Many people could be left behind as AI progresses; how do we avoid this, ensuring the population is able to upskill and adapt in that short time frame, and what's already going on there?

I think the need to upskill is one where it's not necessarily upskilling once; it's going to be constant re-skilling through your whole life. That continuous learning and curiosity is something we need for ourselves, but it's a challenge for a lot of people. A lot of people come to work for a job: nine to five, I've done my job, I've got my paycheck, I can pay my mortgage and my bills. Great. So how that transfers into the future is something where, I think, we need to be really careful to help people, within the work setting, within their work hours, maintain a suitability for work. This can't all be on the individual to do in their own time.
I do think it's important that people understand what's involved in some of the emerging technology work. For example, to be involved in AI, one of the key skills is being able to write rulesets and to help the flow of information. We as librarians know how to do that very well. In fact, most of my ability to do the emerging tech work I do comes from my information professional background. Not many people would understand that if I told them, but as information professionals we learn to classify, categorize and sort information properly. So I can mark up a document as a model for AI very, very quickly compared to many of my peers, because I know how to do it. There are always pieces of the puzzle you are more naturally suited to. Don't try and eat the whole pie of AI; that's insanity in and of itself. You don't need to know how to code AI. You don't need to know how to build a machine learning algorithm. Maybe you're the person who designs the question set. Maybe you're the person who designs the ruleset. There are pieces of the puzzle that everybody can get to grips with.

And just picking up on that, Masoud. I guess there's a feeling, and you gave really interesting examples around this, that tech solutions are imposed on us by the big tech companies. How can we make our voices heard in these discussions?

See, I have a fundamental disagreement on that point, because what else are they going to do? They are tech companies; that's in their DNA. They are going to do more with technology, more innovative technology. That's what they're there for. And we as libraries are information management and information discovery entities in many ways. That's our expertise, and there's a lot more we can do with it. So I don't think it's a question of someone imposing something on us. It takes us to that domain of: what is it that we can do collaboratively?
What is it that we can do to support each other, in a way that the overall outcomes for the benefit of society, our economy and our culture remain the same? That doesn't mean looking at it as a competition. It means looking at it as a collaboration: looking at where our strengths are, how we develop them, and how we augment what tech is doing. And just building on Caerys's point that you don't need to code: absolutely. But what we do need to do is bring people who do have that knowledge to work with us, to start telling us what their thinking is, and we can influence their thinking with ours, so that we can reach a common, augmented ground in the long run. That's something that's missing at the moment, because they are doing things on their own and we are doing things on our own. We all have things to contribute in this domain, but we are not doing it through the kind of professional, interdisciplinary collaboration we should, for the overall benefit of society.

I absolutely agree. And I would also say that one of the things we mustn't do is approach that partnership feeling like we're the lesser part of it, just because they have a certain skill set that we don't. We have one that they don't, and this must be a meeting in the middle. It can't be technology firms just taking a small amount from us; it has to be everyone putting everything into the pot, and I often fear that people will approach that kind of alliance feeling like the smaller partner. That's not healthy for the software firms either. They need the challenge. They need us to be in that conversation to create the right kind of value for the future society that we want.

Interesting. So it's engaging with them on joint and collaborative approaches. An interesting question came in around articulating the value that libraries, archives and the GLAM sector bring to discussions around ethical AI. Can we do this?
We've done it with storage management and research data, but can we do the same with AI? I think both of you touched on that in your presentations: the ethics and our involvement in them.

I think we ought to, personally speaking. We have a role to play here which is fundamental to what kind of society we want to live in, what kind of society we want to see. And very selfishly speaking, if I'm open about this, I think this is an opportunity at this moment as well. What I would find really sad is if, 10 years down the line, we reflect back and say: we could have used this opportunity early on, we could have built our strengths in this area, we could have enabled others to see us as partners in this domain, and we lost that opportunity. Availing ourselves of that opportunity is crucial at this time, in my view.

I was really interested; two things really struck me. I thought the idea of the big tech companies collaborating and meeting monthly was just wonderful, and effectively it's how libraries operate: we're in a collaborative, competitive environment and we really do collaborate. What I think is a challenge is the Bosch example you used, Caerys, about effectively taking people almost outside the workplace and providing a controlled environment for that. How do we do that Bosch transformation within the constraints of the normal work environment, where we don't have the opportunity to free people from the daily grind?

The only answer to that has to be little by little, so that change becomes routine: it's an agenda item every week. The Bosch example works well because they're a very hierarchical, very process-driven organisation. If you ask them for their central directives, which tell you how to do every piece of your job, it's this thick, and it's printed and handed to you on day one, still. So we didn't really have a way around that.
You either obey the rules or you move outside the structure of the organisation. Hence the shift outside, but I think day to day we can all do something to shift the needle a little bit. And if we keep shifting the needle, and we keep honestly asking ourselves, is this the right thing? Is this producing the value I want? If not, then let it go; don't carry on. Over time you see that reflected each month: how much has changed for my team? What can I do next? With my team I'm focused on monthly retrospectives, where we ask ourselves: what's working? What's not working? What do we change? What do we let go of? With that simple set of questions, we're taking the time to work on our team, not just in the team in terms of service delivery to others. Is our structure right? Is our culture right? Are our ways of working working for us? Is our leadership doing what it needs to? Have we got the tools we need to carry on? That self-awareness and self-reflection, much as it brings insight to us as individuals, brings insight to the whole team and allows us to make the changes we need to see.

You're able to step back and look up; yes, indeed. A very powerful message, that. So, just one last question, I think, taking a slightly different approach: the implications of AI for art and creativity. Will we read AI books and visit AI galleries?

You already can do those things.

Indeed, you can. But will we see original works, and how much of that is there going to be in the future? Masoud, you talked about the book. Is this going to be a big trend in society? Will artists be replaced by AI?

Oh, I genuinely hope not. But what AI can do is a lot of narrow analysis. So the AI-and-art domain at the moment is going to produce a particular kind of art, and there are AI galleries out there already which have produced some fascinating original artwork.
But it simply cannot yet replace the imagination and the creativity that human minds have, and I think it will be a very long time until it can. Personally, I would not want to see a world where that imagination and creativity are no longer present. In terms of books, the same principle applies: the imagination element and the creativity element are going to remain persistent in humans. And I think that's an area where we need to do more as a society, in our schools and in our higher education as well, to really start focusing on those elements: critical thinking, creativity, curiosity as a core principle in what you do, and so on.

However, I would also like to leave you with an example of something that happened recently. There was a research group, I can't remember which university it was from, which experimented with GPT-3, the OpenAI platform. They concluded that, for wider consumption, GPT-3 actually wrote more understandable summaries of research papers than the academics themselves. That was really eye-opening, because when you are so deeply immersed in a professional domain, you become blind to what others can read in it, what others can understand of it. What this particular algorithm did was look not at what your expertise was, but at what people want to see from it, and write in that kind of prose. And that was fascinating. So let me just leave you with that as a thought.

Well, thank you both. I think the real takeaway from this is the augmented approach, the partnership with technology rather than the threat, which is really powerful. Thank you both; it's been an absolutely fascinating session, a great introductory session for the rest of the conference.