Welcome to the next session, which is on reshaping user experiences with virtual reality and AI. I thought it might be useful to kick off by saying very quickly something about the terms virtual reality, augmented reality and AI. Virtual reality has been developing since the 70s, and it's really about offering people total immersion in a different environment, often using a headset at the moment, though that will probably change in the future. Augmented reality offers people more information based on the reality they're experiencing; it's really about enhancing the physical experience people are having. And AI has been developing since the 1950s. You can largely group it into things to do with text, machine learning with supervised and unsupervised methods, and, another big area, computer vision. There's a lot of hype around AI, but today we're talking about some real applications of the technology, so it's quite practically informed, which will be nice to hear about. The first presentation is titled Non-human Narratives: Using Multimedia and AI to Investigate Collections, presented by Stephanie Moran, Alexander Hogan and Beth Hogan of Etic Lab, at the University of Plymouth. Etic Lab is a research and design consultancy based in mid Wales. Alex has been working with data for 20 years, with expertise in artificial intelligence and novel data technologies; Stephanie has over a decade's experience in arts project leadership and librarianship; and Beth has a background as a much sought-after consultant organizing large-scale arts events. If you'd like to start the presentation now, that would be great. Thank you, Peter, for your introduction, and thank you everyone for your time today. 
The three of us are going to try to use these 15 minutes to give you a taste of some of the work on non-human narratives, enabled through and by digital technologies, that we at Etic Lab have produced over the last decade and hope to carry into the future. I'd like to start by sharing some of the machine-to-human interactions that our political bot exhibit, which we built about five years ago now, was able to have: over a period of about six months, some 5,000 conversations. The political bot was made for the Victoria and Albert Museum's 2018 exhibition The Future Starts Here, and it was based on about two years' worth of research, produced as a collaboration between Etic Lab and Oxford University's Internet Institute, into how automation on the internet had been exploited for political propaganda purposes. I think the exhibition's creators probably hoped we could give them something they could present as having been used to win some awkward elections around 2016, or at least demonstrate an ability to persuade people in a way we had no evidence or experience of ever having happened. But I don't have any problem with them hoping to find that. Indeed, new technologies have only really been successful insofar as they've been able to position themselves as magical solutions, not just to individual dilemmas but to bigger crises. As for the functions of our work: we were asked to build the bot with various functions that allowed it to become an interactive exhibit. It had to broadcast information about the exhibition onto Twitter and other platforms, start conversations about the exhibition's topics, target influencers, and promote the opinions of audience members. One of its functions, which was insisted on by the curators but which was perhaps something of a diversion from the traditional activities of political bots that we'd found in the wild, was a chat feature. 
We based that on a 50-year-old program that was designed to keep the conversation rolling without providing definitive statements that would take the conversation beyond the limitations of its code. I initially thought it would be a cute feature whose conversations lasted maybe one or two tweets before being ignored, and that did indeed happen a lot. But what I hadn't anticipated, and what struck me when I was going back over the conversations, was the wide range of expectations that people brought with them when they started talking to the bot. It was really wonderful for me, knowing what was under the hood and how I expected it to work, to see it used and exposed in ways I hadn't anticipated, watching it meet or fail to meet people's criteria for what a bot is. As it was advertised as political, it unsurprisingly elicited a lot of political commentary: it had questions about who people should vote for, who it thought would win various elections, and what it thought of Donald Trump. These seemed pretty popular and, interestingly, I think could be easily anticipated and programmed for, if not by me then by some other bot designer in the world. Trust was a particularly salient point. Along with political issues, our bot was asked by the people who spoke to it whether they were too fat, whether it could help make someone fall in love with someone, and what they should have for lunch. It made me curious as to the extent to which a bot might be trusted with a direct answer to such questions, especially given that other research we performed suggested that a direct bot, upfront about its intentions and who it was, can often be more successful in contributing to activist causes than something trying to be more human. Sometimes the questions it was asked assumed an omnipotence that was thoroughly undeserved. It was asked obscure questions that would require a lot of prior knowledge of a subject, or even a kind of introspection that would be difficult for a human, or at least for me. 
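For readers curious about the mechanics: the 50-year-old program alluded to here kept conversations rolling by pattern-matching the user's words and reflecting them back as open questions rather than making definitive statements. A minimal sketch of that technique in Python might look like the following; the patterns, templates and fallbacks are illustrative inventions, not the exhibit's actual rules.

```python
import random
import re

# Pronoun reflections so "my vote" becomes "your vote", and so on.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# (pattern, response templates); {0} is filled with the reflected capture.
RULES = [
    (r"i (?:think|believe) (.*)", ["Why do you think {0}?", "What makes you believe {0}?"]),
    (r"what do you think (?:of|about) (.*?)\??$", ["What do you think about {0}?"]),
    (r"(.*)\?", ["That's hard to say. What's your own view?"]),
]

FALLBACKS = ["Tell me more.", "Interesting. Go on.", "Why do you say that?"]

def reflect(text: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance: str) -> str:
    """Return an open-ended reply that never commits to a definite answer."""
    for pattern, templates in RULES:
        match = re.match(pattern, utterance.strip(), re.IGNORECASE)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)
    return random.choice(FALLBACKS)
```

The trick, as in Weizenbaum's original, is that the bot never leaves the safety of its templates, which is exactly the "limitations of its code" described above.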
One audience member tried to hold a theological debate with the bot and ended it only when he decided that the bot was offering him nothing new. Sometimes people wouldn't let it go when the bot didn't live up to their expectations: the conversation descended into arguing, or complaints that its English was poor, or that people were bored when it stopped having opinions. Some people tried to test it; at least one issued it a Turing test, which it failed. Others asked it to demonstrate abilities that they knew it should have. Maybe I'm reading too much into these conversations, but I swear sometimes there was a kind of intellectual jousting going on, or at least a real desire to prove that the bot wasn't as sophisticated as they knew it should be. In the wild, builders and users of political bots are constantly revising their creations, not least in response to new restrictions implemented by platforms. I think it might be nice to go back to this project, particularly with fresh eyes as to what people want from political propaganda. There's also a lot of new technology out there that would be able to inject some of the more outlandish talents that were ascribed to it the first time around. Presumably it won't be too long until something like this is an exhibit in a museum of the past, and its descendants will rule over our high-tech future. I'd like now to hand over to my colleague, who has greater experience with the more specifically non-human digital narratives. So Stephanie, take it away. Thanks. So I'm going to talk about three current projects we're working on with museums and collections. The first of these is ICRI, which stands for the Interspecies Communication Research Initiative. This is a project that we're working on in partnership with the Serpentine Galleries' Creative AI Lab in London; it grew out of a pre-existing collaboration between Etic Lab and an artist collective called Wolfen Drift. 
And we turned Wolfen Drift's original question, what if AI was based on an octopus rather than a human consciousness, into the question of how to use artwork to invite an octopus to communicate with us. The Serpentine joined as project partners about a year and a half ago. They were, and still are, interested in the back end: how the AI works, demystifying or understanding what's going on. We're interested in that too, but also in learning from another animal in its environment, rather than from a human-curated dataset. So we're interested in using visual and tactile communication artworks to invite a response. This summer we'll be prototyping our proof of concept of how we can use an AI to communicate with another animal, based on Welsh birds in their environment. It might be the first step in a very big AI build project. The second project I want to mention is a three-month research fellowship I did earlier this year at the Smithsonian's National Museum of Natural History, working with research zoologists in the Invertebrate Zoology Department to try to tell the story of a collection from the perspective of the animals it represents; in this case another mollusc, freshwater mussels, so you can see the theme here. The research fellowship was undertaken from a cross-disciplinary perspective, combining an ecological psychology approach with digital storytelling and information science, creative writing and artistic methods. I was looking at a subset of the collection: the mussels from the Potomac River in Washington DC. In the end, because of the lockdowns that started as I arrived in Washington DC, it became an adaptation of a 14th-century English poem. I wrote a narrative adaptation of a poem called Pearl, about the grief of a parent over the loss of their baby girl. 
And I adapted this as an epic tale of love and loss told through the National Museum's collection of freshwater mussels from the River Potomac. So rather than an epic of grief and loss for a child, it's a lament for the decline and loss of a whole species, drawn from the evidence in the collection data and the imagined perspective of freshwater mussels. The pearl of the title is the central metaphor that runs through the poem: it stands for preciousness in the original, and for both loss and strength in the adaptation. Mussels create these pearls, prized by humans for their visual beauty, as a defence mechanism. Historically, freshwater mussels were fished for their pearls, and for the pearl button industry that produced buttons from their shells. From a mussel's perspective, then, the pearl is both a metaphor for strength and protection, and one of the many human causes of species loss. The eventual aim is to develop a cooperative role-playing game that speculates on whether we can invite members of another species to set rules we can play with. The idea is to immerse players in the phenomenal worlds of mussels and their conspecifics, to invite play in a milieu where the rules are set by the capacities, attributes and environmental conditions of another species. These freshwater mussels are a keystone species upon which the health of streams and rivers and their other inhabitants depend. The original aim of the three-month fellowship was to carry out collections-based research, develop a set of species profiles and narratives, and produce digital artwork prototypes for this game: a game environment that will simulate an underwater world affected by climate change, where the mussel player character's quest is a struggle for survival through ecosystem maintenance and engineering for habitat improvement, under the working title Aquaforming the Potomac. 
And I think I'm going to hand over to Beth now, who's going to speak about future Etic Lab projects. Thanks, Steph. So yes, I'm looking towards the future, and it's reminiscent of when the internet welcomed archives and collections into a new, multilayered world of interconnected information. AI is likewise transforming what were once complex and expensive technologies into readily accessible tools. Complex coding skills and expensive overheads are no longer necessary in order to create and share virtual collections and reach global audiences of classrooms and individuals, 24-7. By providing the capacity to annotate audiovisual materials at scale, AI helps archives to create more detailed descriptive information about their materials, more than is feasible with human annotation alone. AI is opening up new potential for access and reuse by optimizing how people are able to search for and be introduced to new things, including things they didn't know they were looking for; even once-dormant content is being rediscovered and enjoyed. In turn, this advancement in accessibility is automating the enrichment of metadata in archive content, and this data will be used to create virtual learning environments: sets of teaching and learning tools that enhance both collections and the user's learning experience, trained on every user that has ever visited the digital library. These datasets will enable quick improvements to be made to our virtual environments, based directly on our users' needs. At Etic Lab, we are utilizing VLEs to inform adaptations to our online spaces. An example is Kuba, our online privacy platform. On Kuba we host private, secure online meeting rooms, but before entering these enclosed spaces a user visits our VLE in the form of a virtual waiting room. This is a transaction that generates a real mine of information. Kuba works with clients to train algorithms with their own content as training data. 
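To make the idea of machine-assisted metadata enrichment concrete, here is a deliberately tiny sketch: annotation adding subject tags and keywords to a catalogue record from a transcript. The controlled vocabulary, the field names (`auto_tags`, `auto_keywords`) and the record shape are all hypothetical; a production archive system would use trained speech, vision and language models rather than keyword matching, but the enrichment pattern is the same.

```python
import re
from collections import Counter

# Toy controlled vocabulary mapping subjects to trigger terms (illustrative only).
VOCABULARY = {
    "mining": {"colliery", "coal", "miner", "pit", "lamp"},
    "textiles": {"wool", "loom", "weaving", "mill"},
}

STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "at", "his", "her"}

def enrich_record(record: dict, transcript: str) -> dict:
    """Return a copy of a catalogue record with machine-generated tags and keywords."""
    words = [w for w in re.findall(r"[a-z]+", transcript.lower()) if w not in STOPWORDS]
    counts = Counter(words)
    tags = sorted(
        subject for subject, terms in VOCABULARY.items()
        if sum(counts[t] for t in terms) > 0
    )
    top_keywords = [w for w, _ in counts.most_common(5)]
    return {**record, "auto_tags": tags, "auto_keywords": top_keywords}
```

The point of the sketch is the workflow: annotation runs over material at a scale no human cataloguer could, and the enriched records then become searchable data in their own right.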
These tools include how the leaflet rack is stacked and the type of virtual information it displays, which user controls are visible inside meeting rooms, and how best we can optimize conferencing settings to address a diverse audience. It's by making innovative and creative use of the data generated by our own collections' users that future spaces will be built. This great wave of datasets will itself populate collections, and our future challenge is the creation of new data exploration tools. Despite advances in the technology to collect and store data, archives and organizations are still struggling to derive value from their data stores. As a result, we're seeing rising demand for easy-to-use data exploration and data management tools that help organizations extract timely and actionable intelligence from their datasets. In essence, who has used learning materials, when, for how long, and to what effect is data that needs to be accessible by providers and users alike. We can turn to another of Etic Lab's companies to see an example of such a tool in action: Network Praxis, a platform, and do contact us later if you'd like to have a closer look. It's an example of a modern digital library of companies that utilizes tools to analyze their behavior, their performance and sustainability. The platform uses a combination of tools and metrics based on algorithms combining machine learning and graph theory, enabling the tracing and classifying of the digital behavior of businesses and other organizations. It allows clients to measure and predict, amongst other things, growth, sustainability and propensity for innovation, both at the level of individual companies and of whole sectors or geographies. This offers a unique set of signals to support research, steer policy and consultancy, and guide investment decisions. 
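As a hedged illustration of the graph-theoretic side of such analysis: one of the simplest signals you can compute over a web-link network is degree centrality, the fraction of other nodes an organisation is connected to. The function and the example domain names below are invented for illustration; they are not Network Praxis's actual metrics or data.

```python
from collections import defaultdict

def degree_centrality(edges):
    """For an undirected link network, return each node's degree divided by
    (n - 1): a basic signal of how connected an organisation is."""
    adjacency = defaultdict(set)
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    n = len(adjacency)
    return {node: len(neigh) / (n - 1) for node, neigh in adjacency.items()}

# Hypothetical link graph between three organisations' websites.
links = [
    ("acme.example", "supplier.example"),
    ("acme.example", "press.example"),
    ("supplier.example", "press.example"),
]
```

Real platforms layer many such structural and semantic signals together; this shows only the flavour of the graph-theory ingredient.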
The analysis is based on a range of unique metrics including digital maturity, innovation scoring and semantic brand analysis. All of these data points are available from a massive and continuously curated data store. Again, we've taken what would once have been a passive repository and made it both more detailed and more valuable, by actively maintaining a history of change and usage. Now, one of the first projects we undertook, over a decade ago, was to develop predictive analytics based upon a comprehensive set of measures of student behavior in a university. We compiled a data lake of library access information, VLE usage, tutor appointments and more, and what we discovered then is perhaps even more important now: students who use the richest and most evenly distributed mix of learning experiences perform the best. Combining staff contact with virtual learning environment usage and a range of library accesses was predictive of success. Just as important was the realization that sharing these data with users could be a powerful tool for enhancing their performance. More and more of the usage data produced by systems is now available to help drive the development of new services and the quality of users' experience. We're witnessing a rapid and deep rupture. The aim of collections-as-data development is to encourage computational use of digitized and born-digital collections. By making collections available as data, institutions work to expand the set of opportunities for engaging, knowledge sharing and collaborating. And through events like DCDC22, exchanges are encouraged, developments are discovered and multidisciplinary collaborations are formed, which in turn contribute to the advances in AI. Since our foundation, we have designed and implemented a variety of different technical interventions involving digital strategy, automated communications, data analytics and machine learning, in a range of fields. 
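The "evenly distributed mix" finding lends itself to a simple quantitative feature: the normalised Shannon entropy of a student's usage across channels, which is 1.0 when use is spread perfectly evenly and approaches 0.0 when it concentrates in a single channel. This is a sketch of one plausible way to operationalise the idea; the channel names and figures are invented, and the project's actual feature engineering isn't described in the talk.

```python
import math

def evenness(usage: dict) -> float:
    """Normalised Shannon entropy of a usage mix across channels.
    1.0 = perfectly even use of all channels; 0.0 = a single channel (or none)."""
    total = sum(usage.values())
    if total == 0:
        return 0.0
    probs = [v / total for v in usage.values() if v > 0]
    if len(probs) <= 1:
        return 0.0
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(usage))

# Hypothetical students: one with a balanced mix, one who only uses the VLE.
even_student = {"library": 40, "vle": 38, "tutor": 42}
skewed_student = {"library": 2, "vle": 115, "tutor": 0}
```

A feature like this could then sit alongside raw counts as an input to a predictive model, and, just as usefully, be fed back to the student as a readable indicator.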
Our expertise runs the gamut from data science and programming to organizational psychology, via cybersecurity and graphic design, to mention a few. But we set no predefined limits on the kinds of projects we engage with. So if you've got an idea that everyone around you keeps telling you is impossible, please do feel free to get in touch with a member of the Etic Lab team. We're redrawing the boundaries of the feasible, and it's all in a day's work. Thank you ever so much for your time, and we look forward to any questions. So we'll have questions at the end. Thanks very much, Beth and the team. Okay, so I think we'll move on to introducing the next presentation. I just realized that I haven't described myself: I'm a middle-aged white male, I have short-cropped hair and quite a grey beard, I should say, and I'm wearing a light checked shirt and dark trousers. So the next presentation is called On the Face of It: Creating a Virtual Reality (VR) Educational Outreach Capability at the Rhondda Heritage Park Museum, and the presenter will be Darren Macy of Rhondda Cynon Taf Heritage Services. Darren is an operational manager of the Heritage and Outreach Services at Wales's third-largest local authority, Rhondda Cynon Taf. He combines this role with a teaching portfolio at the University of South Wales. His areas of research include heritage, collective memory, cultural understanding, Wales and Atlantic activism, and the power and the glory: the revolution and evolution of energy policies in Wales. Thanks Peter, a really good Welsh pronunciation as well, I'm very impressed. I have ancestors. I can tell, I can tell. It's been a Welsh theme in this group actually. Just to describe myself as well: I'm similarly a white middle-aged man, I've got a white shirt on, and my beard is ginger or auburn depending on which mood I'm in, but I sport ginger for those of you there. So I'm going to talk about one specific project today. 
The case is a VR experience that we produced called Pwll Bach Cwm Rhondda. And like Peter, I'm not a fluent Welsh speaker, but pwll is mine, fach is a mutation of bach, which is small, cwm is valley, and Rhondda obviously is where I come from. So it's the small mine in the Rhondda Valley. I've got some friends up at the Big Pit museum who took exception to my cheekily pinching their title, but I think they're over it now. So we based this at the Rhondda Heritage Park in South Wales. We were supported in this endeavour by the Welsh Government; it was funded completely by the Welsh Government's Winter Wellbeing Fund, which ran right through this winter. It began in the middle of December, and we had to deliver, as I'll discuss in a bit more depth later on, by the 31st of March: quite a tight time frame. The whole experience is an outreach experience from the Welsh Mining Experience at the Rhondda Heritage Park Museum, which is in the Rhondda at Trehafod. The Winter Wellbeing Fund, funded by the Welsh Government, was an initiative to support the social, emotional and physical wellbeing of children and young people. I think it's a bit of a spillover from the pandemic: trying to get children and schools re-engaged with the wider community and trying to encourage more interaction. So that's the basis behind it. We were lucky enough to secure a grant of just approaching £50,000 for this, which we really, really appreciated, and it was still quite tight to deliver everything we delivered even on that quite considerable amount of money. We kind of evolved into this project. We began by being involved in a different project called Last Voices in the Valleys, or Last Voices in Mining, which was conducted by a company called Vision Fountain. If anybody wants to know anything about the tech behind this, please drop me a line; Vision Fountain are the tech people involved in this, and I'm more the story side. 
So I'm kind of the ideas man, and my friend Richard from Vision Fountain is more the delivery end of it. Richard began conducting an oral history project in the Alvern Valley and also in the Rhondda Valley with miners. He was 3D mapping, creating 3D portraits of miners to go alongside oral history testimony; there's an example there. From that, he took it to schools, and the schools were creating their own images of the miners while listening to the oral history. He picked different pieces of oral history out for me, because of the specific things we were looking for in the oral history testimony as well. So we picked that out and played it to six or seven different schools: some in Cardiff Bay, so there's a diverse element to this as well, some in the Rhondda Valley, and some in the Alvern Valley as well, in Glen Cardiff. There's an example of some of the children involved; that's our display gallery at the Heritage Park. That's Trehafod Primary School, which is about six or seven metres from the mine where we're based. So that is the exhibition that was created from these 3D portraits and the oral history. That's the wonderful Mayoress, and that's my plug, because this is being recorded and I'll show it back to my colleagues in RCT, so that earns me brownie points; that's the Mayoress next to me. Those are my two education outreach officers, Catherine and Esther, who do a brilliant job. This was the initial project: we started out with the idea of 3D portraits linking with an oral history project, and this was all funded previously by a National Lottery grant. Off the back of that, we found out about this Winter Wellbeing Fund. Again, as I mentioned, a super tight timeframe: I was actually informed on the 15th of December, had to submit the bid by the 7th of January, and we had to deliver by the 31st of March, if you can imagine buying hardware and creating a completely new idea in that time. 
We had never come across anything resembling what we were going to do in creating this virtual mine; I couldn't find anything similar at all anywhere across the world. So delivering that in three months, not so much by me, although my department did a considerable amount of research, but by the tech team who were delivering it, was an absolutely amazing achievement. Why was it really important for us to do this? That image is actually the mine we're at. Our USP really is a guided tour by former miners, and the idea that every single tour is completely different. There are six or seven different guides who have different experiences of being miners, and every group they go out with creates different questions, a different idea of how that group dynamic works, whether it's children or adults or older people. Everything's different. The problem we're faced with is that our oldest guide is just approaching 80 and our youngest guide is 64, and we're 30-plus years away from the end of mining in the South Wales valleys. So, with great respect and trying to be as delicate as possible, our guides are retired miners, and they won't be with us forever. So we need to come up with a strategy for how to sustain ourselves as an organization, how we sustain ourselves as a museum: how we create a similar experience, because you can never replicate it. This Pwll Bach Cwm Rhondda is really the first step in that direction. And I see this expanding, and I'd love to talk about that in the Q&A, but I'm quite conscious I've only got 15 minutes to get through quite a lot. So this is step one, really, creating this VR experience. I'd say augmented reality is where we want to end up: asking our guides 10,000 questions each and programming in their insight, so that whatever you ask, you get a different response. We can't replicate it, but we're trying to replicate it as much as possible. 
The other thing, for those of you who are not from Wales, that's a massive boost to the heritage sector in Wales, is the new curriculum. The new curriculum is based on the idea of engaging with local issues: every school creates its own curriculum, engages with local issues, and tries to combine the local with the global. So this offers the museum sector, the heritage sector in Wales, a wonderful opportunity. It offers us a big challenge as well, in how we engage with schools, but again I see this as a step forward in that direction. So this project is primarily aimed at schools, although there are outcomes in terms of dementia and care homes that we thought of as well; but it really is aimed at schools. We wanted interactive elements; we wanted to contain as many learning elements as possible while still keeping things fun, which again was a bit of a challenge. And we've targeted ages 10 to 13. It took us a while to come up with that target age group: we wanted enough maturity that they understood how to use the technology, and we wanted to engage them before the pressures of GCSEs and exams come on, so that they can make perhaps a bit more of a free run at it. We also needed it to be portable, which thankfully is where we are in terms of the tech at the moment, so everything we've used is untethered. I don't know if I'm allowed to use brand names, but we've used Oculus Quest 2 256-gigabyte headsets, which give us enough capacity, and again I'm not a tech person, but enough capacity to load the experience completely inside the headset and also be able to add other things at a later date, hopefully. We wanted to make things meaningful, so we wanted to tie things back to the curriculum and make it a really useful, interesting learning experience. 
We wanted it to be deliverable. As I mentioned, we had to write the bid in two or three weeks over Christmas, and we had to deliver in three months, which, as any of you in the sector will understand, is a virtual impossibility. Excuse me, that wasn't meant to be a pun, I do apologize. It was really, really labor-intensive in terms of research, in terms of how we structured things, and in terms of the tech support; what Richard and Vision Fountain did was an amazing job. So, some significant challenges. Just to give you an idea of the schematics of the project: we went for a drift mine. Sorry to get a bit technical from my perspective, but the museum is actually a deep mine, so you actually go down in a cage. We thought about recreating the cage, but there were too many problems; it wasn't deliverable. The alternative was to have a drift mine: that's a mine where the coal is quite close to the surface on the side of a mountain, so the miners would just dig into the mountainside. You're not going down, you're going into the mountainside, which made things a lot more deliverable. But drift mines were not used in this area, so in some ways we have to explain that in the experience as well. There is a map of the entire experience: the participants go to the lamp room, and I'll explain that in a moment, then there's a circular track around the mine with different experiences and different points of engagement. I think there are nine points of engagement right throughout the mine, and then you finish back at the other side. What Richard delivered was fantastic: you start with a view of the valley, and it's actually set there. 
As you look over the wall in the VR experience you can see the mine where the museum is, so you can see the valley, and it really is a fantastic piece of innovation. Then there's the lamp room, important again from my own area of expertise. A miner would check a lamp out: he'd have an individual lamp check with his own particular number on it, and he'd exchange it for a lamp, so every day they'd know exactly which miners were down, and they'd check the lamps back in and out so they'd know if somebody had been left in the mine. That's a lamp check from the museum; you can see the numbers scratched on it, and 1984 scratched into it, which is when the mine closed, so I think that's an act of defiance there. But again that's something we use in the experience, something we engage with, something we would be trying to address as well. So the user enters the lamp room, and they pick up various pieces of kit and examine them. That's what we went for: there's a canary in a box, there are different types of lamps, there's a miner's flask, and all of those are completely interactive, so they use their handsets to pick them up. I think that's a brilliant innovation as well, because we don't allow people to do that with our museum artefacts, so actually being able to identify things, look at things and engage with them in a physical, well, not really physical, but in a sort of physical sense, is amazing. Again, this is something Richard created and mapped out over, as you can imagine, hours and hours and hours of time. Then there's actually the coalface. What Vision Fountain did for us: we've got an underground experience at the museum, where the miners take people underground. You're not actually going underground, it's a simulation, but it's pretty realistic. So Richard went and mapped sections of our museum and created the VR from there. 
I was really specific that I didn't want to replicate what we've got at the museum; I'm trying to protect our product, so I didn't want people to think that by using the VR headset and the VR mine they'd be in any way replicating what they'd do if they came for a tour, because obviously we look at income generation as well. So Richard mapped things out from our simulation and placed them in different sections, and I think it's nine different sections of film. There's also a part where you pick up a mandrel, you tap the side of the mine, coal falls away and you can repeat it, so you get the simulation of actually digging coal. Again, you walk through the tunnel, so that's a great experience, with different films in different sections. In the first bit Vision Fountain actually put two virtual horses and a virtual canary, and the children absolutely love the idea; the interaction with the horses and the canary is brilliant. There's also a little film there talking about pit ponies; section two, I think, talks about injuries in mining; section three talks about the mine experience; and so on and so forth. So they actually used the miners' oral history accounts and supplemented them with some further oral history accounts to create this pretty realistic environment. There's another schematic, and again I'm not a tech person; I'm sure Richard would be more than happy to run through it. Sound, again, is really layered: there's ambient sound within the mine, and there are also bought sounds, so for the horses and the canary we bought sounds in, we didn't just use our newly created oral history recordings. We employed an actor to give instructions as you're going through the mine. Yeah, sorry, sorry, we'll have to close soon; it's really fascinating, but if you can, do your best. Sorry, sorry. So we employed an actor to talk about the artefacts, and we did lots of testing. 
Kids absolutely loved it. Sorry, I could go through the testing for the next 20 minutes as well. And we duplicated it too: obviously we're in Wales, so we've got a Welsh version and an English version. Please come and see us; if you want any more information, please email me. I'm sorry I ran over, but I'm super passionate about this and I think it's a wonderful opportunity. Thanks for listening. No, that's fine. Do you want to just say something about the testing, just a quick summary? Yeah, I won't go through all the figures, but what we found is how quickly the children took to the product. When we went through the numbers of did they pick up every artefact: yes, over 90% picked up every artefact. The seated version was much better: a swivel chair was even better than a static chair, which was much better than standing and trying to go through the experience. So we learnt loads and loads. And again I apologise for going over; I can talk. It's very enjoyable. There'll be plenty of time to cover things; I think in the questions we can come back to the testing again, it'll be interesting. Thank you very much, Darren, yeah, great. Okay, so we move on to the final presentation, which is pre-recorded. It's called Bridging Worlds: novel panoramic capture and navigation of collections and existing exhibition spaces for knowledge exchange. The presentation will be by Nina Perlman, Andy Hudson Smith, Jason McEwen, Leah Lovett and Valerio Signorelli, all from UCL. Nina is head of UCL Art Collections, and she has strategic oversight of the programmes and partnerships that unlock the collections' contemporary relevance for the benefit of researchers, students and the wider audience. Andy is professor of digital urban systems and director of UCL's Centre for Advanced Spatial Analysis (CASA), an interdisciplinary research institute focusing on the science of cities within the Bartlett Faculty of the Built Environment.
Andy works with fellow CASA researchers Leah Lovett and Valerio Signorelli, and with Jason McEwen. Jason is the founder and CEO of Kagenova and professor of astrostatistics and astroinformatics at the Mullard Space Science Laboratory (MSSL) at UCL, and he's also a Turing Fellow at the Alan Turing Institute, the UK's national centre for data science and artificial intelligence. Hello, I'm Nina Perlman, head of UCL Art Collections. I'm delighted to present Bridging Worlds: novel panoramic capture and navigation of collections and exhibition spaces for knowledge exchange. I'm joined by my colleagues Andy Hudson Smith, professor of digital urban systems and director of the Centre for Advanced Spatial Analysis at UCL, in short CASA, and Jason McEwen, UCL professor of astrostatistics and founder and CEO of Kagenova. For the Q&A we'll be joined by our fellow researchers Leah Lovett and Valerio Signorelli from CASA. Technology moves fast and forward. For many of us, the pandemic has caused the boundaries between our home life, our work life and our leisure to collapse. And whatever our individual level of exposure to technology has been, we have all come to experience some form of a metaverse, where our physical and digital worlds are blending. UCL is home to a number of different museums and exhibition spaces: art, Egyptian archaeology, zoology and more. Like the neoclassical campus that houses them, which has evolved over time, UCL's museums and exhibition spaces too have evolved to fulfil a wide range of functions in unusual spaces. We excel at exhibition-led inquiry that is research- and object-based. This is what allows us to collaboratively create challenging and inspiring learning experiences, foster interdisciplinary thinking, invite participation, encourage initiative and much more. So the physical sites, combined with facilitated, collaborative and self-directed learning, are very important to us, to our partners and to our audiences.
But like many small museums, we felt the pressure to create digital experiences for our students and the communities we serve. And in the process, we discovered we had a remote audience. We understand that our audiences' appetites for virtual experiences will grow and that competition for their attention is high. We recognised we needed a metaverse that can bridge worlds, and that we ourselves needed to gain knowledge and upskill in this area. So under the umbrella of a knowledge exchange project funded by the Higher Education Innovation Fund, we brought together UCL Museums, Kagenova and our colleagues in CASA to do this. I will now hand over to Andy. Nina, thank you very much. So there are these emerging metaverse worlds. They've been about for 10 to 15 years, but recently, of course, Facebook changed their name, and the focus of the next five or ten years is to build these virtual worlds. And museums and art collections will probably feel under pressure, but also want to be part of it. So from the Centre for Advanced Spatial Analysis's point of view, we've always been keen to share techniques, software and ways to be part of these worlds in as low-cost a way as we possibly can, because we're aware that museums may not have a dedicated resource to build these worlds. And I personally think the worlds should be photorealistic. There's a tendency, as you can see from this slide here, for them to have a slightly cartoonish look, but if you're going to do a representation of a collection, I think the photographic twin is the way forward. So there are multiple ways to build these twins. One is the classic photogrammetry route: you go in, you take hundreds of photographs, maybe you have a lidar rig, where you bounce light around the room, and you produce a complete point cloud. That's a fantastic capture of the space and the collection as it is now. But it's quite hard to work with: the data sets can be very large, and the capture can be true to life but can take time.
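To give a feel for the "large data sets" problem with photogrammetry and lidar point clouds, a standard coping strategy is voxel-grid downsampling: divide space into small cubes and keep one averaged point per occupied cube. This is a generic minimal sketch in NumPy, not the pipeline CASA or any of the speakers used; the function name and parameters are my own illustration.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Downsample a point cloud by averaging the points in each voxel.

    points: (N, 3) array of XYZ coordinates.
    voxel_size: edge length of the cubic voxel grid.
    Returns an (M, 3) array with one averaged point per occupied voxel.
    """
    # Assign each point an integer voxel index
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that share a voxel and average each group
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)  # unbuffered scatter-add per voxel
    return sums / counts[:, None]

# 100,000 scan points in a unit cube collapse to at most 10x10x10 voxels
rng = np.random.default_rng(0)
cloud = rng.random((100_000, 3))
small = voxel_downsample(cloud, voxel_size=0.1)
print(cloud.shape, small.shape)
```

The trade-off is exactly the one described above: the downsampled twin is far easier to store and render, but fine surface detail inside each voxel is averaged away.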
So I think there's a need to do an accurate representation of the space, but to get it into the metaverse, with a sense of depth, in a slightly different way. Traditionally, as lots of museums and art collections will have done over the last 10 years, there's panoramic capture. That used to mean taking a camera rig, capturing about 120 photographs of each scene and merging them to make a virtual tour, which is okay. But it's kind of been done, and I'm not sure how many people use virtual tours on the web now; the resolution can be a little bit low and it still has notable costs. But what if you could capture a panoramic tour that had depth? What if you could capture it in a single photograph at each point, where the resolution was high enough, and it somehow took the panoramic view and recreated the 3D space behind it? That would give you a digital twin that you could put in the metaverse to show your collection and share. But that needs someone who knows a little bit about physics: Jason. Today's manifestations of the metaverse are far from photorealistic. Photorealism, however, is essential to unlock the potential of the metaverse. At Kagenova, we're developing geometric AI to power the metaverse of the future. There is a remarkable connection between the Big Bang and the metaverse. We can look out over the night sky, over the celestial sphere, and observe the relic glow of the Big Bang called the cosmic microwave background. Since we observe this background light over the celestial sphere, we recover a 360-degree spherical image. In panoramic photography, we also acquire 360-degree spherical photos and videos. Kagenova was founded to leverage the expertise we have developed studying the Big Bang to address the current limitations of 360-degree virtual reality. The key to unlocking the potential of 360-degree photographic content is to enhance it through AI. However, standard AI techniques simply do not work with spherical 360 data.
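The geometry behind "recreating the 3D space behind a panorama" can be sketched briefly. A 360-degree photo is usually stored as an equirectangular image, where each pixel corresponds to a direction on the sphere; given a per-pixel depth estimate, each pixel lifts to a 3D point around the camera. This is only the basic spherical geometry, not Kagenova's actual method, and the function names are my own.

```python
import numpy as np

def equirect_to_rays(width, height):
    """Map each pixel of an equirectangular panorama to a unit-length
    3D ray direction on the sphere (x right, y up, z forward)."""
    u = (np.arange(width) + 0.5) / width    # 0..1 across the image
    v = (np.arange(height) + 0.5) / height  # 0..1 down the image
    lon = (u - 0.5) * 2.0 * np.pi           # longitude in [-pi, pi)
    lat = (0.5 - v) * np.pi                 # latitude in (pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)     # shape (H, W, 3)

def lift_to_3d(rays, depth):
    """Given per-pixel depths (H, W), recover 3D points around the camera."""
    return rays * depth[..., None]

rays = equirect_to_rays(64, 32)
print(np.allclose(np.linalg.norm(rays, axis=-1), 1.0))
```

Note how rows near the top and bottom of the image all map to a small region around the poles: that uneven sampling is one concrete reason planar ("standard") convolutional networks behave badly on spherical 360 data.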
At Kagenova, we have developed geometric AI techniques for 360 data. These techniques also have close connections to physics, leveraging the mathematical machinery of astrophysics and quantum mechanics. Today's immersive experiences provide either realism or interactivity, but not both at the same time. 360-degree VR provides photorealism, since it is, after all, based on photography. However, only the original camera viewpoint is accessible, so such content is not interactive and you can't move about. CGI content, on the other hand, can be interactive, but it is far from photorealistic, and it's very expensive and time-consuming to create. At Kagenova, we are enhancing 360 content through our geometric AI techniques to provide photorealism and interactivity at the same time, and at scale. Our Copernic 360 technology allows you to walk inside 360 photography and move about. Our Copernic Worlds technology, as demonstrated in the video here, builds on Copernic 360 to democratise the construction of large-scale virtual worlds that you can seamlessly move through and explore. Partnerships like this project are invaluable for understanding how our technologies can best be deployed to meet the needs of users, such as in the cultural sector and beyond. Together with our colleagues in CASA and Kagenova, we embarked on a journey. We conducted research on the use of immersive technologies in art and science and on innovative museum heritage education projects using VR and AR. We made many captures, explored technical and site challenges, and attempted to evaluate the student user experience of a pilot we created. We learned that knowledge exchange fuels an appetite for experimentation and innovation. We learned that knowledge exchange can lead you to unexpected places and new discoveries. From the perspective of a small museum, we gained a greater understanding of our VR needs, the conditions required to meet them, and who needs to be part of the conversation.
Our technology partner gained access to knowledge, sites and people that can challenge technology and open new frontiers. And our colleagues in CASA, who are experts in all things virtual, gained access to trailblazers in technological innovation and culture. The collaboration required the efforts of many, and I'd like to thank the UCL Culture Visitor Services team, the Kagenova team, the UCL Enterprise and Innovation team, marketing consultant Fran Taylor and Angela Diakopoulou of Sphere Insights Market Research. We thank DCDC 2022 for the invitation to present, and we thank you, our audience, for your attention. Thank you very much to the UCL team. That was a very interesting presentation. Okay, so if the speakers would now like to put on their video and audio, we can start the Q&A session. Some questions have come in to the chat; please carry on asking questions as we go. I've prepared a few as well, so hopefully we'll have an interesting conversation. Okay. So the first question in the Q&A box is for Stephanie. It was really interesting to hear about your projects. I'm currently doing a PhD investigating deathwatch beetles on HMS Victory, part of which involves developing prototypes. I was wondering what your prototypes look like and whether you assess their effectiveness before developing them further. If so, how do you assess the effectiveness? Hey, yeah, your project sounds really interesting too; I'd like to hear more about that. So there's a sort of short answer and a longer answer to that. The short answer is that we haven't done any prototyping on these projects yet: the prototyping comes towards the end of the process, at the end of the research, and these are current projects. In terms of the ISCRE project, we'll be prototyping at the end of the summer. And I suppose maybe Alex might want to speak more about this, but we'll know if it works if we manage to show that we've invited the birds to respond. How we interpret that is yet to be seen, or heard.
In terms of the project at the Smithsonian, because of the lockdown, I didn't get to do all the testing, the prototyping, that I wanted to do there. I didn't have access to the collections, or to the staff, until quite late during my stay. I was intending to build, to model, a digital ontology, which is something that we've done in the past, and then test it. My initial test was going to be with members of staff at the museum in the format of a role-playing game. So instead I wrote a narrative and tested that with the people I was working with directly, but I didn't have access to the wider staff. But we have built ontologies before that we've tested, in terms of modelling ecosystems digitally and asking them questions to check if they work. So in terms of testing things in the past, there's quite a long and complicated answer that maybe I don't want to go too far into detail with, but there's a follow-on question I can answer. That's, I mean, a full answer to the question, I think, unless any of the other members of the team want to say anything further. Okay, so I'll move on to the next question then. The question is to Darren: how do you go about costing out the project within such a short time frame for a bid submission? That's a really good question. Trust would be the one word I'd use. We worked with Richard; I had complete trust in him, so when he told me he could deliver for a set price, I went with that and didn't drill down into it too much. I think if I was working with another supplier, possibly I would, and I'd want to flesh things out a bit more. Hardware-wise, it's the Oculus headsets out there; I think we paid around £400 each, so it's a significant outlay. But yeah, it was just trusting Richard, really, and he really did deliver. We actually went back to Welsh Government and asked them for more money. The extra money was because we were originally going to put in subtitles, Welsh subtitles, but it just looked awful.
It just completely spoiled the illusion. By having a standalone Welsh version, which meant we had to get everything translated and use a different actor to do the Welsh voiceover, I just think it's a much better product. Also, when you're engaging with Welsh-language schools, I don't think it's fair to give them a different experience from the English-language schools. So we went back and asked for, I think it was an extra £4,000, and Welsh Government was really good, really, really supportive. So yeah, it was just trusting Richard and trying to flesh things out as much as we could. And I think Welsh Government had significant trust in us, because this is such an exciting field and there's very little being done in it, so I think they were excited to see what we came up with. Can I just supplement that quickly, because I had a question about costs and so forth. What about sustainability? You talked about the organisation evolving, obviously, as things change; in terms of sustaining this into the future, have you got any plans? Sustainability of our product: yeah, I think this is the first stage, and then augmented reality is really where we'd like to go. There are some issues around that: the guides are very protective of their experience, and they see any sort of recording of them as a threat, a threat to their jobs, even though I've tried to explain that they're irreplaceable; eventually we're going to have to put in something different, but it won't replace them. So we're looking at a way to bid to HLF where you'd go past a pressure point and a hologram of the guide would come up, you'd ask it a question, and using the technology some of the other partners have described, it would come back with an answer; and again, I'm not a techie person, but it might be programmed with a thousand different answers.
It's a really time-consuming idea, but it's the only way forward, otherwise we're not sustainable. Just to expand on that a little as well, we're looking at other avenues of sustainability. As a coal mine, as a mining museum, the idea of mining is toxic, so we're looking to reinvent ourselves as a museum of energy: sustainable energy production, and how sustainable energy production can move things forward as well. And the VR can really help us with that, so it's limitless, absolutely limitless. Thank you. Okay, so let's move on to the next question, which I think is for everyone, though I'm not quite sure: there was a question about how you deal with copyright issues surrounding the use and sharing of heritage material. Yeah, so I could... I mean, I was thinking about that in terms of Beth's presentation, so maybe can I start there? Sure, yeah, thanks Shanique. That's a great question. Thankfully there are exceptions to copyright for heritage organisations, and we can just look at a couple of them. Specific to what we were talking about today is the text and data mining exception, a specific one for data mining. It allows electronic analysis of a large amount of copyrighted works to identify patterns and other interesting information that specifically wouldn't be possible through human reading, so it's quite relevant to what we were saying today, and an actual exception in law. Another one that's quite interesting, again for heritage organisations and people that are in buildings, for example historical monuments and botanical gardens, is the dedicated terminal exception: a copy of a work can be made available to individual members of the public by dedicated terminal on the premises of any GLAM, that's Gallery, Library, Archive and Museum. Also, interestingly, there's the parody, caricature and pastiche exception, which we've seen quite a lot of examples of.
For example, archives of music being offered up to a celebrity DJ in order to promote the archive: they're going to rework this music, and in that context of being reworked, that's another exception, and you can give public access to your collections that way. So really there are quite a lot more of these exceptions if you want to look them up; there's always a way around these things in terms of heritage material, it's just making sure that we do it properly and with respect for the ownership of the original materials. Thank you. Andy, could I ask you to say something about the copyright angle, because I'm sure that arises in your work. I can do. The interesting thing is that, as a multidisciplinary lab doing all sorts of things, you have to delegate certain bits of this. So if it's okay, I'm going to bounce the question to another delegate, Leah, who has been helping us lead on the copyright theme. Everybody seems to be bouncing at the moment. And indeed Leah or Valerio, on how we do copyright from a panoramic capture point of view. Thank you. Leah, Valerio, do you want to come in on that? Can you hear me okay? Well, I cannot say much specifically on this project, but in general the use of copyright material in virtual reality is definitely a big issue, because we are not just dealing with, let's say, the experiences and the assets within them; we are using devices that are connected to other big industries. For example, we mentioned the Oculus Quest and other VR headsets, but the use of these devices actually shares data with external partners, which in certain cases could be fine and in others could create some issues in terms of privacy, for example.
In terms of copyright in our case, our prototype was the Flaxman Gallery inside the UCL building, so we purposely didn't take any photos with people inside it, which could create problems in terms of privacy but also in terms of copyright. Beyond that, I cannot say much on our project specifically. That's a really interesting insight, actually: thinking about when you're sharing data, you're sharing material, and the issues that arise from that, beyond the problem of copyright. Yeah. There is an exception for that, just to mention: the incidental inclusion exception, a bit of a tongue twister, but it does help in situations where you are creating panoramas. There's also the freedom of panorama exception as well, just to make it even more complicated, but it's starting to cover those incidental captures. The more museums are seeing value in social media examples of incidental inclusion, the more those laws are starting to change, and those are two more exceptions you might be able to use. Interesting. So in terms of what I'm hearing, there's quite a lot of complexity around this, and I'm thinking about smaller museums, galleries, libraries and archives that want to access these kinds of technologies. What, and this is a general question, do you think are some of the barriers to uptake? And I particularly think, when I look at UCL, that's real cutting-edge stuff, so how do those activities get taken up in the wider sector, in your view and your experience? So we've worked on this with various groups over the last 20 years or so, and they're fantastic groups to work with, museums and amazing places, but they often don't have a dedicated digital team.
And as we're moving towards virtuality and the metaverse, some of the big players out there have big funds and they're doing some fantastic work. It's about how we trickle the technology down so it's actually easy to use. More importantly, it's low cost: I think it has to be rapid capture and low cost. And this is kind of what the grant was about: merging museums and culture with astrophysics and with our lab, where we're a multidisciplinary lab, and seeing how we can do things that can go online and be used, where you don't have to have a PhD or a master's degree to know how to use it and how to make it yourself. Thank you. Does anybody else want to say anything about that? I was also thinking a lot about users, and I've still got the testing question for you, Darren, as well. You know, involving people in these things: I was thinking, when I was hearing about communicating with the octopuses, that obviously we also want to communicate here with people, and I'm quite interested in those boundaries between people and technology, and how we can transgress some of those boundaries as well. Does anybody have anything on engaging with users when you're developing these kinds of technologies or approaches? It's something we're looking at, something we want to stretch, in terms of interpretation: it's about finding the right pitch, because a traditional interpretation board can seem patronising to the people being addressed, or seem a bit too convoluted.
So it's an interactive interpretation where you can switch languages, you can switch to age-appropriate content, you can switch content from Spanish to English; we had a Spanish university in this week and they were trying to read things in English, and obviously there's Welsh. One issue we've got is that because we've got the added language, all of our interpretation is duplicated, which is great and fantastic, and I'm a big believer in the Welsh language, but wouldn't it be great if there was only one set of text and you could just switch it, from Welsh to Spanish to German, and switch the age-appropriate content too? That's something we're looking at. Just touching on the copyright thing: as long as it's free to use, as long as you're not trying to generate income from it, we felt everything was fine. But if we do want to use this mine for income generation, that completely changes the parameters. So as long as it's free to use, we found everybody's pretty much on board. Thank you. Anything further on engaging people with these technologies, or the testing? I mean, you had some interesting slides, Darren, which I cut off, on the user engagement piece and testing. Someone said it was such a fascinating presentation and what an achievement; could you share a little bit more about the testing and your results? In particular, was there anything that surprised you? Any struggles and teething problems? The ease of use: the younger the people we tested with, the easier they found it to use, obviously because of the pandemic; you know, there are all sorts of articles out there about a 350% uptake in VR gaming during the pandemic. Pupils and children just took to it amazingly quickly.
They came away from there talking about lamps and lamp checks and canaries in boxes, things there's no way they'd have been engaging with if they hadn't done that. As I mentioned, we found that sitting on a swivel chair is much better than standing, because there are safety implications as well as it just being a much better experience. We tested lots of different age groups too: the slides cover everyone from university students down to primary, and the differences in engagement, and in what they thought was good about the product, were really, really useful. We're still going on with that, and hopefully we'll keep tweaking it, because this is brand new; it's an ongoing process. Thank you. Okay, so we're going to draw to a close now, but I've still got questions, I think, from Andy and from Leah. So a minute each, if that's okay, and then I'll wrap up. Yeah, it's okay. Just a quick question back, because the Welsh mining presentation was so nice and they used testing: I'm just interested in user testing by age group. We use the Oculus Quest, but that has a warning that it's not supposed to be used by people under 13, and I'm just interested in how people handle that. Do they do testing for a short period of time, and is that therefore seen as okay? Because from a museum point of view, there is an age limit there. If I can just respond straight to that, Andy: I'm not sure, that's not my forte, but I'm sure Richard would have checked that out, and it could be time-specific. We've definitely done stuff with primary age, so yeah, I'll go back and ask that question. If I could just pick up on one thing in the Q&A: somebody asked, are we wheelchair accessible? Completely wheelchair accessible. Everybody's invited; I'll give you a free tour. Fantastic.
Leah, did you want to have the final word, and then I'll have the final word? Thank you so much. I was just going to add to the conversation around user experience and testing. The work that we've been making is accessible via both screen and VR headset, so I think one point is that offering multiple platforms or channels for access is one way of addressing that issue. And then in terms of testing, we haven't quite got there yet, but we're setting up a drop-in workshop. It does require a little bit of resource, in terms of, you know, making sure your guides aren't out of a job, and thinking about how that's supported within the institutional setting. And on other projects in this kind of area in general, we've also done more work around digital co-creation, so that our end user group is involved in the process of designing and creating digital experiences. Again, it's a sort of sliding scale of just how much resource is required, because that did require an awful lot of our time, investment and care in terms of managing those projects. But there are different ways of involving people at all stages of the process.