So welcome, everyone. We will wait a few minutes to let everyone get in, but I'm welcoming you on behalf of EDEN, the European Distance and E-Learning Network, and shortly we will begin with the webinar. We are already 29 participants, so we are really excited; we are going to have a very interesting and exciting webinar today. We can see more people getting in, we are already more than 30 participants. Hello, Cristina. Hello, Zoi, Hassan. Laura, it's a pleasure to meet you. Since we have a very tight schedule, I think we can already start. So welcome, everyone. My name is Lluís Villarejo, and together with Gizéh Rangel de Lázaro we will be your hosts today for this very exciting webinar session. Today's webinar is entitled "You can handle it, you can teach it: the use of XR technologies to enhance teaching and learning methods in online higher education". As you may know, this webinar is part of this year's European Online and Distance Learning Week, organized by EDEN, the European Distance and E-Learning Network, which is co-funded by the European Union's Erasmus+ programme. We have an exciting programme today, as I told you, consisting of two talks and a Q&A session at the end. You will be able to pose your questions in the chat, and Gizéh will be in charge of collecting them; at the end of the two talks we will discuss them. Let me tell you a little bit about Gizéh. Gizéh is a postdoctoral fellow at the Faculty of Psychology and Educational Sciences at the Open University of Catalonia. Myself, I will be chairing the session. So good afternoon, Gizéh, can you say hi to the audience, please? Hello, everyone. Good afternoon. Hi, Gizéh.
Gizéh is an anthropologist focused on using digital imaging technologies, such as 3D structured-light scanning, computed tomography and micro-CT, as well as 3D geometric morphometrics, to study functional morphology in a broad comparative and phylogenetic framework. In 2012 and 2014 she received Erasmus Mundus grants to complete the International Master and the Doctorate in Prehistory and Human Evolution at the Rovira i Virgili University in Spain and at the Muséum National d'Histoire Naturelle and Sorbonne University in France. As part of her work, she integrates techniques and methods from bioarchaeology, bioanthropology, functional morphology, palaeontology and digital heritage, rendering her research particularly cross-disciplinary. Gizéh has also actively participated in bioarchaeological and bioanthropological studies and field campaigns. She takes an active role in promoting science in both academic and non-academic communities, and she is also actively involved in promoting career paths in STEM. In 2013 she became an Erasmus Mundus programme regional representative, and as part of this international role she organized and participated in activities for the recruitment and orientation of new students. Regarding myself, my background is in computational science, especially in educational technology, where I have developed my professional and research career. I am also co-founder and CEO of Immersium Studio, a spin-off of the Open University of Catalonia, where we develop immersive learning experiences for educational institutions and companies, using especially interactive 360-degree video to enhance retention and empathy across a wide range of verticals. In this time at Immersium Studio we have had the luck to work for clients like the United Nations and the European Society of Intensive Care Medicine, where we have had the opportunity to train more than 20,000 professionals with immersive experiences.
So we are really thrilled about the current state of the art of XR in education. Before introducing our speakers today, let me tell you why this webinar, and why now. As you know, the past year, which has been defined by the COVID-19 pandemic, has seen a boom in the application of digital media to education. This has had a big impact on traditional teaching operations, which shifted to online delivery. Whether and how such activities will continue in a post-COVID-19 situation remains unclear. Apart from this, we are an online and open university with a long-lasting practice in providing flexible and innovative educational options, widening many learners' possibilities. In this webinar, our goal is to share the experience gained, good practices, and the pros and cons of handling 3D technologies and augmented and virtual reality resources to amplify a multimodal, active and learner-centred method in online higher education during the COVID-19 pandemic. As an example of resilience in the educational context, exploring the use made of these digital tools will allow us to understand how the teaching and learning process has been strengthened and interactively expanded. Our first talk today will be "Learning in the metaverse" and will be given by Professor Fridolin Wild. Good afternoon, Fridolin. Hello, good afternoon. Fridolin is a full professor at the Institute of Educational Technology of the Open University in the United Kingdom, and he is also the leader of the Performance Augmentation Lab. After Fridolin, we will have the opportunity to listen to Pierre Bourdin and his talk "XR for e-learning". Good afternoon, Pierre. Hello, good afternoon. Pierre is a lecturer and researcher at the Faculty of Computer Science, Multimedia and Telecommunications at the Open University of Catalonia.
As I told you before, during both presentations you will be able to post questions and Gizéh will be collecting them, and at the end of the two talks we will discuss them. So let me move to the first talk. I will introduce our first speaker, Fridolin Wild, by reading his short biography. Fridolin seeks to close the dissociative gap between abstract knowledge and its practical application, researching radically new forms of linking directly from knowing something in principle to applying that knowledge in practice, and speeding its refinement and integration into polished performance. Fridolin leads the Special Interest Group on Wearable-Enhanced Learning of the European Association of Technology-Enhanced Learning. He chairs the Working Group on the Augmented Reality Learning Experience Model of the IEEE Standards Association, as well as the Natural Language Processing task view of the Comprehensive R Archive Network. He is convener of the key standards working group WG 11 on future-proofing augmented and virtual reality in the joint technical committee of the International Organization for Standardization and the International Electrotechnical Commission, part of the subcommittee for computer graphics, image processing and environmental data representation. He also co-chairs the IEEE ICICLE Special Interest Group on XR for Learning and Performance Augmentation. Fridolin is a trust-appointed governor of Oxford Spires Academy. He is a trusted reviewer for several funding bodies, including the EU and UKRI. Fridolin is and has been leading numerous EU, European Space Agency and nationally funded research projects, including ARETE, OpenReal, LAR, WEKIT, TCBL, TELL ME, TELLMAP and a long list of others. From 2015 to 2020, Fridolin was a Senior Research Fellow at Oxford Brookes University. From 2009 to 2016, he was at the Knowledge Media Institute of the Open University in the UK.
Fridolin has also worked as a researcher at the Vienna University of Economics and Business in Austria, from 2004 to 2009. He studied at the University of Regensburg in Germany, with extramural stays at the Ludwig Maximilian University of Munich and the University of Hildesheim. So it is a pleasure for me to give the floor to Professor Fridolin Wild. Fridolin, please go ahead with your presentation.

Good afternoon, or, if you watch this later as a recording, good morning or good evening, whichever time of day it is when you watch this. My name is Fridolin. I'm very pleased to be here and speak to you about the naughty things we have been doing in my lab over time, and in particular the things related to what is now becoming a big thing: the metaverse. I have organized my talk in the following four parts: the challenge, the opportunity, what it actually is that we have been doing, and then of course the associated research findings; I will finish off with a short summary of everything. So let's dive first into the challenge area. The challenge is not just there because of the pandemic; the pandemic has accelerated a trend that was already happening: the trend to remote work, to more e-commerce, to automation, and that significantly disrupts society, in particular the way we work and the type of work we need. McKinsey, for example, estimates that up to 25% more workers than previously estimated now need to switch occupations. It's the same trend as before, but accelerated. If we look at the market winners: healthcare, of course, but classically STEM professionals, health professionals and management are the winners of this pandemic and the previous wave of automation. And there are also losers, in particular in office support, warehousing, agriculture, food services and so on. This is not just something that McKinsey confabulated.
If you look at earlier studies, like the one from Frey and Osborne, the colleagues down the road here in Oxford, they found already in 2013, through a similar analysis, that a major part of our societies and our jobs are being disrupted by automation. If you want to amuse yourself, check out the BBC's "Will a robot take my job?". They have a fantastic database built on the work of Frey and Osborne, predicting whether the jobs we have, the jobs of our friends, families and neighbours, are actually safe, and surprisingly not that many are. This is something governments have recognized for some time. Here in the UK, we see a shake-up of technical and further education, where new things are being introduced: T levels, for example, as an alternative to A levels, a push towards apprenticeships, skills funds, investment in further education colleges, and so on. That's happening, and not just in our country here; it's happening across the globe, with differences of course between the nations, but the run to technical education is a trend that will certainly be further accelerated. Some people take this further and say this is going to impact university education as we know it; through the pandemic, in particular, the shortcomings in the current models of university education come to light, and some of them will not survive this pandemic. Scott Galloway, for example, predicts that the big elite universities, the Ivy League, the Russell Group universities, will be the winners of the fallout of the pandemic: they will team up with big IT to significantly expand their enrolment and offer hybrid or even online-only degrees, maybe retaining the mixed experience with the brick-and-mortar campus and networks of the university for a select few, but overall seismically altering the landscape of higher education. It may even be that this does not require universities at all.
So if we look around, if we look at investments, at startups, we can see the market is being shaken up. Tony Blair's oldest son diverged a little from the policies his father stood for when in government and raised money for a unicorn tellingly called Multiverse, where alternatives to university education, without universities, are being built. Google and IBM already have offers in specific spaces, like Grow with Google or the IBM Academy, where specific professions, particularly in the IT field of course, but increasingly in other areas too, are being catered for directly by big IT. And that's not surprising if we think about it: these companies have business models that require constant growth, that require them to make promises to shareholders come true, and when markets are at some point full, the only opportunity to grow is to open up new markets or break into existing ones. So the push towards education, and the push towards healthcare and digital healthcare, is not surprising. The battle to control the metaverse has begun. That's the context, dark, but at the same time I think we also have a great opportunity with this black swan of the pandemic. The opportunity is that through the pandemic we have experienced a push towards distance education technologies. We have informed predictions by Sir Michael Barber, for example, who led a study for the Office for Students, the regulator for higher education in the UK, covering all 400 further education colleges and universities of the UK, predicting that the pandemic has changed our approach to learning forever, that a disruptive avalanche has arrived, and that we should all work together to use this as a slingshot, as a gravity assist, to roll out more educational technology across the board in all areas. And I'm very proud that I had an opportunity to provide input here as well, for the area of AR and VR.
The report in particular says there is a huge potential to strengthen and expand technical education and skills-based learning and training, because these technologies help create an authentic experience in areas where we otherwise simply can't, where it is too dangerous, too expensive, or access to tools or kit is limited. That's an area where we can expect these technologies to thrive. They also have the potential to really change our approach to learning and teaching, because they offer opportunities for different types of personalization, as well as an unprecedentedly strong link between theory and practice; as I like to call it, a direct link from the potential for performance, the potential for action, from competence, to its performance and vice versa. And that I find, as a technologist and as a learning researcher, particularly exciting. If we don't want to leave the field to the movement out there engaged in building the metaverse, in building this next generation of personal computing, the future of work, the future of learning, then we need to get off the bench now and act, and see how we can bring innovative new technologies forward to continue to be an important player in this field. There are plenty of names for the metaverse; you may have heard it in different flavours: the mirror world, the augmented reality cloud, the Magicverse, the spatial internet, the overlay, cyber-physical systems, or the classic notions of extended reality, mixed reality, augmented reality, twin worlds. What they all share is that they start seeing personal computing more as a platform, turning away from devices that mediate our access to the digital overlay and towards unobtrusive wearable computing technologies.
This example that you see here is a spatial map of my office, using computer vision technology and the LiDAR scanner on an iPad to figure out where the walls and surfaces are, whether there is a chair, maybe even mixing in some semantic spatial understanding, and using that to anchor augmented content and deliver it to the user. If it's delivered on smart glasses, it even becomes a rather believable reality. Just to clarify: what I'm talking about here on the mixed reality spectrum is more on this end, the end of augmenting the real environment with a light touch, providing some overlays but otherwise staying in the real world. Whereas on the other end of the spectrum, and that's where I think Mark Zuckerberg gets it a little bit wrong, we look at a fully immersive environment, where the best use is in simulations of things we otherwise can't do or that are dangerous, and certainly not in exploiting people who live in poverty and precarious conditions by giving them a sort of Matrix that keeps them happy. But it is a spectrum; the difference between the two is often not black and white, and there are applications that can even change over time, meandering between the experience points. So what have we been working on in this space of the metaverse, of the overlay reality? We have been working on rapid reskilling. On my chair, over the last decade or so, we have put a lot of effort into this across various EU projects, at the moment most notably ARETE, funded under the interactive technologies call, where we investigate the use of augmented reality for education at the end of primary and the beginning of secondary school level, but a wide range of work besides. Our vision is to create the rapid reskilling we need to prepare for a future where speed is of the essence: the speed with which we learn something defines whether a business is successful or not, whether a person is more or less employable.
And for us, augmented reality is a key ingredient in that: it has the potential to enable us to go from unskilled to mastery in no time, and, deliberately provocatively, to do that on the job. And I think that's even more notable: the separation between work and learning is artificial. It doesn't need to be there. We can enable people to be productive from the moment they engage in learning. We use wearable technologies, and I'd like to invite you to take a look into our overview chapter on wearable-enhanced learning from EATEL's Special Interest Group on Wearable-Enhanced Learning, where we provide an overview of the ages of technology-enhanced learning we are going through. We are certainly, in my view, in a phase where we will see a lot of new tech; we already see a lot of gadgets, and smart glasses are on the verge of lifting off, with more than 50 producers worldwide. Ray-Ban recently brought out a new set of smart glasses; the big players are jumping on it. It's a question of time until the market is big enough to be considered on a par with mobile phones. That doesn't happen from today to tomorrow, but device penetration rates are increasing year after year. Beyond that, I think a lot of innovation will happen that we will only see in a few years, where at the moment we see exciting research prototypes, for example around smart textiles: you can build a computer completely out of fabric, using conductive threads to build the equivalent of a PCB with an embroidery machine, and you can use stretchable or touch components to embed interaction directly in the things you wear. In that space, in particular, the applications in haptic and physical feedback are unprecedented; guiding people through movement, for sports, for rehabilitation, for exergames, will be truly exciting. The years to come will show.
But let's stick with the smart glasses here for a second. If we look at reality as a platform, at augmenting reality, how does that actually work? In my definition, this touches four areas that we can manipulate and control. The first is reality itself: augmenting reality can sometimes mean that we alter the physical reality, that we design spaces so that they are more amenable to tracking, to projection or stabilization, that we avoid reflective surfaces, for example. You find today manufacturers of helicopters who design the frames of their helicopters so that they are more amenable to tracking, so that they can deliver handbooks, maintenance instructions and the like directly, and better than if they hadn't altered reality. The more common place where we manipulate things, and that's where reality as a platform kicks in, is of course the delivery system: projection, smart glasses, contact lenses maybe in the more distant future, and handheld devices. Almost any Apple device from 2014 on and almost any Android device after 2016 (not all; some lack tilt sensors and the like) can use modern augmented reality technology, and that is increasingly a market. But we have to keep in mind that on the left side we need to understand and design for perception and for experience, so that we get things through the human perceptive system, which already does processing and manipulation from the retina to the ganglion cells to the visual cortex, and from there weaves it together with other thoughts, fantasies and experiences. When we augment reality, we look at all four of these stages: manipulating reality, manipulating the pretend reality, the augmentation in the delivery system, but also doing it in a humanly understandable way, with perception design and a good learning experience. There are many tools.
Years ago, one would have said: if you want to engage in this space, you have to build an app; you hire a developer and a designer and you build an augmented reality app. That, I think, is what is now changing with the metaverse clearly coming. We treat reality as a platform, which allows people with less specialist skills to create and design that reality using editors, using authoring tools. There are many; here I selected just a few, like RE'FLEKT, with a very strong standing in the automotive sector, for example in training, or Adobe Aero, a fantastic tool, and others are quite similar. Many of them are truly good augmented reality editors, but they are not necessarily tailor-made for education and training: they lack some of the support functionality, some of the affordances, that we require when we turn to learning. We are just about to publish a new report, a review of authoring tool functionality in support of learning, where we conclude, across the roughly 900 screened works in our systematic literature review and the lower number of papers we then drilled down into and included, that there is hardly any evidence for any of the higher levels of Bloom's taxonomy. As you can see here, most of the affordances supported by systems in the literature focus on the lowest levels, remembering and understanding, with some going towards direct application, but hardly anything goes further in supporting learners in analysing, evaluating and creating. It's clear that educational applications often lag a little behind what is technologically possible, though I shouldn't think that holds for our flagship technologies.
On the contrary, I sometimes think we are stretching the envelope. Certainly, with the big milestones we see here in the technological development, the first real smartphone, where augmented reality becomes possible with camera see-through and overlays and so on, and the invention of the first serious consumer-grade smart glasses, something unlocks, and we will undoubtedly see a lot more development in this space moving forward. So: lots of tools, not many made for education and training, and that's why we lifted off about seven or eight years ago, coming up to a decade now, to build our own solution, which, now in its third rewrite and its fifth or sixth surrounding research project, has been released as open source: MirageXR. MirageXR has a vivid developer community of 28 members at the moment, actively releasing; version 1.8 is about to come out, with a backend in Moodle, which I'll speak about in a second. We would of course cordially invite you to check it out, and I will happily put a link in the chat where you can find the Moodle plug-in, which we are waiting to get into the Moodle plug-in repository, as well as the sources for the cross-platform application, which runs on mobiles, iOS and Android, on HoloLens 1 and HoloLens 2, and on more platforms to come in the future. So a cordial invitation to try it out; we believe in it, and we think we are on the right track here, moving things forward towards reality as a platform for learning and teaching. The way it is conceptualized in our system is this: we consider learning to take place in activities. Each activity consists of steps, and each step has a task station, a location of interest, and of course a description of the step. So that you don't get lost, there is always an aura surrounding you; if you look down, you will find a floor line which takes you to where the action happens. In that space, we then have different types of augmentations attached.
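The activity model just described, activities made of steps, each step anchored at a task station with a description and a set of augmentations, can be sketched as a small data model. This is a minimal illustration, not MirageXR's actual schema; all class and field names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class TaskStation:
    """A location of interest that anchors one step in the room."""
    name: str
    position: tuple  # (x, y, z) in the calibrated room frame

@dataclass
class Step:
    description: str
    station: TaskStation
    augmentations: list = field(default_factory=list)  # images, video, ghost tracks, ...

@dataclass
class Activity:
    title: str
    steps: list = field(default_factory=list)

    def next_station(self, current_index: int):
        """Return the station the floor line should guide the learner to next."""
        if current_index + 1 < len(self.steps):
            return self.steps[current_index + 1].station
        return None

# A tiny two-step activity in the spirit of the ECG example shown later.
ecg = Activity("12-lead ECG placement", [
    Step("Greet the patient", TaskStation("bedside", (0.0, 0.0, 0.0))),
    Step("Collect electrodes", TaskStation("trolley", (1.2, 0.0, 0.4))),
])
```

The floor-line guidance then simply points from the learner's position towards `next_station(...)` of the current step.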
These augmentations come from an instructional design model, a modified 4C/ID, which consists of four areas: supportive information, part-task practice, procedural information, and learning tasks. So there are different instructional design methods for these four sub-areas. In the predecessor project to ARETE, WEKIT, we already looked at the rich array of augmentations that support these areas. We have now extended this further to cover, at the moment, 14 types of augmentations, from rather standard stuff like images, videos and audio to more complicated things. For example, we have an experience capture mode that captures what an expert does, asking them to provide, using a think-aloud protocol, instruction the way they would tell it to a trainee. So you walk through the space, it records your torso and hand movement (and at some point in the future also fine-grained finger movement) and your voice, and when you press stop, the recording is uploaded to the cloud; when your learners get it, they will see a ghost of you in the room with them, explaining how things work. There are many other augmentations. I will also speak a little more about one of my favourites: holographic artificial intelligences, intelligent tutor character models that you can talk to, like Siri, Alexa or Cortana, but which can additionally also show you things. I'm not going to talk about them in more detail now, but I'll show you what this looks like here. You see an example in a hospital setting: this here is a ghost recording of an expert providing some explanation about how to deal with a paediatric patient, explaining the drip pump, and then, in the next step, using our visual language of overlays to direct attention to a specific spot. This is delivered on smart glasses.
You have to imagine that you see this shimmering ring and the figure in the room with you, but otherwise you see the real world unobstructed; that is how this is delivered on smart glasses. On a mobile phone, of course, you see the world through a window, with a video feed, which looks less shimmering and transparent and not as cool, I find, but is nevertheless very usable. The way we have built our technology stack is this: we have developed a combined authoring and viewing tool, MirageXR, which is available for four platforms, iOS, Android, HoloLens 1 and HoloLens 2, that is, different hardware platforms, and this communicates with a repository plug-in for Moodle, where the content produced is stored and can then be further edited on the web. It is much easier to quickly type something in on a keyboard for the task card, for example, rather than having to type it on a holographic keyboard. There is also a learning analytics solution; in our case we use Learning Locker, a product from a small Oxford-based company, which is I think the most popular solution in that field, to collect real-world interaction behaviour traces. At the moment we are also working on the first assessment modules; we already have some and are extending them further. On the right-hand side you see interfaces to 3D repositories like Sketchfab, which allow us to tap into the resources available. Sketchfab has more than four million 3D models, hundreds of thousands of them free; there is something for anyone's taste, and it can also be used to upload your own assets. Behind the scenes, we started in 2015 to work on an Augmented Reality Learning Experience Model for the IEEE Standards Association, which since 2020 is a fully fledged standard. Behind the scenes we use this activity description language, together with what we call a workplace description language.
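Learning Locker, mentioned above, is an xAPI Learning Record Store, so interaction traces of this kind are typically shipped to it as actor/verb/object xAPI statements. Below is a minimal sketch of building one such statement; the actor and the activity id are invented for illustration, while the verb IRI is a standard ADL vocabulary entry.

```python
import json

def xapi_statement(actor_name, actor_mbox, verb_id, verb_display,
                   object_id, object_name):
    """Build a minimal xAPI statement as accepted by an LRS such as Learning Locker."""
    return {
        "actor": {"name": actor_name, "mbox": actor_mbox},
        "verb": {"id": verb_id, "display": {"en-US": verb_display}},
        "object": {
            "id": object_id,
            "definition": {"name": {"en-US": object_name}},
        },
    }

# Hypothetical trace: a learner completed one step of the ECG activity.
stmt = xapi_statement(
    "Ada", "mailto:ada@example.org",
    "http://adlnet.gov/expapi/verbs/completed", "completed",
    "http://example.org/activities/ecg-step-3", "Place chest electrode V1",
)
payload = json.dumps(stmt)  # this JSON body would be POSTed to the LRS statements endpoint
```

The LRS then aggregates such statements into the behaviour traces used for analytics dashboards.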
These describe the environment surrounding the use, the classroom, the home environment, the work environment, in order to deliver these learning experiences. The activity modelling language describes the step-by-step interaction, in which augmentations can be brought up, and how to link with them and talk to them. The workplace modelling language then contains information about how to recognize things: how a specific location can be detected, how objects can be labelled with image target markers, or how to communicate with sensors, in case you want to take a direct reading from a machine that then alters the activity sequence and personalizes it, things like that. We have, of course, a layered application, which has grown quite complex over time; there is increasingly good documentation available, supporting developers also in extending the technologies that we have here. It is a service-oriented architecture, so some parts are stored in the cloud and database access is secured, so that we have a thin-client application where the overlay and learning sequence are stored online and then downloaded and executed. So: reality as a platform. Here is a slightly more complicated example. It is a guided training for our nursing students in recording a high-quality 12-lead electrocardiogram (ECG), a skill that requires regular practice: the placement of the 12 lead electrodes. We see here the first step, calibrating the location, which allows us to put up a hospital bed in the student's room, or, if we remove the virtual bed and tie the experience to a real hospital bed, to bring the virtual patient directly into a real bed. At the foot of the bed is the trolley with the resources for this procedure. So those are the stages; I'll jump ahead a little. Here we see a virtual patient. An ECG records the heart's electrical activity. Meeting the patient, checking position, and then doing the quiz, for example on where the different electrodes need to be placed.
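The idea that a sensor reading from a machine can alter and personalize the activity sequence can be sketched as a simple branch in the step graph. This is an illustrative sketch only; the step ids, field names and the threshold are invented, not part of the actual workplace modelling language.

```python
def select_next_step(current_step, sensor_reading, threshold=80.0):
    """Pick the next step id based on a live machine reading: the sequence
    diverts to an alternative branch when the reading crosses a threshold.
    All step ids and the threshold value are illustrative."""
    if sensor_reading > threshold:
        return current_step["on_alarm"]   # divert to a safety procedure
    return current_step["on_normal"]      # continue the standard sequence

# A hypothetical step that branches on a pump pressure reading.
step = {"id": "check-pump", "on_normal": "record-reading", "on_alarm": "shutdown-pump"}
select_next_step(step, 72.5)  # reading below threshold: standard sequence continues
```

In a full implementation the chosen step id would be looked up in the activity model and its augmentations delivered next.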
That's relatively straightforward for the wrists and ankles, unless you mix up left and right: is it my right or the patient's right? It gets a bit more complicated when we look at the placement of the chest electrodes. We have to count across the intercostal spaces, following the instruction here, and then test whether the placement is correct. So here the first chest electrode is placed, on the right side, and we can see that it snaps into place when it is correct; otherwise it jumps back. That's just an example; on the right you can see what the same application looks like on a mobile device, here on an iPad. That's basically what we have been doing. At the moment we are researching a lot into holographic AIs, where we have just released the first feature: character models that we scan from real people, or model, and that can then truly walk around with users in the space and talk to them. "Hi, I'm an intelligent virtual teacher. My job is to help you learn geometry, especially 3D and 2D shapes. You'll learn how to identify different shapes and also understand their features. Are you ready?" "I am ready." "What's your name?" "My name is Fridolin." "Let's go over the ice cube activity and look. Well done. You can see that we are living in a three-dimensional world. Everything has height, width and length, such as books, balls and houses. 2D is different from 3D: 2D objects cannot be physically held, but 3D shapes are tangible and can be picked up. Now the first shape you need to learn is the cube. Look, I take out an ice cube. Could you count how many faces it has?" "Six." "That's it. Could you count how many edges?" And then it continues a bit further. It's powerful: if the dialogues are well structured, it allows users to explore things at their own speed, with their own problems. It is an art, however, to design these dialogues.
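A dialogue of the kind just demonstrated is commonly structured as a tree of nodes, each with an utterance and branches keyed by expected answers. The sketch below is a minimal illustration of that idea; the node format and matching rule are invented, not the actual tool's representation.

```python
# A tiny dialogue tree in the spirit of the geometry tutor demo.
dialogue = {
    "start": {"say": "Are you ready to learn about 3D shapes?",
              "next": {"yes": "cube", "no": "start"}},
    "cube":  {"say": "How many faces does a cube have?",
              "next": {"six": "praise", "other": "cube"}},
    "praise": {"say": "Well done!", "next": {}},
}

def respond(node_id, user_input):
    """Advance the tutor one turn: match the learner's input against the
    node's branches, falling back to an 'other' branch (or repeating the
    node) when nothing matches."""
    node = dialogue[node_id]
    nxt = node["next"].get(user_input.lower(), node["next"].get("other", node_id))
    return nxt, dialogue[nxt]["say"]

node_id, line = respond("start", "yes")  # tutor moves on to the cube question
```

How rigid the dialogue feels depends on how the branches are keyed: exact phrases give precise guidance, looser matching (or an intent classifier) gives the more open, assistant-like behaviour described above.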
So we use a web-based tool for that, where you develop a dialogue tree, and depending on how rigid or how flexible that dialogue is, it can guide people quite precisely through something, or allow more of a Siri-, Alexa- and Cortana-like open situation, where you ask a question and then it takes it from there. There is still a lot to be done to make these holographic AIs a little bit less uncanny. We are also experimenting with animated characters, like an alien, to see if that makes a difference, and are going to investigate over the coming years also the role of trust and how that is influenced by all these different visual and other properties that we can manipulate: appearance, behavior, intelligence, responsiveness. As you will see in one of our latest publications. It's in this one. Yes, so that's the stuff we're engaging with. So, a quick run through some of our findings, although of course this is ongoing work. We're just ahead of two big pilots: one for OpenReal within the Open University, where over the course of the next year we investigate the impact of these technologies on student engagement, retention and many other things. The other one is with teachers in ARETE, where we're looking at a significant number of teachers, as well as a second pilot in Italy with a significant number of students, primary school children, where we look at a bespoke build based on our framework. And the stuff I'm going to present will of course grow, and we will have more findings. We know, however, from the trials we conducted before, with astronauts in space, on the ground in a replica module of the International Space Station and in a physical simulator of Mars, as well as with maintenance engineers of airplanes and with radiologists in training for medicine, that the general acceptance is very high.
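A dialogue tree of the kind such a web-based tool produces can be sketched as a small table of nodes, each with a prompt and branches keyed by recognised learner replies, plus a fallback. The node names and prompts below are invented for illustration; a rigid tree guides precisely, while broad fallbacks move it toward the open, assistant-like style mentioned above.

```python
# A minimal dialogue tree for a scripted virtual teacher. "*" is the
# fallback branch used when the reply matches nothing else.
TREE = {
    "start":  {"prompt": "Are you ready?",
               "next": {"yes": "cube", "*": "start"}},
    "cube":   {"prompt": "How many faces does a cube have?",
               "next": {"6": "praise", "*": "hint"}},
    "hint":   {"prompt": "Count again: top, bottom, and four sides.",
               "next": {"6": "praise", "*": "hint"}},
    "praise": {"prompt": "Well done!", "next": {}},
}

def step(node: str, reply: str) -> str:
    """Advance the dialogue: pick the branch matching the learner's reply,
    fall back to "*", or stay on the current node if neither exists."""
    branches = TREE[node]["next"]
    return branches.get(reply.strip().lower(), branches.get("*", node))
```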
So if we pick one of the parts here from our space testbed, we can see, for example, that the expectation that facilitating conditions must be there is high, and there are specific aspects here: PE, that's performance expectancy, from Venkatesh's original model. The expectation is high that it impacts on performance, on precision, and the other one is completeness, knowing when things are finished. That's in line with our findings from the European Space Agency technology roadmap, which had similar expectations with regards to performance expectancy. But also if we look at the affective quality, HM2B for example has a very high affective quality. That's certainly the case because the medium is new, smart glasses in particular are new, but then again, I think this is also something that will not necessarily wear off; it is just in the nature of augmented reality. So we are at the moment consolidating findings and are extending our prediction model for technology acceptance, which we call TAMARA. There's one more publication missing, coming soon, before we apply it in the next pilots. With regards to the overall user interaction satisfaction, we know, just to pick a few, that on these polarity scales people consider the technology to be wonderful, stimulating and easy, and generally still are on the very positive side of these polarity scales. We also looked into demographic effects and at the moment can't find any. This may change; I hope not. It's an opportunity to get things right this time. However, since the medium is particularly new, we did not find any differences between students and experts, male, female, young, old, degree of education. The only difference we found was in the area of self-certified computer knowledge, which made a significant difference. So people who say they are very good at computers also will enjoy it more, whereas people who say they are particularly bad at computers will not necessarily enjoy it so much.
But all the other standard demographic effects that we've seen in the past for new technologies, or not so new technologies, like experts, males and the young performing better, are certainly not the case right now. And that's a good thing. We will see how that changes over time and what can be done against creating such demographic effects. With regards to retention and memorability, it's more tricky. So we do have positive findings, but not necessarily everywhere. The one where we saw it most was in the aviation testbed, where most of the questions would be remembered better in the AR condition, and the control group performed better on only one question. If we level that out across all testbeds, we had some surprising effects, and that was not the ghost. We were expecting that the ghost would be a highly motivating thing that is fun to use. We were a bit surprised that images in general had a negative effect on retention, and only in one testbed a slightly positive one. It's probably down to the quality of the images, that is something one can read from here, I'm sure. Yeah, well, for funsies, we also checked whether there is a problem with vertigo, whether there is any simulator sickness with smart glasses, here with augmented reality smart glasses, and we did not find any. The only difference we found, on the HoloLens 1, was a bit of eye strain, which is not so surprising. The display planes are relatively close. It has something to do with the vergence-accommodation conflict: it simulates perspective, but isn't really the same perspective. So people reported a bit of eye strain, and that may have even been improved now with the HoloLens 2, which has a more balanced design as well, the ability to fold it back, and some things like that.
Yeah, to summarize and conclude: the pandemic has undoubtedly accelerated the trend towards technical education as the answer to increases in automation. Jobs that can be done by a robot should be done by a robot, but somebody needs to invent and service that robot, in a nutshell; maybe not, that's probably a bit exaggerated. The university is certainly more challenged than ever, in particular in the English-speaking countries with high study fees. Those high study fees create room for alternative provision, and people and companies are jumping into that field, as well as universities. We're expanding our apprenticeship program, for example, year after year. There is certainly a big question in the room: what will happen when Meta, Microsoft, IBM Academy, Google Grow, and whoever else from the multiverse jumps into that space, as rapid reskilling promises growth. There is space for innovation, for getting somewhere quicker, with earlier productivity, less separation of learning and application context. The metaverse is one of the technologies that are coming, that are disrupting everything, and that is an opportunity that we can grasp in the tech sector. So for XR learning in the real world, as I call it, we already have plenty of evidence, and that is also in line with what other studies find, that it can be used very beneficially to relax constraints in space and time, with of course the associated cost savings as well. It can increase engagement, it allows us to track performance in quite different ways, and that makes the separation of the learning and training context from the application context less necessary. It certainly has the potential to completely disrupt teaching as we know it. Maybe in 10 years, we would meet in the metaverse rather than in a video conference call.
There are new approaches possible with wearable computing that have more to do with capturing what the expert does and then using AI to understand it; converting our ghosts into artificial intelligences that can converse with the user is one of the goals, of course, that we have. And it can drastically reduce training downtime and learning downtime, especially where travel to locations is required, or not possible, or only possible at the cost of safety. It certainly provides more autonomy to learning, teaching and training than we ever had. Just as an example, our colleagues from Altec said a Mars mission is only possible with technologies like these, because unlike a moon mission, it would otherwise take six years for people to train, making it basically unfeasible. And that requires a paradigm change in the way we produce things and in the way we learn: creating repositories from which you then get live guidance when you need it, rather than practicing things a long time ahead. Yeah, that's it for me. Thank you for being here with me. I'm open to questions in the Q&A, and I would like to finish with these two pointers: I think XR for learning is an opportunity that can help us deliver quality education at a lower price, and can support decent work and economic growth. Thanks. Thank you a lot for your presentation on the exciting and mighty things you are doing and their global context. A lot of opportunities are unfolding in education and technology today, for sure. I'm sure that the audience is eager to get to the Q&A session to further expand on some of the topics that you have laid on the table. So now it's time for the second presentation. Let me briefly introduce Pierre Boudan by reading his short biography. Pierre graduated in computer science and robotics engineering.
He taught computer science in France for 10 years before moving to Barcelona to work as a researcher under the supervision of Mel Slater in the Event Lab at the University of Barcelona. He is now working as an associate professor in the multimedia team of the computer science department at the Open University of Catalonia, where he is responsible for 3D programming, virtual reality and video games programming. His research considers the use of virtual reality and immersive technologies as tools to carry out research both at the technological level and at the psychological level: studying, for example, the behavior of people in virtual worlds, or the contribution of immersive technologies to education, health or e-health. It is a pleasure for me to give the floor to Professor Pierre Boudan. So Pierre, it's your time. Thank you for the introduction. I think you said everything perfectly. So I'm trying to share my screen. I think it should work. So thank you for inviting me. I will do a short presentation, a little bit more do-it-yourself, maybe, than the amazing projects that Fridolin has shown us. First, I will talk a little bit about what I call the XR technologies and the differences I see between these different technologies, a very short part about their evolution and characteristics, then some applications in education, some ideas and some projects that we are running now, and the challenges and the conclusion, which will not be very different from Fridolin's. To begin with, I think it's important to define what we call immersive technologies. I mean technologies that are trying to emulate the physical world through a digital or simulated world, and this creates the sensation of immersion, of being embodied inside the application. I think this is something that is common to all the media.
Even if you are on a very small screen, maybe the screen of a mobile phone, if there is something in the virtual world happening on this screen, you are immersed inside the screen and you forget everything around you. So that's how that works. But the differences are important. There are different technologies, as we've seen. There is this continuum, and in this continuum there is augmented reality on one side and virtual reality on the other side. In augmented reality, we have the reality and we have some information on top of it. And usually, especially in education, it has been used a little bit more, because it became a little bit less expensive once smartphones were commonly available. So it was quite common to do some work with augmented reality. Virtual reality at the very beginning was reserved for big companies or big laboratories, because it required a lot of technology, very expensive helmets and these kinds of things. But this has changed. And as Fridolin has explained to us, there is like a mixture now between those technologies, and the difference is not so evident anymore. You have the HoloLens, which is a helmet offering augmented reality, and you can probably see less and less difference between the two. And the last one is 360 video. These are videos where you are at a fixed point but you can look all around in 360 degrees. So it's a bit different, but it has also been used quite a lot because it was easier on the technology side: you just have to record the video, then you can display it, and it already offers some freedom to the viewer in the sense that he can choose the direction where he wants to look. But it's a bit different in the sense that you can't modify the video. What is recorded is recorded, and you can only change the point of view, not the place where it has been recorded.
Another point I would like to recall to the audience is that we sometimes think this is something new. Actually it started in the 60s, and I like to remember this Sensorama machine from Morton Heilig, which was a simulation of traveling on a motorbike. And it was really immersive and really multi-sensory. As you see, in the seat there was some vibration to simulate that the person was traveling on the motorbike. You also had the smell of the gasoline, and on the screen you could see the recorded movie of the track that had been ridden with the motorbike. Since that time, of course, things have changed a lot. And I think one of the key points was 2010, with the release of affordable HMDs, helmets, which changed the game a little bit. Before that, virtual reality was reserved for big companies or big universities, because it required a lot of investment in the technology. But nowadays, the technology is less and less a problem, it's improving, and the price is also decreasing. So it's more affordable. In education, there are different motivations, and I think the previous presentation explained them quite well. For example, you can use it to do some time travel. You can do things that are not possible, like exploring other planets, or you can avoid dangerous situations. It is also very interesting for ethical reasons. For example, when you think of students learning surgery: instead of working with cadavers, they can work with a simulation. And many studies have shown that it's not the same, but it gives very good results. And so it's much better if you can train on a mechanical system, with an artificial simulation, than working on cadavers. Other advantages over traditional methods are that it can transform the abstract into the tangible. So you can have something abstract and represent it in the virtual world, and you can make it easier to understand because you can visualize it.
And sometimes you can even manipulate it, and this is something very important, something that the other media for learning do not necessarily offer: you can learn by doing instead of just observing. And of course, I think it is a complementary method. It is maybe not necessary to substitute everything, or to think about how to substitute everything with these technologies. But I think they can complement, and they can give a lot of new opportunities to students. For example, it is very interesting when you have a desirable situation that concretely you can't achieve. If you want to travel with a classroom of pupils, for example, to many different places, let's say in Greece, in Germany, in Romania or in Mexico, you couldn't do that. But using virtual reality, you can in a way transport all the pupils to the same place and visit, almost at the same time, these different points of interest. Also, it can be interesting to break the boundaries of reality. For example, you can define a virtual world with different laws of physics, and you can even manipulate these laws of physics in real time and see the results. So it's again learning by doing, in the sense that you can manipulate the laws of physics and see the results in real time. It has also been demonstrated by many authors that it helps students develop their creativity and innovation. So I think it's very interesting, not only because you have this gadget effect, I would say, or this wow effect, where you have a new technology and you bring it to the classroom and the students are interested because it's new. Many studies have shown that it really increases creativity and innovation; by doing this, you allow the students to be more creative.
It is also very interesting to have safe, simulated environments, and it has been shown that when the learner is in a safe situation, it decreases their anxiety, and this improves the learning and encourages collaboration. So in that sense, it's very interesting. It is also used to visualize data that are complex, or to help make decisions in a more efficient way. For example, there are applications in medicine where you have to learn how to give bad news, for example, to a patient or to his family, and this is a situation which is quite complex and difficult to learn. Most of the time in nursing school or in medical school, they don't learn it, and you have to learn it facing the patient, trying on your own to explain to them the bad situation they are facing. So by doing a simulation, you can learn with a virtual agent, like we've seen in the previous presentation, how to deal with this situation, and this is very interesting. Another interesting point is that it can help you put yourself in the shoes of someone else. So you can suddenly become a girl, you can become a guy, you can become a racialized person, and so you can understand the world from another point of view, and this is something other technologies can't offer in the same way. This works especially well if you use a technique that is called embodiment, where you have an avatar and you control the avatar and you see this avatar from the first-person perspective. So when you move, you see the avatar moving, and you feel that you become this character, this avatar. Even if it's virtual, even if it doesn't look really human, it doesn't really change anything: your brain treats this character as your new body, and then you can start running experiments or learning actions.
So this is very interesting, for example, to prepare future teachers to manage a difficult situation like bullying or a meeting with parents, in the same way as the medical doctors we mentioned before. But it can also help a lot for doing role-playing activities, as is necessary for psychology students or also for law students. For example, if you want to become an advocate, maybe you have to learn how to be the judge, how to be the defense lawyer or the prosecution, and so you can play the different roles, switch between them, and learn from that. So I will show you something different, something that we've done for students who want to learn space design at a distance. It is an application, and concretely it's a short video where we were explaining the virtual environment to the architects who were teaching this classroom. You can see that the person is wearing the helmet and entering the virtual world, and she can manipulate and design the world: she can add some furniture or some decoration elements, like this plant, and she can manipulate and control the whole environment. So that was the idea: to have this virtual environment and teach the students how to design the environment depending on its function. For example, if you want to make a co-working space or a hostel, you wouldn't design it the same way. An interesting feature is that we implemented a virtual camera, so you could organize the virtual space, take a picture of it, and include it in the reports you have to send to the teacher. So it was pretty much the same situation as if you were in the real place and were reorganizing or redesigning it.
Another very interesting feature is that you can manipulate the environment and control, for example, the light of the day, the time, the temperature, so you can see the effect of the different lightings during the different periods of the day. Sorry. So another example that I wanted to show you is a pilot that we are developing right now for teaching photography and video. The idea is to have a metaphor where we use the mobile phone as the camera, and we want to have this augmented reality application where the students can manipulate their mobile phone as the camera inside the augmented virtual environment. The idea, by using this, is to encourage the student to act and experiment, having this freedom of action. And the specificity of the project is that the teaching is done only remotely and the activities have to be asynchronous. So I also have a very short video of the application that I will comment on. The idea is to learn the different shots that you can have with the camera, and we have implemented different modes, like free mode, sequential or interrogative, where you can see the different axes, the different shots with the camera. So if we go for one mode: you've briefly seen the marker that is on the table, and then you have the virtual environment that appears, and manipulating the camera, you have to go to the indicated shot. So for example, it says go to the "picado" shot; I don't know the English word for it, but it's from the top, I would say. So you have to orientate your mobile phone to be in this precise position, and then the application gives you feedback. So you can change the... So for example, here it says frontal, so you have to be in front of the character, and the application gives you some feedback. Well, I think you basically understood how it works.
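The angle feedback described above (telling the learner whether the phone is at the requested shot, e.g. the top-down "picado" shot or a frontal shot) could be sketched as an elevation-angle check. The geometry and tolerance here are illustrative assumptions, not the project's actual implementation.

```python
import math

def elevation_deg(camera, subject):
    """Elevation of the camera above the subject, in degrees:
    0 = level with the subject (frontal shot), 90 = directly overhead
    (the top-down 'picado' shot)."""
    dx, dy, dz = (c - s for c, s in zip(camera, subject))
    horizontal = math.hypot(dx, dz)           # distance in the ground plane
    return math.degrees(math.atan2(dy, horizontal))

def check_shot(camera, subject, target_deg, tolerance=10.0):
    """Feedback similar to the app's: confirm when the learner holds the
    phone within tolerance of the requested shot angle, else nudge them."""
    angle = elevation_deg(camera, subject)
    if abs(angle - target_deg) <= tolerance:
        return "correct"
    return "raise the camera" if angle < target_deg else "lower the camera"
```

For example, a phone held almost directly above the character passes a 90-degree "picado" check, while a phone at the character's eye level is told to raise the camera.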
So what interested me in this, and what I wanted to tell you about, are the difficulties we faced during the development of this prototype, and particularly how taking ownership of the technology has been difficult. What can be done, what cannot be done, and how to design the activities without knowing the technology is complicated. It's very difficult for the teacher to develop an activity when you don't know the limits of the technology. I will give you an example. At the very beginning, what the teacher wanted was to manipulate the lights. He was preparing some exercises with different lights, like projectors or photographic lights, to see the influence of the lights on the shadows of the different characters and how this could influence the scene and make it sharper or softer. The problem is that with the technology we have, actually only a few mobile phones are able to handle the lights in a decent way. And also, if you have more than two lights, the system probably does not work very well anymore, because it's too complicated for the system to calculate all these lights. So we had to redo all the exercises and forget about this. Moreover, it's not only the technological problem but also the way of thinking that has to change. For example, this teacher is a specialist in video and he's very good in that sense, but he's thinking in what I call video mode. I mean, in a timeline which is sequential, and he's planning the action sequentially. And when you have this augmented reality application where the students have freedom and the different actions are not sequentially organized, it's completely disconcerting for him, and it's difficult for him to think about activities that are not sequentially organized. So it's not only the technology but a little bit more, and people need to get used to this new way of thinking.
In that sense, the dialogue between the technological part and the teaching part is really the keystone. It's really important, and it's also important to have very short cycles and constant exchanges between the people. Otherwise, it's very complicated. As for the challenges in the future, I think my colleague already explained this very well. The software and the hardware costs are improving a lot. I think both parts are really affordable now, you can find very interesting software and hardware available, and for sure it will continue this way. The logistics and the scalability are also, I think, almost solved. In our case, where the teaching is 100% online, for example, we need the students to have equipment that is sufficiently powerful to run the augmented reality or virtual reality application. At the beginning that was really a problem, and now it's becoming less and less of one. The accessibility and the dizziness, I think these problems are also almost solved; they are improving a lot. The scientific validation is also something that is already proven. But I think it lacks long-term engagement and a demonstration of the long-term effects, and I think the investigation is still at an early stage. As I said, this is something that has been studied since the 50s, but still we are at an early stage in the sense that we need more research, especially to define where immersive technology should be applied or not. I don't think everything is good to be done with these technologies. What are the contributions, the costs, the limitations? And we also need studies, especially in learning and e-learning, that go beyond what I call the wow effect or the gadget side, where you do a comparison between a control group where students use traditional materials and some other modality where they use exciting new elements, virtual reality or augmented reality stuff.
And of course, there is more engagement and more attractiveness for the part with this material, with the technology. So to conclude, I would say that these technologies are really promising tools, and they offer almost an infinity of possible applications. I think there is still a lot to explore, especially in the field of e-learning. I believe, as my colleague said, that probably in the next 10 years we will see big differences coming, and a lot of people will probably be teaching in the metaverse. I think it is very valuable for enriching the learning experiences. We should not think of it as something to replace the way we are teaching now, but as a way of enriching and adding a new dimension to this learning. And I would also like to recall that the ethical aspects are very important and should not be neglected. Especially when you think of this metaverse and all of these new virtual worlds that are connected and where people will interact together, this will probably raise, I mean, this already raises ethical issues that we have to think about and, as academics, have to study. I would just like to remind you that when you live a virtual experience, an experience in the virtual world, this experience is real. I mean, if you have a bad experience in the virtual world, even though it's virtual, it affects you, and it can affect you badly. So we really have to think about and consider these ethical aspects as something very important. And the last point is that the technology is ready. I think it's the moment. So if you want to do more with virtual reality or augmented reality, well, jump on it and take your chance; it's really the good moment. Thank you very much for inviting me, and I'm available to answer your questions if you have any. Thank you. Thank you, Pierre, for such an interesting and exciting presentation.
It is wonderful work that you and Fridolin are doing to introduce XR for teaching and learning in higher education; despite all the challenges and difficulties, you are opening the path. I would now like to give the floor to Gise for the Q&A session, because I think we have some questions in the chat. So please, Gise. Yeah, hello everyone. So in this last part of the session, the speakers will address all your questions, and I have already gathered a few of them. So, first question for Fridolin: Thank you, Professor Wild. Do you have experience as well with dynamic manufacturing process AR computer simulation? Yes, so of course adaptive manufacturing is one of the areas where I personally think AR guidance makes the most sense. Adaptive manufacturing, where you have a production line that allows you to configure the product that comes out of it at the end in many ways, in surprising ways. BMW here in Oxford, for example, I think they allow for a million different configurations of their Mini, and every Mini that comes off the end of the production line, which is like every 68 seconds, is different. So in particular for some of the more complicated tasks, where errors can easily slip in, I think there's additionally also potential for validation, for quality assurance. So I think that's the perfect tool for adaptive manufacturing, and I couldn't imagine how we can get more performance gains without AR learning, training and guidance. Fantastic. So I have a question for Pierre in this case. Aside from your experience with virtual reality for space design, could you talk about your involvement using XR resources in education? I think you mentioned a little bit about this, but could you mention other examples that you didn't give already? Well, I don't know in what field, for example, you would like; I think there are so many different possibilities.
My own experience is, I think, with, for example, this architecture or interior design application that we developed: one exciting point of it is that you can make the invisible visible. We've been working on the health of the building, for example, so how the different components can affect the health of the people that are living inside the environment. And one interesting thing is that you can visualize, for example, the magnetic fields that are present in a room. So depending on the electric facilities or the wifi, the different electric or electromagnetic fields, you can make a representation of them, and in that sense, that can help to organize the space. And knowing it is one thing; visualizing it inside the virtual world is another. So you wear the glasses and you enter the world and you switch on the visualization of this electromagnetic field, and it just becomes clear where you want to put a chair and where you don't want to put a chair. So I think this is interesting in the sense that you can visualize what is normally invisible. Terrific, thank you. So another one for Professor Wild: has holographic artificial intelligence been tested and implemented already as a learning resource? Yeah, in our MirageXR we've just released it, so it's possible to use it directly there. Evaluations are at the planning stage. We have done a few things already; we're particularly interested in the element of trust that people develop towards holographic AIs, and learning is the perfect application for that. What else can you think of where trust is as necessary? We did some pre-tests and fine-tuned the new metric scale that we're proposing for that. And the next step for us is now to go out and test it with real end users. Yeah, so we also did a preliminary evaluation in healthcare, which was insightful. We had a holographic AI that was a rehabilitation trainer teaching post-cancer-surgery survivors how to do exercises.
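The make-the-invisible-visible idea above can be sketched as sampling a toy field model on a floor-plan grid, the data one would then colour-map as an overlay in the headset. The inverse-square falloff and all values here are illustrative stand-ins, not a physically accurate electromagnetic model.

```python
def field_strength(point, sources):
    """Sum an inverse-square falloff from each source (position, power).
    A toy stand-in for a real field model, just to drive a visualisation."""
    total = 0.0
    for (sx, sy), power in sources:
        d2 = (point[0] - sx) ** 2 + (point[1] - sy) ** 2
        total += power / max(d2, 1e-6)  # clamp to avoid division by zero at a source
    return total

def room_grid(width, depth, sources):
    """Sample the field at the centre of each 1x1 m floor cell; the
    resulting grid is what the AR overlay would colour-map."""
    return [[field_strength((x + 0.5, y + 0.5), sources)
             for x in range(width)] for y in range(depth)]
```

From such a grid, the application could, for instance, highlight cells below a threshold as candidate spots for placing a chair.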
So we motion-captured a good set of standard exercises for different conditions and worked with medical people and sports scientists on a decision tree that helps select the right exercise. And it was a success, but, yeah, it also showed that there's still a lot of work to be done in the area. So, bit by bit. There are a few studies out there that go in that direction, but it is a very cutting-edge field; there's still a lot of work to be done. Well, regarding future steps, there's another question for you. Professor Wang asks: regarding rapid re-skilling with AR, what are the steps you're planning to take in the short term? Sorry, the last part was, what are the, okay. What are the steps you're planning to take in the short term? The steps we're planning to take in the short term. So for us, I think one of the key motivations for this work is that content is still the biggest shortcoming in the field. And I see everybody here at the top nodding. Whoever you ask, whichever review you look at, it's content is the problem, content is the problem. That's why we invested so much time into an authoring tool. And our working model is: if we can make the creation of XR learning easier so that any teacher can do it in roughly one-to-one time, the time it would take to create something for real in a lesson, if we can capture that and make it accessible as an engaging experience in augmented reality in the same amount of time-ish, then we should be able to produce content quickly and therefore roll out rapid re-skilling to any area. And that's certainly a big challenge, because we know garbage in, garbage out; good teachers don't fall from the sky. And it also takes some practice to get the knack of a medium. But that's the goal, to make this easier, and we're investing a lot at the moment in new types of learning designs. The roadmap there is towards March, April next year.
We want to have some more advances on learning design support: templates and configurable activities, where you already have an activity set up and then you flesh it out as a teacher. So it tells you which parts you need to create and which parts should be user-generated. That's one of the things we're doing. And at the same time, we've just rewritten our tutorial manager so that we can teach people production skills more easily, so that you can have a special session on how to use character models, for example. But overall, yeah, I think that's the main angle we need to take: enable teachers to create content more easily, get away from the geek corner, the Photoshop class of people; not everybody has the skills to edit at that level. We want stuff that everybody can use. Lovely, I think Luis has a couple of questions. Yes, thank you, Gise. I wanted to lay a question on the table. Almost every month, we see new initiatives from private companies regarding XR in many fields, sometimes also in education. And at the same time, we see public institutions pushing forward research initiatives to increase the knowledge base around this matter. What role do you think the public sector should adopt regarding this and the different agents involved in education? It's an open question for both speakers. Oh, you tap into one of my favorites. I think education could be the enabler. We always curse about the role of earlier infrastructure support programs. I remember in Bavaria, where I grew up, in the South, we had a program called Schools to the Data Highway, and when it was rolled out, it basically gave fast internet access to schools, and that was it. And we complain a lot about the lack of instruments, tools, pedagogical support, that it's just infrastructure thrown at people.
But I think, at the same time, that's exactly what we need for XR, and an investment by the public hand would make a difference. If you look around in the US, for example, the military has just decided to renew its contract with Microsoft for purchases of the HoloLens. The sum they cited for the worth of the contract equates to, I calculated this for a different talk, something like a dozen or more devices per American school. So for the same amount, you could have fitted every school with enough devices to have a full classroom enabled. Like 30 years ago with computer-assisted language learning labs, like 20 years ago with high-bandwidth rooms for video conferencing. Remember when we had to go to a special room? We can do this with smart glasses. Why aren't we doing this? We can make this massive infrastructure investment and push the whole sector forward. We can help create a market, and currently there is not yet a market for smart glasses. And if we do it right, we will do it with the right pedagogies, support it with the right tools, with the right teaching strategies, learning designs and everything. We can do this in a good way. Well, I agree. I think it's really important to have support from public institutions. I remember, maybe 20 years ago, I was working on a European project with Eurocopter, and they had this new tool that was a blackboard with a projector, so you could virtually draw on top of the real blackboard. And this was amazing at that time. And I think now it's something that is quite common to find in schools, ordinary schools for everyone in Europe. So I believe there is always a gap in time between what is available for research or for the big companies and when the technology becomes widely available.
I think we are at that point, and I think it's important to continue supporting the research and the companies that are innovating, so that we can develop these new technologies, expand them and make them available. If I remember well, one of the very first proofs of the efficiency of virtual technologies in education was in medicine, where surgeons were taught to do minimally invasive surgery. And at the beginning, this was quite expensive. And next week I will be on the thesis tribunal of a surgeon from Colombia who designed a very inexpensive system which is working very well, and it was proven that it can be used just as well as the very expensive one. So it's something that helps, and it's not necessarily very expensive. It doesn't have to be something big and expensive; it can also be very efficient and very affordable. A laparoscopic simulator, I believe, right? Yes, yes. Yeah, I also know that project; it's really exciting. I also wanted to ask you both about one of the things we have drawn from the session: which are the most promising lines for XR in education for the following five years? Fridolin has already mentioned that content is a pain in the neck for the sector, and authoring tools should be one of the enablers in this sense, but which are the lines in XR education that you see getting more mature in the following five years? If we take the right research funding forward, then certainly, I think, that area of the flipped classroom, where we can transcend who is there and who is not there: hybrid meetings, who is coming in, that situation that the big companies are depicting for our private and work life, where we dial in, we have an avatar representation, we can get a recording of a teacher explaining something to us, we have a holographic AI to converse with, and we have a hybrid classroom with an offline part, a home part, and a part in the school.
I think that would make a big difference, the flipped classroom, as I call it. So, I have a couple of questions from the audience. Well, another one. Okay. I believe this one is for Fridolin, but I'm not quite sure; maybe both of you can answer it. Are the distance learning curricula that were discussed available to access in the cloud? Yeah, so our stuff is in the cloud if you want. We also have a small spin-out company that can fit you out with your own servers, putting them up in the cloud; it's called WEKIT ECS, Experience Capturing Services. That's possible, and at the OU we're pushing a lot towards the creation of open educational resources for XR, so that we release some things for free, for everyone, to further the field and kickstart this development. There is also a test server that we currently maintain that we can hook you up with. Depending on the project, there are some possibilities. Lovely. I have a question for Pierre; I hope I read it well. What is the estimated ROI for the AR distance learning curriculum? For example, for the photography resource that you presented. It's difficult to say, because it depends a lot on the context. In this case, we are going to present the pilot of the application to the students in December, so I will tell you a little bit further on, if everything goes well. In other projects it's interesting, but of course at the beginning the investment is maybe the more important part. So I think, if you want to plan a project, you should think of the investment in the material and the software, and the benefits will come later, and that's also what is difficult to evaluate in the long term. That's why I'm saying we need more studies in the field, with long-term evaluation of the different applications or the different modalities offered to students, and that's precisely what we are doing with these pilots.
Okay, so I think we are taking the last two questions, because we are really running out of time. This one is for both speakers and for Luis. It says: would it be possible to mention some already available simulations for online classroom teaching? In my case, I don't have any ready-made resource that I could recommend that is freely available; I don't know if Fridolin maybe has some sample or something like that. There's plenty of stuff out there; just check the stores. There are bespoke applications in many areas, both in virtual reality as well as in augmented reality. There's more and more stuff available. You just have to look through the stores, or there are also some lists floating around from people promoting educational material. Just to mention some of them, I would mention Class EDU, for example, which is a tool that is already being used in primary schools here in Spain. And also the use of generic spaces or workspaces to conduct classes, like Horizon Workrooms, which allow different people to share a common space and then carry on different kinds of activities, and some of the activities being carried out there are educational activities. So these would be some of the many that are available right now and are really being used. Okay, so the last question is a pretty long one, and it's for all the speakers. Have you considered that this kind of app, which shows us an immediate reality, augmented or virtual, could limit some of our abilities, such as imagining or visualizing, when it can show us immediately whatever we could imagine? I think this is a pretty long question, and I see you're ready to reply. I think it's quite easy to answer, in the sense that it's a bit like using a mobile phone.
At the beginning, when people were first using mobile phones, and I still know some people who don't want to use a smartphone, because they say it damages their memory: if they use the mobile phone, they don't remember their mother's number, which they used to know, this kind of thing. So of course, this is something that happened, and that's why it's also important to follow the studies and the academic work on this, because of course being in this virtual world has some side effects, just as, I don't know, using Google Maps also has side effects: maybe you cannot find a place in a new city you don't know if you don't have Google Maps working. Well, maybe we will have some side effects like this, and it's an interesting question, but I don't think it's really a risk. As I said, I think these should be complementary activities, especially for learning, and probably not, certainly not, replace all the activities we are doing for teaching. And if I may add, I truly believe it has a lot to do with the activity design. You can design things that are very visual but very unimaginative, and you can also do this differently. Two days ago I saw someone develop an augmented reality learning activity for time management, where you think, how the heck would you do that? It's nothing visual. It's really a question of the activity design and of thinking of situations where you motivate creativity. It's a question of the instruction, not a question of the medium. Yeah, exactly, I completely agree with you both. And I think we mentioned this before: it should go in the direction of providing authentic experiences in fields where, without this technology, we are not able to provide these authentic experiences, because they imply high costs or high risks for people's health.
So it should go in the complementary sense that Pierre just described, in order to provide authentic experiences in those situations where it is difficult to teach or to learn in these sectors. So it should go that way, I think. Are there any other questions, Gise, in the chat? No, that was the last one. Perfect. So, just to conclude, since we are running out of time, as I said, I have to say that I'm really happy to have been part of this terrific webinar. We have had the opportunity to learn how extended reality is currently being used in education, and in some other fields as well, with many initiatives exploring the pedagogical effect of these experiences and the different roles involved in the educational world. We have also underlined that content is a challenge right now for the field, and thus authoring tools seem like a good way to overcome this bottleneck, taking into consideration also affordability and accessibility. If we are able to provide authoring tools to as many people as possible, we will be able to produce a lot of content and reuse this content for educational purposes. We have talked about the solid research background we already have, but noted that it clearly needs to be further developed in terms of long-term results and also ethical implications, among others. And we have also talked a little bit about the role of the public sector in relation to the many private initiatives that are disrupting the XR and education world. So, I think we have reached the end of this webinar, which in my opinion has been very enriching. The last thing would be to thank EDEN, to thank Gise for the organization and the Q&A session, and our two speakers, Fridolin and Pierre, for such nice talks. It has been really enriching. So, thank you everyone, and stay tuned for the next webinars on these and other interesting topics. Good afternoon. Thank you. Bye-bye. Thank you. Bye. Thank you. Bye for now.