Hello, everyone, and welcome to the 5:30 pm session of the 2023 OpenSimulator Community Conference. In this session, we are pleased to introduce the presentation, Developing Virtual Simulations to Support Retention of Skills in Essential Newborn Care. Our speakers are Matt Cook and Rachel Umoren. Matt is a research engineer at the University of Washington and lead developer for the virtual ENC project. Dr. Umoren is an Associate Professor of Pediatrics and an Adjunct Associate Professor of Healthcare Simulation Science and Global Health at the University of Washington. Please check the website at conference.opensimulator.org for speaker bios, details of this session, and the full schedule of events. The session is being live streamed and recorded, so if you have questions, send your tweets to @opensimcc with the hashtag #OSCC23. Welcome, everyone. Let's begin the session.

Thank you so much for that kind introduction. We're so pleased to be here amongst friends to share the progress we've made over the past year, with some additional, more technical details that Matt is going to share about our thinking around the development of virtual simulations, with our case study, as always, being essential newborn care, because we care about babies. So if you could go to the next slide, Matt.

There are lots of reasons to be concerned about the generally high newborn mortality rate. Over the years we've seen declines in the mortality rate, but primarily in higher-resource countries like the United States, the United Kingdom, and other European countries. We have not seen the same level of decline in sub-Saharan Africa, South America, Asia, and other places. And in more recent times, these declines have actually reversed, so we've seen an uptick in maternal and newborn mortality, which makes projects like this even more important. Next slide.

There are a range of ways that educators have been trying to address this issue. Over the last couple of years in particular, new curricula and novel tools have been developed for use in low-resource settings, and the goal has really been to maximize learner access and retention. But as we all know, just talking at people through a lecture, or even giving them a video to watch, is very passive learning. We want our learners to be actively engaged, and one of the best ways to do that is through virtual simulations. Next. So Matt is going to share a little bit more about the work that we've done specifically on developing virtual simulations for essential newborn care. Over to you, Matt.

The nature of this work is based on an existing curriculum. There is an established correct way of doing things: correct actions that need to be taken, a correct order in which to do them, and a correct way they need to be portrayed. If we do anything other than that established correctness, we are teaching incorrect principles. We also decided we didn't want to let our players run down rabbit holes, not only to simplify things for ourselves, but because we didn't want the possibility of them taking an action and having an appropriate reaction lead to negative consequences for the baby, such as dying. We wanted things to stay positive and productive for our users. So with that in mind, what we did to start this project was map essential newborn care: all of the steps that need to be done under any number of different conditions, as if-this-then-that branches.
If the baby is in this condition, how do you respond? There are pros and cons to this approach. The positive is that with this mapping, we can expand into any number of simulations. So we started with 12 levels that include several of the learning objectives and conditions these providers need to be able to respond to. The downside is that there is a rigid sequence of actions that users must perform, which is quite different from the flexibility and creativity someone might want from another kind of educational experience.

Since we're dealing with low-resource settings, we decided to develop our VR application for Google Cardboard, running off a mobile phone. Higher-end VR devices such as the Vive or the Oculus Quest would be ideal for the level of interaction that's good for learning, but in these settings that's not an option. They don't have access to those tools, but everyone does have a phone in their pocket. This comes with a big problem for interaction mechanics, though, because there are no controllers, no clickers, no buttons. We can't even count on users being able to reach the phone to tap the screen. That limited our interactions to being exclusively gaze based, meaning users have to stare at an item or object for a set amount of time in order to click on it. And there are all kinds of interactions they need to be able to perform, such as testing equipment, giving injections, and talking to the mother. It was really important to us that this experience be as immersive, interactive, and hands-on as possible, so that led to a major challenge.

We decided that players would interface with the world through three basic means: clickables; grabbables, where you can pick up an object and then drop it elsewhere; and extended staring. The actions are then animated appropriately, depending on the item and on where we are in the simulation, and those actions sometimes change. We also elected to give feedback in real time, as opposed to waiting until the very end to tell users what they did wrong. It's especially easy to do this since there is an established right and wrong at every step of the way, so we did it with positive feedback when they do the right thing and buzzes with error messages when they do the wrong thing. If they get stuck, we had hints available. They could pull those up manually at any time, but if they had, say, 30 seconds of inactivity, suggesting they didn't know what to do, the hints would pop up automatically so they could keep moving forward.

We also tried to figure out ways to gamify this experience, because we really wanted to increase the inherent interest in playing these simulations. Given its rigid nature, it was hard to come up with many solutions, but one thing we did do is score each level by percentage accuracy of correct versus incorrect actions, with a minimal point deduction for hints, to encourage users not to lean on them but still to prefer hints over randomly trying things when they get stuck. And if they got perfect scores, we would reward them with fun, silly wall art that we created with generative AI.
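A minimal sketch of the core level loop just described: a rigid ordered sequence of correct actions, real-time right/wrong feedback, automatic hints after a stretch of inactivity, and a percentage-accuracy score with a small deduction for hints. The actual application was built in Unity; this Python, and the specific names and numbers in it, are purely illustrative.

```python
# Illustrative sketch only; the real app was a Unity application.
HINT_DELAY_SECONDS = 30   # inactivity before a hint pops up automatically (assumed value)
HINT_PENALTY = 0.02       # small score deduction per hint used (assumed value)

class Level:
    def __init__(self, correct_sequence):
        self.sequence = list(correct_sequence)   # e.g. ["dry the baby", "clear the airway", ...]
        self.step = 0
        self.correct = 0
        self.incorrect = 0
        self.hints_used = 0
        self.idle_seconds = 0.0

    def attempt(self, action):
        """Check a player action against the next expected step and give feedback right away."""
        self.idle_seconds = 0.0
        if self.step < len(self.sequence) and action == self.sequence[self.step]:
            self.correct += 1
            self.step += 1
            print("ding: correct")                      # positive feedback in real time
        else:
            self.incorrect += 1
            print("buzz: that isn't the next step")     # error message in real time

    def tick(self, dt):
        """Advance the inactivity timer; pop a hint automatically if the player seems stuck."""
        self.idle_seconds += dt
        if self.idle_seconds >= HINT_DELAY_SECONDS and self.step < len(self.sequence):
            self.hints_used += 1
            self.idle_seconds = 0.0
            print(f"hint: the next step is '{self.sequence[self.step]}'")

    def score(self):
        """Percentage accuracy of correct vs. incorrect actions, minus a minimal hint deduction."""
        attempts = self.correct + self.incorrect
        accuracy = self.correct / attempts if attempts else 0.0
        return max(0.0, accuracy - HINT_PENALTY * self.hints_used)

# Usage (hypothetical two-step level):
#   level = Level(["dry the baby", "clear the airway"])
#   level.attempt("dry the baby")      # ding
#   level.attempt("give injection")    # buzz
```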
Our users are not very experienced with VR or video games, and may not necessarily have much access to technology, period. Yet, this being an educational experience, we needed to design it to be as intuitive and easy as possible for the lowest common denominator in terms of our users' ability. That created major challenges for us as technology natives: what seemed intuitive to us often proved to be very, very wrong in practice. Here are some of the lessons we learned.

Right off the bat, not every user, in fact not most users, but enough users had trouble with even the core concept of holding a reticle over an object to interact with it. I don't know if you can read that picture, but it says "hold the orange dot over the start button to select." We thought that was going to be our intro, the first step of a tutorial to teach them that basic thing, and that it would be good enough. No, definitely not. What we wound up having to do was draw an animated line between the reticle and the object we wanted them to click on as that initial step, and have it change color in a warmer/colder kind of system. Then, once they got the idea of clicking, that didn't necessarily translate to interacting with objects in the actual game space. And with grabbables, once they knew how to click on an object they could pick it up, but they continued to be confused about how to put it down; they didn't understand what the drop zones, the drop areas, were. So we wound up introducing a full tutorial: a simplified environment with repeated exercises for each of the different types of interaction they would encounter, where there's only one thing to do, one thing to click on, one thing to look at. There's no way to get it wrong.

We had major problems with timing-based clicking. This took the form of ventilation, which requires rapid action, quick responses, and precision. It was very important to us that our users understand what the correct timing is for ventilation and that they actually perform the actions themselves. Right off the bat you can see the problem: for the given target rate, we have to pump once every 1.5 seconds, with a two-second gaze period. That's not going to work. We experimented with a bunch of different things, including changing the visual representation of the timing, changing that gaze period, and changing the fundamental pump mechanics altogether, and nothing worked. Ultimately, what we had to resort to was just extended staring at the ventilator. To keep the requirement that users understand what the right rate is, we had different rates available and would randomly initialize one, and they would have to recognize whether the rate was correct or incorrect and then adjust accordingly.
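A rough sketch of the ventilation timing problem and the workaround just described. The 1.5-second pump interval and two-second gaze period come from the talk; the candidate rates and the "acceptable" band are illustrative assumptions, and the actual app was built in Unity rather than Python.

```python
# Back-of-the-envelope check of why per-pump gaze clicks could not hit the target rate,
# plus a sketch of the workaround (recognising a randomly initialised rate).
import random

TARGET_PUMP_INTERVAL = 1.5   # seconds between pumps at the target rate (from the talk)
GAZE_DWELL = 2.0             # seconds of staring needed to register one click (from the talk)

target_rate = 60 / TARGET_PUMP_INTERVAL   # 40 pumps per minute required
best_gaze_rate = 60 / GAZE_DWELL          # 30 pumps per minute at best via gaze clicks
print(f"target {target_rate:.0f}/min vs. gaze-limited maximum {best_gaze_rate:.0f}/min")

# Workaround: start the ventilator at a random rate and ask the learner to judge whether
# it is correct, then adjust it via extended staring. Values below are hypothetical.
candidate_rates = [20, 30, 40, 50, 60, 80]   # pumps per minute (illustrative)
acceptable = range(40, 61)                    # assumed acceptable band
current_rate = random.choice(candidate_rates)
print(f"initial rate {current_rate}/min is "
      f"{'correct' if current_rate in acceptable else 'incorrect'}")
```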
World navigation was a problem for these educational experiences. Honestly, if you can avoid having to navigate a world, and just have everything directly in front of your users, I would do it. We tried to include some manual teleporting, meaning having to click on a teleport zone or teleport object, to give our users a sense of control over their actions, but that proved not to work very well; it wasn't as clear as we'd hoped. So what we did instead was use a system of basically teleport-where-you-click: if you click on something far enough away, you teleport over to it. That worked well for our clickables. For our grabbables, we sometimes introduced some intelligence to anticipate where users would want to go after they pick up an object. For instance, if you grab the baby from the mother, we can probably assume you are looking to put it down on the exam table, so we would teleport you over there so you're interacting with it nearby, rather than trying to set the baby down on a table from across the room.

UI was one of the banes of our existence. We had five different types of UI, and being VR, it all had to exist in world space rather than being fixed on the screen. So we had lots of conflicting requirements: everything had to be visible without users having to go searching for it; the elements couldn't interfere with one another, obscure key objects, or be obscured themselves; and different combinations of them might be open at different times. The way we managed to get around this was that some of the UI, such as dialogue, we fixed in space, to act more like speech bubbles. Others we had to have follow the user's gaze, and we introduced a distance threshold to trigger that following, because it's annoying if the UI is perpetually following where you're looking. And in order to prevent the UI from obscuring key things, we had to clamp its motion both vertically and horizontally, and that clamping had to change throughout the experience. Depending on what other UI is open, what is important to be able to interact with at that time, or where you're standing in the room, the angular regions that the UI cannot cross into might be different.
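A small sketch of the gaze-following UI behavior just described: the panel only chases the gaze once it drifts past a threshold, and its motion is clamped vertically and horizontally so it cannot cover key objects. The angles, limits, and names here are illustrative assumptions; in the app these clamping regions change with the simulation state, and the implementation lived in Unity.

```python
# Illustrative sketch in yaw/pitch angles; all thresholds and limits are made up.
import math

def clamp(value, lo, hi):
    return max(lo, min(hi, value))

class GazeFollowingPanel:
    def __init__(self, follow_threshold_deg=25.0,
                 yaw_limits=(-60.0, 60.0), pitch_limits=(-20.0, 30.0)):
        self.yaw = 0.0                       # panel's current horizontal angle
        self.pitch = 0.0                     # panel's current vertical angle
        self.follow_threshold = follow_threshold_deg
        self.yaw_limits = yaw_limits         # angular region the panel may occupy
        self.pitch_limits = pitch_limits     # (swapped out as the simulation state changes)

    def update(self, gaze_yaw, gaze_pitch):
        # Only move once the gaze has drifted far enough from the panel,
        # so the UI is not perpetually trailing the user's view.
        drift = math.hypot(gaze_yaw - self.yaw, gaze_pitch - self.pitch)
        if drift > self.follow_threshold:
            self.yaw = clamp(gaze_yaw, *self.yaw_limits)
            self.pitch = clamp(gaze_pitch, *self.pitch_limits)
```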
And I'll hand it back to Rachel.

Thank you, Matt. That was amazing. So once the technical team did all the things that they did, if you go back one slide, as Matt described, we were ready for testing in the physical world. We took the application, which was a Unity-based app, put it on mobile phones, and pilot tested it with 70 nurses and midwives from 23 facilities in Lagos, Nigeria. The goal was really to test the feasibility and educational effect of using these simulations: to see if they really help learners learn and retain skills. We started out with an in-person course to establish a baseline, because the intention of using these simulations was for retention of skills, so we wanted to be sure everybody came in with the same basic knowledge. We introduced virtual ENC to them after the initial training and after they had done all their baseline assessments. Next.

This is the slide that has a lot of graphs on it, so I'm not going to walk you through every single one. But we had seven measures, all standardized. Two of them were knowledge checks; that's the KC. The other five were skills assessments for the kinds of things we were teaching them: how to give a baby ventilation with a bag-mask ventilator, with the breaths like Matt was talking about; how to recognize that a baby is sick; how to treat the baby if it needs treatment; how to refer the baby if it needs to be referred; and when to refer the baby. What we found was that using this application with the VR actually improved their skills over the in-person class baseline. Their skills were at a low level to start with, they got better after the class, and then they improved further with using the virtual ENC. So we're very excited about these findings. We think they continue to show us that using virtual simulation is a very viable way, and may actually be a revolutionary way, to teach these skills, and it will save lives.

I believe this is the last slide. If we can go to the next one... oh, one more thing: qualitative themes. We actually asked the nurses and midwives what they thought and did focus groups with them, and this is what we found when we analyzed the data from those focus groups. They felt that using this is very convenient; it can be done on their own time, and they could do it anywhere: at home, on the road, at work. They liked the feedback. They liked the realism. They felt it gave them confidence and made them more competent, that they could remember what they needed to do step by step, and they had multiple stories of how they used these skills in real life in their work when a baby needed resuscitation. So there was impact, and they were very happy. Next slide.

I just want to thank all the team members. This is acknowledging that this was the effort of a large team spread across different continents and different groups, and everyone worked really, really hard to bring this project to this point. We also received some funding from the National Institutes of Health for this development, so I'm acknowledging that too. Thanks, everyone. If you have any questions, I'll put my email in the chat and Matt can share his as well. Feel free to reach out to us; we're very excited about this work. And if you have any health professionals that you think might benefit from using this type of application for learning, we are very happy to share it. It's going to be freely available, particularly for folks who really care for babies and need to keep up their skills.

Rachel, I have a question for you. I work for a university that's chiropractic, that works on infants, and it's a metaversity, so of course they're using VR and virtual worlds and so on for a lot of their procedures. We looked at a project we were designing using haptic interfaces, so the practitioners would wear gloves and get force feedback as they were manipulating objects, which would include patients, and the various pressures you might apply. I was wondering, have you thought of that for your simulation?

Yes, we've definitely thought about haptics, not in this particular circumstance and with this group, but in general we've thought about their potential utility. There are certain types of procedures in particular that would benefit from haptic feedback. I think we've been really cautious about introducing anything that might increase complexity, given the kind of environment we're working in. Even having something work on a mobile phone in that space can be challenging, because not all phones sold in low-resource settings, where people get really cheap phones, even have the accelerometer or other features that can be required for VR. But I think haptics have a place in any type of skills training, and especially in the medical space. I'd love to chat with you more about it, sure.
Well, you know, one of the things that comes to mind as we're wrapping up, and we are wrapping up, is your origins: you conducted research with two OpenSimulator regions in which you had 25 bots and all these procedures to study disease in African countries. And do you remember that the virtual world software, at least Second Life, was used for haptic testing before it was ever repurposed to be an online virtual world? So our origins are kind of there. I didn't know if you were aware of that.

I was not aware of that fact about Second Life. But yes, I used OpenSim for prototyping for years before we moved to using Unity, and only because of the limitation of needing it to work on mobile phones and offline.

We do a lot of integration like that, where we develop everything in the virtual world, then port the content to Unity and develop a mobile app that we then use in military education. So that's a great blending of the technologies. I want to thank you, Matt and Rachel, for an informative and wonderful presentation, and thank you for all your work to help save babies. As a reminder to our audience, you will want to check out conference.opensimulator.org to see what is coming up on the conference schedule. Next, we have Joyce Bettencourt, Rianne, and Chet Noir for the closing. Also, we encourage you to visit the OSCC23 Poster Expo in the OSCC Expo Zone 3 region to find accompanying information from the presenters at their presenter booths, and Rachel and Matt have a booth there as well, as well as to explore the Hypergrid resources in the OSCC Expo 2 region, along with our sponsor and crowdfunder booths located throughout all of the OSCC Expo regions. Thank you again, Matt and Rachel, and to you, the audience.