So, we were asked by the XR Village to come and give a little talk about the work that we've been doing at the Idaho National Lab using XR equipment. Behind the scenes of our daily lives, there's critical infrastructure supplying us all of the necessary services, water treatment, electricity, and all of that infrastructure needs to be secured. But working in the security space, we know that these systems definitely aren't as secure as we'd like them to be. So part of the work that we've been doing at the national lab, in partnership with the CELR group, is to work on securing those systems and helping smaller cities understand that their systems may not be secure, that there are vulnerabilities. And we've been using XR as a component of that outreach. My name is Colton Heaps, and I've been working with the Idaho National Laboratory for about two years. It's been really exciting to be able to work in a laboratory that has such unique infrastructure. For those of you who don't know, the Idaho National Lab is a research laboratory focused on nuclear research, so we have a couple of different reactors out on our campus, and with that, microgrids and a lot of other infrastructure that enables some unique cybersecurity research. I'll start out by describing the group that I work with, digital engineering, along with some terms that I'll be using, then introduce the CELR group, and then get into how we've been using mixed reality and virtual reality for that work. So what is digital engineering? Digital engineering focuses on supplying data to researchers regardless of the platform that they're performing research on. In times past, we had researchers passing documents around; they'd have scattered Word documents that held all of the information for their projects.
That is not a good way to do engineering. So we're trying to centralize all of that data, no matter where it comes from, and enable the different platforms to reach out and use that data. Digital engineering helps to provide semi-autonomous design, autonomous operations, and real-time anomaly detection. It integrates the threads of data, visualizations, AI and ML, and physics models into a cohesive digital twin. Now, "digital twin" is a buzzword; it gets thrown around a lot in industry, but we're trying to define it at the national lab. I like to compare it to the iPhone. We all know the impact the iPhone had on our world: it led to the rising popularity of smartphones, and that truly changed our lives. But the iPhone itself is made up of a bunch of smaller technologies that already existed, and those technologies weren't necessarily revolutionary on their own; the combination of all of them made something special. That is how I see digital twins: a combination of technologies that have existed in the past, where putting them all together makes something special. So what is a digital twin? Like I mentioned, a digital twin is a merging of all of these different technologies: the connection of integrated data, sensors, and instrumentation from physical assets, taking real-time data from the systems and feeding it into visualizations and AI/ML algorithms, online monitoring, and advanced visualization with mixed reality and virtual reality. That's the INL's definition of the digital twin. A digital twin mirrors the physical model while using real-time bi-directional communication to perform this magic, enabling the user to take in data more efficiently and giving the operator more data in a way that they can understand.
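That bi-directional loop, physical readings flowing into the twin and corrective setpoints flowing back out, can be sketched minimally. Every class and name here is illustrative, a stand-in for the idea, not INL's actual implementation:

```python
# Minimal sketch of a digital twin's bi-directional data loop.
# All class and field names are illustrative, not a real INL system.
from dataclasses import dataclass, field


@dataclass
class PhysicalAsset:
    """Stands in for a real sensor/actuator package on a skid."""
    temperature: float = 20.0

    def read_sensors(self) -> dict:
        return {"temperature": self.temperature}

    def apply_setpoint(self, name: str, value: float) -> None:
        if name == "temperature":
            self.temperature = value


@dataclass
class DigitalTwin:
    """Mirrors asset state and decides corrective actions."""
    state: dict = field(default_factory=dict)

    def ingest(self, readings: dict) -> None:
        self.state.update(readings)          # physical -> digital

    def corrective_action(self, limit: float):
        # digital -> physical: push a setpoint if the mirror is out of range
        if self.state.get("temperature", 0.0) > limit:
            return ("temperature", limit)
        return None


asset = PhysicalAsset(temperature=95.0)
twin = DigitalTwin()
twin.ingest(asset.read_sensors())            # twin now mirrors the asset
action = twin.corrective_action(limit=80.0)
if action:
    asset.apply_setpoint(*action)            # twin corrects the asset
```

A real twin would run this loop continuously across many sensors and feed the mirrored state into visualizations and models; the sketch only shows the direction of data flow.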
To put that into the context of the work we do at the lab, like I said, nuclear research means working with nuclear systems. We need to have safe, small modular reactors in the future, and they need to be capable of autonomous corrections: forecasting into the future to see whether the system will stay safe, and if it's heading in a hazardous direction, fixing itself. But we've also been using this with the cybersecurity group at the INL, specifically the CELR group. CELR stands for Control Environment Laboratory Resource, and it's sponsored by DHS's Cybersecurity and Infrastructure Security Agency, or CISA. CELR is a lab-scale environment with sector-specific platforms that have operational physical components as well as operational IT and OT environments, with real controllers and human-machine interfaces to operate the physical components. There are about six of these training platforms that CELR has provided to the public, and each platform has unique HMIs, PLCs, and protocols. So it takes the breadth of manufacturers' hardware and protocols used in industry and presents them as a publicly available resource for international use, so that anybody can gain more experience in securing this critical infrastructure. The environment is used to train cyber incident responders and expose them to networks and malicious activities similar to what they would see in the real world, to support tool evaluation to determine effectiveness in operations, and to research mitigation measures that may counter those effects. CELR's capabilities have matured to include six sector-specific platforms with six different controllers, HMIs, and network protocols. This allows for diverse options to represent a broader range of the nation's infrastructure.
It's definitely a valuable resource, and it helps supply smaller cities that may not have the funds to put towards cybersecurity and securing their infrastructure, so they can take advantage of it. Okay. So while building these systems, the CELR group started reaching out to digital engineering, at first just as a way to visualize their plans in their new workspace. They had one existing skid, I believe, and they were planning to build more in the space, so they wanted to use some of the XR capabilities we've been using across the lab to visualize these components in their facility and maximize the efficiency of the space. Maybe that's not a very exciting use of mixed reality, since it seems like everybody is doing that now, but it's very valuable and saves so much time and money. We were able to take some virtual reality goggles, take the existing CAD geometries, and import them as meshes into our visualization software. Mainly we use Unity for this, because of how easy they've made it to work with a variety of hardware. With those CAD models and the BIM model of their facility (BIM is a building information model), they combined both models into one scene so they could move each of their training testbeds around the facility and maximize the use of the space. And in the design of the physical testbeds, they were able to see them at scale in front of them, interact with the different components, and make sure everything would work just how they expected. As that partnership with CELR grew, we started working a little more in public engagement, which led us to realize how valuable extended reality headsets are in this work.
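The CAD-to-engine import mentioned above usually boils down to converting the geometry to triangle meshes and fixing units and axis conventions, since CAD tools often export millimetres with Z up while game engines expect metres with Y up. As a purely illustrative sketch (the real pipeline used Unity; the vertex data here is made up):

```python
import numpy as np

# Hypothetical vertex positions from a CAD export: millimetres, Z-up.
vertices_mm = np.array([
    [0.0,    0.0,   0.0],
    [1000.0, 0.0,   0.0],    # 1 m along X
    [0.0, 2000.0, 500.0],    # 2 m along Y, 0.5 m up
])


def cad_to_engine(verts_mm: np.ndarray) -> np.ndarray:
    """Convert mm Z-up CAD coordinates to metres Y-up engine coordinates."""
    verts_m = verts_mm / 1000.0                  # mm -> m
    x, y, z = verts_m[:, 0], verts_m[:, 1], verts_m[:, 2]
    return np.column_stack([x, z, y])            # swap Y and Z for a Y-up engine


engine_verts = cad_to_engine(vertices_mm)
```

Getting this step right is what makes the "to scale" walkthroughs possible; a missed unit conversion shows up as a thousand-times-too-big testbed in the headset.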
With the new testbed designs, they had an interface similar to this, where a portion of the HMI lived in the virtual world, and they could click buttons to toggle different effects, run through iterations of how things would look entirely in software, and ultimately save time. All of these testbeds represent physical critical infrastructure: they have the same OT and IT components that are used in the field, even though this is just a laboratory space. To help increase engagement, they have physical effects, like the smoke that you can see in the bottom-left corner. So when the system, say the HMI, is vulnerable to a certain DLL injection, that can cause actual physical damage, maybe turning off some of the fans in a lab space or reversing the fans in a fume hood, pushing gas into the lab. Working with this critical infrastructure has physical consequences. It's not only a database or something that goes down; an attack can actually cause harm to a system and to people. So then we worked in public engagement during the first half year or so that we were working with CELR. And as you can tell by the ugly carpet in the pictures on the slide, we're here at DEF CON. We had the opportunity to come to DEF CON as well as an array of different conferences around the nation. On the left is a picture of a small testbed that they developed for use at conferences, because the full testbeds are about six feet tall, and it would be a pain to pack those around. So they developed the smaller version, which still has real OT and IT components, and they can share that with the public. And in the other picture, that man is wearing a Microsoft HoloLens, a mixed reality system, and what he's seeing is being shown in the middle there.
Because we aren't constrained to any sort of space in virtual or mixed reality, we can make it as representative as we want. The small demo that they brought emulates an electrical substation, so in the visualization we modeled out a representative substation, and then we were able to expand it out to show more of a suburb or community. Then, during the attack, we can show how it affects the entire community losing power. At the bottom left, you have the physical skid in its environment, the lab space where it's housed. They just have some computers with webcams monitoring these systems, and anybody can log in, see the attacks and vulnerabilities on these systems, and then learn how to secure against those attacks. The webcam gives a somewhat difficult-to-see view of any effects that might happen if that critical infrastructure is attacked. So in the virtual space, again, we modeled out their systems to show others the resources that are available, and we've also tried to make a more representative view of each of these systems. The idea is that, using digital engineering and digital twins, we're going to mix these physical assets together with the visualizations. As somebody works through the training online, that data is relayed to a visualization that they can use with their team, maybe a school, or the cyber team of a city, so they can have a more engaging training as they go through it. As well as relaying the state of the system, we merge all of that into a central data warehouse so we can distribute data between the different systems, for instance if they need to do analytics on the data afterwards. Okay, so some of the other projects that we have.
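The central-data-warehouse idea, one source of truth fanning skid state out to headsets, web views, and after-action analytics, can be sketched as a toy publish/subscribe relay. All topic names and consumers here are illustrative:

```python
# Toy central warehouse: one skid-state update fans out to every subscriber.
# Topics, consumers, and the payload are all illustrative.
from collections import defaultdict


class StateRelay:
    def __init__(self):
        self._subscribers = defaultdict(list)
        self.latest = {}                     # last known state per topic

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, state):
        self.latest[topic] = state           # warehouse keeps the state...
        for callback in self._subscribers[topic]:
            callback(state)                  # ...and relays it to each consumer


relay = StateRelay()
headset_view, analytics_log = [], []
relay.subscribe("substation/skid1", headset_view.append)   # e.g. a headset scene
relay.subscribe("substation/skid1", analytics_log.append)  # e.g. later analytics
relay.publish("substation/skid1", {"breaker": "open"})     # one update, two consumers
```

In practice this role would be played by a message broker or time-series database; the point is only the fan-out topology, where the same live state drives the visualization and the stored data.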
Just to speak to some of the successes we've had with this system: it was developed to be remote during COVID, because COVID sucks and we still wanted to be able to help people. Trainees could see visualizations of what would happen, become familiar with those things, and have an engaging training, while CELR was still able to provide that training on real hardware, not emulated hardware, in a remote setting. One of the groups was a local municipality, and they were able to remote into the environment and train using the lab space. And while they were going through the trainings, learning how to secure the systems they needed to work on in their own municipality, their facility was actually hit. Since they were training remotely, not having to travel out to Idaho, they were able to respond to that incident and use the training immediately. That's a big win. So the idea, again, is to reach and help as many people as possible who may not have the funding to put towards cybersecurity. And using mixed reality, we don't have to refill the tanks for these trainings, we don't have to physically do any maintenance; we can just run through training after training after training without worrying whether something is going to break. That saves us money as well, lowers the cost of the resource, and enables us to just send out VR or mixed reality goggles to whoever has signed up for the service, and they can still have that engagement. Okay, so what's next? The right image might be a little hard to distinguish, but that's a router with some signals coming off of it. I'm using air quotes right now because it's still theoretical, but research is being done to leverage extended reality capabilities to visualize wireless radio frequency signals.
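One common way such a visualization could estimate range to an emitter from received signal strength is the log-distance path-loss model; combining range estimates from several tracked headset positions is one way a source could then be pinpointed. This is a generic textbook model, not the lab's actual (still theoretical) approach, and the calibration constants are assumptions:

```python
def rssi_to_distance(rssi_dbm: float,
                     ref_rssi_dbm: float = -40.0,
                     path_loss_exp: float = 2.0) -> float:
    """Log-distance path-loss model: estimate range in metres from RSSI.

    ref_rssi_dbm is the assumed RSSI at 1 m from the emitter;
    path_loss_exp is ~2 in free space, higher indoors.
    Both constants are illustrative and would need calibration.
    """
    return 10 ** ((ref_rssi_dbm - rssi_dbm) / (10.0 * path_loss_exp))


# Weaker signal implies greater range: with these constants, a -60 dBm
# reading maps to 10 m while a -40 dBm reading maps to 1 m.
near = rssi_to_distance(-40.0)
far = rssi_to_distance(-60.0)
```

With three or more such range estimates from known headset positions, trilateration gives a candidate location for the abnormal signal.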
The goal is to be able to see the radio frequency signals in an environment, identify that one might be abnormal, and then, using the geolocation support of these headsets, pinpoint the source of the signal. A second research effort focuses on expanding the services to train remotely with real-time visualization, enabling cyber incident responders to conduct their network analysis or hunt for malicious activity on their laptops while seeing representations of the infrastructure behind them as they work. So although these digital twins are still in the early stages, and we still have a lot we can do to improve the use of extended reality headsets in this work, it has already helped us enormously. Just to speak a little bit about some of the other work we've been using these headsets for at the lab: apart from cybersecurity, it's been instrumental in design reviews across different projects. We'll have multiple engineers come into a visualization where they can view, say, the new microreactor that they're building in front of them at scale. They can walk around, make notes on the object, move different components, and talk to each other while it's going on, with data relayed through our own lab-built multiplayer server. That has saved the national lab millions of dollars on each project we've been involved in, because they've been able to catch mistakes early, before the project was actually built physically. And I'll share just one more project that we've been working on: creating a full digital twin of a nuclear reactor. It's a small reactor, about five watts, used for testing, but we were able to create a high-level digital twin that incorporated AI predictive analysis of the system.
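The kind of look-ahead threshold check such a twin performs can be sketched with a naive stand-in for the AI model. Here plain linear extrapolation of the latest trend replaces the learned predictor, and all names and numbers are illustrative:

```python
def forecast(history: list[float], horizon_steps: int) -> float:
    """Extrapolate the most recent per-step trend forward.

    A naive stand-in for the learned predictive model: take the change
    between the last two samples and project it horizon_steps ahead.
    """
    slope = history[-1] - history[-2]
    return history[-1] + slope * horizon_steps


def needs_intervention(history: list[float],
                       horizon_steps: int,
                       threshold: float) -> bool:
    """Warn (and let the twin take control) if the forecast crosses the limit."""
    return forecast(history, horizon_steps) > threshold


# A temperature climbing one degree per sample will cross an illustrative
# limit of 110 within a 15-step horizon; a flat trace will not.
rising = [100.0, 101.0, 102.0]
steady = [100.0, 100.0, 100.0]
```

The same check works for any monitored quantity, temperature or neutron flux alike; only the history series and threshold change.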
Using the data it had learned from, it would predict 15 minutes into the future whether the system was going to pass a temperature threshold or a threshold on the neutron flux. Because of this, it could warn the operator and take autonomous control of the system, bringing it back into its operating range. A big portion of that is giving the operator a view into what the AI is learning and how the digital twin is actually running, so we've given them two interfaces: a mixed reality headset and a web display. It's truly been valuable and has been integral to the research that needs to be done for these small modular reactors. So I'll just open it up to any questions, if there are any. There seems to be a microphone right here, and if not, we can go and rest. There's a microphone right there if you'd like to ask a question. "Which reactor were you modeling, out of curiosity? Was it the ATR or no?" I'll just give you this microphone. "Which reactor were you modeling? Was that the ATR or a different one?" So you know a little bit about the lab, nice. His question was which reactor we were modeling in the digital twin, and whether it was the ATR, the Advanced Test Reactor, at the Idaho National Lab. This reactor was actually a university collaboration between the Idaho National Lab and Idaho State University, using their AGN-201 reactor. "AGN-201. Okay, thank you. I'll try this. Okay. You were saying that millions of dollars were saved because they were able to catch stuff early. I was just curious, do you have one example, just an anecdote?" Yeah, great question, thank you. His question again: I had mentioned that we'd saved millions of dollars of construction on these projects using extended reality hardware, and he asked if there were any examples I could share.
There are some glove boxes that we've been developing to process different materials. Again, we were loading the CAD models from the engineers into extended reality environments, and they were able to visualize the glove boxes within the space where they were to be built. They could walk around and hold design reviews with each other, and they noticed that the ergonomics of the system weren't very good for all of their operators. They had people put their arms into the virtual glove boxes, and they found that they would have had to build platforms for some people to stand on to use the system, which kind of defeats the whole purpose of building this multi-million-dollar system, right? So they were able to lower things down so that everybody could use it. On the same project, they were able to move the glove box around in the parts it was going to be built from and check whether they could even fit it through the doorways of the facility. They determined that there would be some collisions, and they broke a couple of the pieces down into even smaller sizes. And clearances between doorways always seems to be an issue on whatever project we're on, for escape routes and things; they need a certain clearance between the doorways, and they were able to find those issues and make adjustments. And that's a smaller project. On some of the other, larger projects we've been on, if they had gone into construction at all before catching the design changes that needed to be made, it would have cost millions of dollars. So yeah, thank you. "Can you say a little bit about... can people hear me or not?" You might need to get closer. Yeah, hello.
"Can you say just a little bit about why virtual reality, as you've described it, is actually better than the traditional flat monitor, which has been used for many, many years now to catch many of the same sorts of mistakes? I noticed that your glove box example is probably a good one, but do you have any thoughts on why this is better than traditional flat-screen CAD?" Yeah. So his question is how mixed reality differs from using normal 2D CAD software, and how it's been beneficial. How I see it is that with CAD, you need to be familiar with the system to fully understand everything that's going on. Especially when sharing with stakeholders, it's hard for engineers to portray exactly what's going on, but in mixed reality, it's just there, virtually, and they can walk around the system and have a discussion. It removes the need for interpretation of those CAD models or blueprints. Thank you. "Are the tools for VR development catching up, or caught up, to the same tools you would use for a flat screen, or is there still work to be done there?" So that's an interesting question: whether the VR tools have caught up to the tools we'd be using in traditional CAD modeling. We're not actually doing any modeling inside of these virtual reality systems. Traditional CAD on computers is still superior; you have more precise controls, and it allows the engineers to go into greater depth on the design. So we don't do any of the design there, but the software is catching up a lot. Even in the past couple of years that I've been working with these systems, they've matured a lot, to the point where we can import the CAD into virtual reality, and if the engineers make any updates to those models, it's getting to where we can just update them in our virtual system as well.
All of the interactions and things that we've developed can transition to the new model as well. So I'd say it's maturing, but it's not quite as mature as we want it to be. Thank you. Okay, well, let's go rest. Thank you all.