Hello, hi. Thank you for being here. My name is Bridget Kerr. I'm a computer simulation and gaming student at the University of Tulsa in Tulsa, Oklahoma, and the project I get to work on is with university faculty, PhD students, master's students, and other undergraduate students. First things first: the project is funded by the U.S. Army Engineer Research and Development Center, and what I'm showing you today are my own findings and opinions, which do not necessarily reflect the views of the funder. I'm required to say that. So, digital twins. The definition of a digital twin varies depending on the source you're looking at, but in general it's a digital representation of a physical object, system, or process that is updated in real time with its physical counterpart. You have your physical asset, your digital twin, and a lot of data going back and forth in between. Digital twins were originally used to increase productivity in the manufacturing industry around 2002, but now they're being used across many disciplines. In healthcare, for example, emerging technology is using digital twins to create virtual emulations of physical human tissues, organs, and cells that adapt to fluctuations in data and forecast the future trajectory of the physical patient. In aerospace, digital twins are used to assess mission possibilities and facilitate astronaut training. But let's talk about the project I get to work on. Like I said, the University of Tulsa is collaborating with the U.S. Army Engineer Research and Development Center on the development of a Virtual Immersive Remote Sensing and Actuation (VIRSA) system. The system centers around digital entities that are interconnected and represented in virtual reality, and four buildings across the university are included in the project. The project has five teams working in tandem; we're each developing our own piece of the system and making it all work together.
The five teams are Knowledge Core, which is the hub of the data flow; sensors and networks; mobile robotics; cybersecurity; and virtual reality. Now, the process of developing the virtual system includes creating a digital model of each installation, replicating its main physical structures and the objects within it. When I joined the project in May of this year, the VR team had purchased a high-end scanner and was exploring what kinds of scans they could get with it, and I was tasked with finding alternate methods of creating models. I'm a pretty visual person, so I wanted to know: what does something like this look like already? What are the standards? Let's think about this room. If we were going to come in and model this room, can we use simple geometric shapes as representations? Can we finally find a use for that cube Blender gives you? Can a cube be that chair, and maybe a longer, wider one be the table? Or should I be able to tell the difference between this chair and one upstairs? Or can it be somewhere in between? Can this chair represent any chair, as long as the dimensions are correct? Unfortunately, there are not many real-world examples to pull from. I wanted to see a visual side-by-side of a physical entity and its digital entity, and even in the academic literature those are lacking. So we had to determine the most important features for our project. To be clear, this project as a whole is fairly exploratory; we're trying to figure out how we can push the boundaries of what's been done before. It's a very data-heavy project, and the VR team is only dealing with one side of that data flow. But let's consider this room again.
If we scan it and model it, and then the next group that comes in here changes it, say they want a dance floor in the middle, tables around the edge, and a disco ball, we don't want to have to spend all that time re-scanning and rebuilding our model. So we want models in which objects can be moved around and replaced. Another important thing is that we have mobile robotic units moving through our space and interacting with things. We have three of these Boston Dynamics robot dogs, and they can pick up items, open doors, and interact with objects. We need to make sure that the fidelity of the objects they're interacting with makes sense when it's visualized in virtual reality. Ideally you'd have all these really beautiful, high-poly models, but we're running on standalone VR, which means the headset itself is doing the processing, so we need to keep processing to a minimum. That means working with models as low-poly as we can, while making sure the fidelity of the key components the dogs will be interacting with is high enough that the interaction makes sense when visualized. One of the first things we tried was taking a CAD file from one of our buildings. If you don't know what a CAD file is, it's a digital building planning file with tons of information in it. We built this room based on that CAD file, using textures from images taken in the space, and it looked pretty good. The problem is that when a building is constructed, it's not always built the way it was planned. Maybe this wall over here is two inches farther out, or the angle where those walls meet is not quite right, which means you have to go back and measure everything to make sure your model is accurate, which is really time-consuming and prone to error.
Additionally, lots of older buildings, which we don't have as many of in the US as you do here, don't have CAD files at all. So another thing we explored was getting a floor plan of the actual space. We looked at a bunch of different software that would take photogrammetry or LiDAR scans. Photogrammetry, if you saw the presentation yesterday, it was great, uses a lot of overlapping images from which you can build 2D or 3D models, and it gives you a lot of visual information. LiDAR scanning uses laser pulses whose reflections give you distance information. We tried all these different software packages but couldn't get anything consistent. I'd scan this room and get three different floor plans, something different each time, and they were inaccurate by inches, which was not what we were going for in our project. We also came across a GitHub repository with a project that takes a floor plan and creates a 3D model in Blender from it, but it didn't quite produce the results we need either. So continued development on our VIRSA system led to two main workflows that produce models we can actually work with. The first is outsourcing to a company, which I'll just call "the company" from here on out, and the second is modeling in-house using Blender. I want to note real quick that our university's simulation and gaming degree path requires two 3D modeling classes, and starting in spring of this year they're taught in Blender, which is really cool. In both processes we start by scanning the space, with photogrammetry or LiDAR, using the high-end LiDAR scanner that we purchased for the project as well as an iPad Pro 11, which has LiDAR capabilities. So let's talk about outsourcing.
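As an aside, the distance a LiDAR unit reports comes from a simple time-of-flight calculation; here's a minimal sketch in Python (the pulse time is an illustrative number, not a figure from our scanner):

```python
# Time-of-flight distance: a laser pulse travels to the surface and back,
# so the distance is half the round trip. Illustrative values only.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance in meters from a measured round-trip pulse time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after 20 nanoseconds hit a surface about 3 m away.
print(round(tof_distance(20e-9), 3))  # ~2.998
```

Real scanners layer a lot of signal processing on top of this, but the core geometry is just that division by two.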
When we were first trying to figure out how to make these models, people came to us and said, hey, I've seen amazing things on realty websites. I can walk through this home that I'm thinking about buying. Can't you do something like that? It looks great. And it does look great, but those are just stitched-together photographs, which means that anytime something in the space changes, you have to re-scan. There's also no interactivity; you can't move objects around. So those were not going to work for us. As for the outsourcing company: you make your scan, upload it to their cloud service, and then there's a variety of packages you can purchase, ranging from floor plans to complete mechanical, electrical, and plumbing (MEP) system models. We purchased and tested several of these packages, and most of them gave us single-mesh models, where the whole space is one connected mesh, which doesn't work for us. You could try to go in, separate the objects out of that mesh, and clean it up, but it's generated from a point cloud. Can you imagine, with all those vertices selected? It's a nightmare, completely inefficient for editing and for rendering. So those were not paths we wanted to follow. The company also offers a BIM file option you can purchase. If you don't know what a BIM file is, it's a building information model. Similar to a CAD file, it has all the information you could need about your building, like geometry and materials, but it also has a hierarchy of objects: you tag your objects in certain ways and create this hierarchy of what's in your model, which is really cool. Our project does not require BIM hierarchy, but the BIM option does result in objects that are separate meshes, which means we can move them around, and that works for our project.
The only problem here is, as you can see, there are a bunch of different tiers with different objects included in them, and when you look at them, you don't really know what you're going to get. Online you can see the types of files you'll receive and the general idea of what's included, but you don't know what the model is going to look like, and you don't know whether something will be omitted because it doesn't strictly fall under furniture or MEP systems. One issue for us is that we have a whole sensor team putting a wide array of sensors throughout our buildings, and we need to know the location of each one of those sensors and be able to visualize them in virtual reality. So they need to be included in our models, and it's just not clear whether you're going to get that or not. What we did is purchase the second-tier BIM file, and this is it for one of our spaces. For in-house modeling, because that purchased package was expensive, we decided to compare what we could do in-house with it. We took that same room and modeled it according to what the BIM tier package included, so we included the same objects they included. Our process is scanning the space, importing those scans into RealityCapture, and creating a single-mesh model, which I said wouldn't work on its own, but then we import that model into Blender and use it as a spatial reference for building in Blender. You're really building inside that mesh. It's a little messy, but you still get a lot of spatial information, so we're building our models inside that single-mesh model. The modelers we have on the project are using Blender's native tools.
However, there are a couple of add-ons I do want to mention because I think they're really cool. CAD Sketcher and BlenderBIM give Blender CAD and BIM tool capabilities, so if that's a workflow you're used to, you can do it in Blender. It just wasn't the most efficient way for us to create our models. One of the most important differences between outsourcing and in-house is the precision of certain key features. Our buildings have many different kinds of door handles. It seems like a silly thing, but the difference between pushing and pulling, or using a lever, or having a keypad lock or a card scanner, those matter when you have mobile robots interacting with that space. As you can see here, the outsourced model came with generic handles on the doors, and on one door that doesn't actually have a handle, they put one anyway; they put handles on everything. In-house, you can model to the correct fidelity so the interaction reads well. So let's talk about time and cost. The turnaround after we purchased the model from the company was three days; modeling it in-house with one of our student modelers took eight hours. The prices you see here are in US dollars per square foot. That 420-square-foot room cost us $420 outsourced, and we took the company's estimates for the other tiers and calculated what those would have cost based on the accuracy of their estimate versus the price we paid. For in-house we're using student modelers, so to find a comparable money figure we used the median hourly wage for a 3D modeler in the US, $27 an hour, even though the efficiency, and probably the quality, of our student modelers won't be the same as someone who's been working in the field for a few years.
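The per-room comparison works out like this; a quick sketch in Python, using the figures above (the $27/hour wage, the 8-hour modeling time, and treating the room as exactly 420 square feet):

```python
# Per-room cost comparison for the 420 sq ft test room.
ROOM_SQFT = 420

# Outsourced: the tier-2 BIM package cost $420 for this room.
outsourced_cost = 420.0
outsourced_per_sqft = outsourced_cost / ROOM_SQFT  # $1.00 per sq ft

# In-house: 8 hours of modeling at the US median 3D-modeler wage of $27/hr.
inhouse_cost = 8 * 27.0  # $216
inhouse_per_sqft = inhouse_cost / ROOM_SQFT  # ~$0.51 per sq ft

print(outsourced_per_sqft, round(inhouse_per_sqft, 2))
```

So for this one room, in-house labor at a professional wage comes in at roughly half the outsourced price per square foot, before accounting for the company's subscription fees or the cleanup work on their models.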
That's how we got our pricing, and as you can see, there's a significant difference: it's a lot cheaper to model in-house. This doesn't take into account that the company charges a monthly subscription fee based on how many users are on the account and how many active spaces you have. It also doesn't account for the fact that when we receive these models, we still have to go back in and add anything that was left out, or edit things to the proper fidelity for those interactions. I also want to note that, moving forward with in-house modeling, we will begin to build up a library of objects. We can use generic objects when it doesn't matter whether this chair looks like that chair, and we can reuse heavily repeated items in a space, like doors, windows, tables, and chairs, that need that higher fidelity. Over the course of subsequent modeling we'll build up this library and greatly increase our efficiency. So, some considerations. The total area in our project is 73,305 square feet. I don't know if that seems like a lot to you; it seems like a lot to me. If we took the fourth-tier BIM package, which includes architecture, furniture, and the mechanical and electrical fixtures, not the whole systems, just outlets and things like that, so we could potentially include our sensors, it would cost us $118,754 for outsourcing versus $49,916 in-house. Like I said before, the outsourcing figure doesn't account for having to go back and spend time bringing the model up to fidelity. Another big difference is turnaround: the company estimated 35 days to return a project like this, versus, extrapolating from that eight-hour model, roughly 1,745 hours to model it in-house, which is about 43 and a half weeks of one modeler working 40 hours a week. That is not a good time return.
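For a sanity check on the whole-project numbers, here's the same arithmetic at scale, using only the totals stated above (these are our project's estimates, not general industry rates):

```python
# Whole-project extrapolation, using the totals from our estimates.
TOTAL_SQFT = 73_305

outsourced_total = 118_754.0  # tier-4 BIM package, company estimate
inhouse_total = 49_916.0      # student modelers at a $27/hr reference wage

# Implied per-square-foot rates and the overall cost ratio.
outsourced_per_sqft = outsourced_total / TOTAL_SQFT  # ~$1.62 per sq ft
inhouse_per_sqft = inhouse_total / TOTAL_SQFT        # ~$0.68 per sq ft
cost_ratio = outsourced_total / inhouse_total        # ~2.4x to outsource

print(round(outsourced_per_sqft, 2), round(inhouse_per_sqft, 2),
      round(cost_ratio, 1))
```

Note the implied outsourced rate per square foot rises at the higher tier (about $1.62 versus the $1.00 we paid for the tier-2 room), which is part of why the gap widens at project scale.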
Another thing to consider is the availability of modelers. On our project we have students taking time out of their studies to model for us, but other organizations may not have modelers available, and you have to consider how many modelers you'd need to find to complete a project like this in the timeframe you need. One thing we didn't actually calculate is what it would cost to combine outsourcing and in-house like I've talked about. If we did that, we would probably purchase the second-tier BIM package, architecture and furniture, and then add to it with in-house modeling. Tier two would cost us about $73,000, and it's hard to say how much modeling we'd have to do on top of that. I'm actually at time, so I'll just say one last thing. Over the last couple of days, it's been incredible seeing the ways people are using Blender, and as a new Blender user, seeing the continued development of Blender in general, not to mention the incredible add-ons and how they extend Blender's versatility as a tool across different disciplines, is kind of mind-blowing, especially for somebody who's just getting into this. Our project is a little different in that we're using Blender in the most basic way, but the relevant point is that even used in this most basic way, Blender increases the efficiency of our workflow in this extremely complex VIRSA system we're developing. So I just want to say thank you to Blender, and thank you all for being here today. If you have any questions or comments, I'd be very happy to talk with you afterwards. Thank you.