Hello, everyone, and welcome to the 3:30 p.m. to 4:00 p.m. session of the 2021 OpenSimulator Community Conference. In this session, we are pleased to introduce the presentation "AI Augmented 3D Visualization in the Metaverse via Scientific Virtual Observatories." Our speakers are Dr. Andrew Stricker, a.k.a. Spinoza Kunal, and Dr. Cynthia Culloyne, a.k.a. Lear Lobo. Andrew is an education innovation analyst with Air University's LeMay Center for Doctrine Development and Education. He conducts research in futures, the cognitive sciences, and artificial intelligence for professional military education, and in the collaborative design of assistive immersive 3D virtual and augmented reality simulations for complex problem solving among teams. Cynthia is a professor at Parker University in Dallas, Texas, and a VR researcher who has taught 55 university classes in virtual worlds. Her team won the $25,000 grand prize in the Mars Expedition Strategy Challenge, and she received the Thinkerer Award for virtual world education. Please check out the website at conference.opensimulator.org for speaker bios, details of sessions, and the full schedule of events. The session is being live streamed and recorded, so if you have questions or comments during the session, you may send tweets to @opensimcc with the hashtag #OSCC21. Welcome, everyone. Let's begin the session.

Hey, thank you, Galen. Andy and I are excited to be here. He's going to be doing a lot of the talking, and I'm going to be driving the SpeakEasy and chiming in with ideas. I just want to thank everyone for joining us today. Andy, over to you.

Thank you, Lear. It's a pleasure to be able to chat about the work we've been doing with artificial intelligence and machine learning in our prototyping of learning simulations.
And today, what we thought we might do is talk a little bit about how we've linked up some of the tool sets we use from AWS AI with our simulations, and the interactive parts of those simulations that our participants, mostly students, try out, which gives us a lot of opportunities to see how the data is useful in giving them feedback as they go through the simulations. The slide you see here highlights something that has become really popular across various universities and research centers: virtual observatories. These are basically hubs that support various forms of science, whether in astronomy or biomedical informatics, and there are applications in physics with the federal laboratories. We feel that with a metaverse there's a tremendous opportunity to employ these kinds of visualization technologies in pursuit of science and also, of course, education. One thing that has proven extremely helpful is being able to model complex relationships and show the relationships among variables to help explain phenomena. Nancy Nersessian, who is now at Harvard, did a lot of the pioneering work in model-based reasoning that we've benefited from over the years. At the bottom of the slide are two popularly known examples. One is a recent endeavor, published on the 8th of December this year, describing a 3D visualization built from observatory data of a supernova. What they learned from these visualizations is a new understanding of the phenomena, because you see things differently when you can see the dimensionality. The one on the right side is NASA JPL's effort, working with MIT, to create a data visualization of the universe, a very ambitious project.
And these models, by the way, require supercomputers to formulate the structures and benefit from them. What we do, for example with the JPL model, is bring simplified versions of the models into our simulation. One in particular is our Mars expedition, where the whole expedition is a set of clues that you have to decipher to interpret what in the world you're looking at. We've had a lot of fun with this with the participants who have gone through the simulation. Lear, do you have anything you'd like to share on that?

Absolutely. Well, I added a little more wordiness because I know it's hard to read our slides, right? So I gathered the slide content for you. And I wanted to explain that, yes, we simplify the model in-world, and then we also think about its complexity, and we gather all kinds of data that we then run through a process that we're going to illustrate in this talk. Okay, go to the next slide.

What we've done is put together various tools in our hub, and the tools support the 3D simulations and visualization. I'll work my way across the top row and down to the bottom, from left to right. Composite-wise, the hub that supports our AI resources and tools is accessible through the web, and you can go and explore these tools in the new planetarium that we have shown here at this conference in honor of Dr. Barbara Truman. I'd just like to share that Dr. Barbara Truman, with our group from Virtual Harmony, was very involved in helping us develop these resources and tools, along with Lear and the others from Virtual Harmony. From the hub, you can access a bibliography. We've been tracking research publications in artificial intelligence for quite some time.
And when we see something related to the work we do, we create a synopsis of that work and cross-index it across several categories and subcategories, so you can do fairly involved searches, and we include information about the nature of the research and so forth. We have several people from across higher education, industry, and the government. One thing I do want to highlight is a disclaimer: we have people from the military who have come in and used our resources, but the government does not officially endorse the resources. And we do have students, past and present, who access some of these tool sets. In the next part of the AI resources, you'll see a picture of our simulations. We started out with the Mars Expedition simulation with AI tools, and then we went to our Grand Prix environment, where we were tracking data from the virtual Grand Prix races. About a couple of months later, we found out that the actual Grand Prix circuit is using AI, so that was kind of a fun thing to find out, that the application of AI and machine learning techniques is being used by the real Grand Prix circuit. We also do assessments. Our AI tools assist us with giving feedback to people: after they go through a simulation, we collect enormous amounts of data, feed it to the AI engines, and they give us estimates of where we can expect their progress and development to take place. So we've got these developmental models that we've been constructing, and we're very interested in this at a professional development level. A lot of the assessments we make try to help people understand, across critical thinking skills and problem-solving abilities, what development might look like across a lifespan.
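[Editor's note: as a hedged illustration of the feedback step described above, the last stage, turning a model's estimate of a learner's level into developmental feedback, might look something like this sketch. The function name, bands, and thresholds are invented for illustration and are not the team's actual system.]

```python
# Hypothetical sketch: map a model's 0-1 estimate of a learner's
# developmental level (produced by the AI engines from simulation data)
# to a feedback band. Thresholds and wording are invented.
def feedback_band(predicted_score: float) -> str:
    """Return a developmental feedback band for a post-simulation estimate."""
    if predicted_score >= 0.8:
        return "advanced: extend with open-ended problem variants"
    if predicted_score >= 0.5:
        return "developing: targeted hints on weaker clue types"
    return "emerging: guided walkthrough of core concepts"
```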
So this is very important in the field right now in professional development: helping people understand what it looks like to evolve in your cognitive and complex reasoning skills, including moral and ethical reasoning, so we assess those levels as well. At the bottom row, you'll see we've got a blog. We generate a blog post each week, and we've been doing this now, Lear, for almost two years; we're going into our second year in January. So we've got quite a collection of weekly blog topics, and we've generated a document with several chapters and an index of each of the blogs as they relate to a framework. We help people who are brand new to artificial intelligence and machine learning: we've got several topics that basically describe what artificial intelligence is compared to machine learning, and we relate that to areas of research in the cognitive sciences and several other disciplines that employ AI and ML.

Yeah, Andy, on that thought, I'll chime in. My students love the bibliography site and the research sites, because we use them in my business intelligence class, in my decision support systems class, and also when I teach advanced topics in databases, where we think about NoSQL databases and how all of these different technologies integrate. So thank you for that.

Oh, absolutely. And as you've seen and I've seen, the people using our resources come from all over, at various levels from master's to doctoral and postdoctoral efforts. They give us wonderful feedback, and we've truly benefited from it. The next one over on the bottom row is our seminar. We have small modular seminars that we've put together over the past couple of years, and they cover some of the applications of these areas that are interesting for people in certain domains.
And this brings it more to the forefront, because when a lot of people are first becoming aware of artificial intelligence and machine learning, they're thinking, okay, what can this do for me in my particular domain? So we bring up a lot of examples from medical practice and industry, and also the military. Then the learning neural net is something we've been working with for quite some time. We run a very small neural net on our servers, but we also tap into the Amazon tool set, and if we didn't have that, we wouldn't be able to do a lot of the more advanced work. I'm going to jump into probably more detail than you care to hear in a minute or two, but we'll just walk you through what we do with some of the neural net processing. And then finally, in the other block there, we have the atmospherics. This is where we translate the nature of what's being done in AI and ML for senior executives. The atmospherics site tracks emerging breakthroughs and capabilities as we become aware of them, and it's got automatic trackers that pull from published resources that get posted for sharing. Okay, Sam, can you go to the next one?

I have one thing to add here, Andy. We have six minutes left and 17 slides, so I just wanted to give you a heads up.

I couldn't ask for a better timekeeper. We'll move right along. This is our architecture. Some of you may have seen this before, but this supports the entire background of Virtual Harmony and the various parts of the hub, as we call it. Over on the right-hand side is the flow we use when we do some of the larger machine learning work with SageMaker. SageMaker is a really nice set of capabilities offered by Amazon, and I have to tell you, it is very computationally intensive, so we turn it on when we need to use it. Next slide, please, Sam. So here's what it looks like when we turn it on.
With the console that is offered, we set up what we are running to train our models on, and from there we can track all the different components as we set the machine learning flow in place. Usually, as with some of our simulations in the past, we'll run the flow for about a week. Those of you who do this work know these are very large, very high-computation virtual machines, so it's about all we can do to keep it running for a week. Next slide, please. Okay, good. This is what it looks like when we first set up our Jupyter notebook. We load example data in; this kind of example data can come from earlier prototypes and data we've collected. Then we want to train, validate, and test the model. What you're doing with this capability is generating enough confidence in what is associated and related to help explain certain parts of the framework you want the AI/ML to assist with, in predicting where people are going to fall. If you're using it for a learning environment or learning simulation, you would use this to help you give feedback to students based on predicted levels of understanding or performance. Next slide, please. The next part of the flow is getting ready, with this code, to move the data into cloud storage. These data sets can be fairly modest when you first start out, but they become very massive; as you can imagine, government data sets are humongous. So cloud storage is really essential to be able to do this. Next one, please. Once your data is loaded into the cloud, you basically cycle through the model, using the data to train it.
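[Editor's note: the notebook flow Andy describes, load example data, split it into train/validation/test sets, and stage it for cloud storage, might be sketched as below. The column names, split ratios, and bucket path are hypothetical stand-ins, not the team's actual data or setup.]

```python
# Hypothetical sketch of the notebook steps: load example data, split it
# 70/20/10 into train/validation/test, and write CSVs ready for upload.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Stand-in for simulation telemetry collected from earlier prototypes.
data = pd.DataFrame({
    "clue_time_sec": rng.normal(120, 30, 1000),   # time spent per clue
    "hints_used": rng.integers(0, 5, 1000),       # hints requested
    "completed": rng.integers(0, 2, 1000),        # finished the expedition?
})

# Shuffle, then carve out 70% train, 20% validation, 10% test.
shuffled = data.sample(frac=1.0, random_state=0).reset_index(drop=True)
n = len(shuffled)
train = shuffled.iloc[: int(0.7 * n)]
validate = shuffled.iloc[int(0.7 * n): int(0.9 * n)]
test = shuffled.iloc[int(0.9 * n):]

for name, frame in [("train", train), ("validate", validate), ("test", test)]:
    frame.to_csv(f"{name}.csv", index=False)

# From here the CSVs would be uploaded to S3 before training, e.g.:
#   boto3.client("s3").upload_file("train.csv", "my-bucket", "sim/train.csv")
# (bucket and key are placeholders)
```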
And basically the training calculates weights for how much each variable contributes to a good, reasonable prediction, and it gives you confidence factors for the model and how useful it's expected to be in actually giving you the predictions you're looking for. Next one, please. After you run the data through the training and validation process, a report is generated with several metrics for the model. At the very top, you'll see the overall accuracy expected of the model; in the one I'm showing you, there is a 0.86 rating, and that's really good, that's what you want to see. Next, please. You can also plot out the expected predictive strengths of the model, in terms of what you would expect if it were actually employed, which again gives you greater confidence. Most often, especially when you first do this, you're going to see several areas that can't really be explained by the machine learning process, and you have to go back and clean the data, tweak it, and work it. I don't want to lead anyone to think this is a straightforward process; you have to iterate back and forth, cleaning your data. Next one, please. This final output from SageMaker shows you a cutoff and the loss functions. What we do then is take this type of data and feed it into the algorithmic responses in the 3D learning simulations; this is the kind of information that gets translated into helping the algorithms give the feedback. So this screen here that Lear has brought up is actual, oops, you went back. Can you go forward again?

Sure. We're out of time, though, Andy. But go right ahead.

Okay. My clock shows nine minutes. So are you saying to stop? Is that what you're saying?

Yeah, we're supposed, well, no, we're going to wrap.
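[Editor's note: as a generic, hedged illustration of the training cycle described above, iterating over the data, adjusting weights, and arriving at a loss and an overall accuracy like the 0.86 figure on the slide, a plain logistic regression with gradient descent can stand in for the managed SageMaker training job. All data and parameters here are synthetic, not the team's.]

```python
# Minimal sketch of what training produces: per-epoch loss, learned
# weights, and a final accuracy metric. Synthetic data stands in for
# the cleaned simulation data; this is not SageMaker itself.
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic features and labels.
X = rng.normal(size=(800, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w > 0).astype(float)

w = np.zeros(3)          # the weights the training cycle adjusts
losses = []
for epoch in range(200):
    p = 1 / (1 + np.exp(-(X @ w)))                     # predicted probabilities
    losses.append(-np.mean(y * np.log(p + 1e-9)
                           + (1 - y) * np.log(1 - p + 1e-9)))  # cross-entropy loss
    w -= 0.1 * (X.T @ (p - y)) / len(y)                # gradient step on weights

# Overall accuracy, analogous to the headline metric in the report.
accuracy = np.mean(((1 / (1 + np.exp(-(X @ w)))) > 0.5) == y)
print(f"final loss {losses[-1]:.3f}, accuracy {accuracy:.2f}")
```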
So I was going to go ahead with the slides, because you have some cool ways of looking at the results right here that show the inputs, the processing, and the outputs. Did you want to finish with that slide?

Well, I'm not sure. If we're out of time, we should just quit then, if that's what you're saying.

Well, I wanted to wrap on this really cool content you have back here, because it really illustrates the power of what you're trying to convey. I just love these slides. And yeah, we were supposed to stop at ten minutes till; I'm sorry, Andy.

No problem. When we get together, we have such a great time. We did a talk once that was a five-minute Ignite, where we had 15 seconds per slide. That was a wild time. Anyway, any final thoughts, Andy, on this process? Where are we going with this in the future?

Well, my train of thought is lost, but probably the best thing for people to do, if they're interested in the work, is to reach out to us, and we can spend some time going into the benefits of mixing metaverse 3D environments with the tool sets of AI and machine learning, particularly if you're educators and you want to give really considerable precision in the feedback as people are using 3D simulations and 3D models. So thank you very much for your time.

Okay, Andy, we had some wonderful comments. I just wanted to close on this. Kayaker and the others, I don't know if you've seen the text chat, but they are stomping and said, show the slides, we love your slides, wonderful session. So I want to thank you for all your support. You may have missed it at the beginning, but I explained to them that you do everything. You are a multi-talented, multifaceted person who's an expert at 3D mesh, as well as programming, the cognitive sciences, innovation analysis, and a dozen other skills. He textures, he scripts, he does it all, folks.
So I am blessed to have such a wonderful teammate. Thank you, Andy. Thank you, everyone.

Well, I'm very grateful to be part of this conference. Thank you.

I'm sorry that we have to end it, you guys. This was fascinating. Thank you, Andrew and Cynthia, for an informative and interesting presentation. As a reminder to our audience, you will want to check out conference.opensimulator.org to see what is coming up on the conference schedule. You won't want to miss our next session, which will begin at 4:00 p.m. in this Keynote region and is entitled RezMela Composer, A Proven Rapid Content Creation Application for OpenSimulator. Also, we encourage you to visit the OSCC21 Poster Expo in the OSCC Expo 3 region to find accompanying information on presentations, and explore the hypergrid tour resources in the OSCC Expo 2 region, along with sponsor and crowdfunder booths located throughout all of the OSCC Expo regions. Thank you again to our speakers and the audience.