We are going to introduce this instrument, this resource that was acquired from the Hearst Museum a few years ago. There are a lot of opportunities for research in archaeology, so I'd like to introduce it to the community and let people know it's here. Well, thanks everyone. It's great to see you, both the people in person as well as online. Thank you. I'm Chris Hoffman. I actually did a PhD here in this department. I was telling some folks earlier, I wrote my thesis right there in that other room, and spent many years with my advisor, Ruth Tran, who's actually here in the room. So thank you, Ruth, for coming and joining us today. Yes, as Nico said, what we're doing is launching what I hope will become a new tool in our toolkit: a visualization resource that was at the Hearst Museum of Anthropology for a number of years and is now available here in the Archaeological Research Facility. We're going to talk a little bit about what we've done so far, what this visualization wall is, how it's been used in the past, and then some ideas we might have for future use of this facility. Then I think we can have some discussion, both with folks online and here in the room, about your ideas for how this could be useful for archaeologists and for archaeological research more broadly. So this is just the outline here. Nico and I have been working on different ideas for ways to use this as a resource, and then we'll have some open discussion. So what is the Arcade? And by the way, the name Arcade is a placeholder. We can call this what we want, right?
Part of this is to invent this together, and so that name is not its permanent name. I would really ask you all to help come up with a name that seems more suitable for this as a resource for archaeologists, including graduate students, faculty, people within anthropology, and people in other departments across campus. CAVE stands for, and I think this is right, but it's a little bit of a guess, Computer Aided Visualization Environment. This is a technology that was developed in the 90s, in the early days of virtual reality. Computer scientists, we have some colleagues at UC San Diego, were really interested in virtual reality but also in large-screen display environments. And I'll tell a little story there. One of the archaeologists there, Tom Levy, was really motivated by getting away from really expensive proprietary systems. If you work in the museum industry, you know that you can spend hundreds of thousands of dollars on a visualization environment, and one company owns it: they own the hardware, they own the software, and they install it and run it for you. He was really motivated by the idea of going to a store, buying a computer, buying some screens, and just making it work. So he was really driven by coming up with affordable approaches to what would otherwise be very expensive. This is essentially just a Windows 10 computer that has 65-inch TVs attached to it. The computer is a pretty powerful one. One of its features is that it has two GPUs, graphics processing unit cards, that are really designed for computer games, video games. But beyond visual processing of data and information, there are other applications that are really interesting within archaeology, where we have a lot of data and there's so much in the visual environment that we would really love to capture and present in different ways.
What this is, the thing, is actually a computer; you can see cables sticking through that window there, and the computer itself is in the back room. Right, so where did it come from? Well, UC Berkeley participated in a UC-wide project on at-risk cultural heritage, at-risk world heritage, and the digital humanities. I'm trying to speak into the microphone here without looking at my slides. This was actually a partnership among four University of California campuses, led by colleagues at UC San Diego. The PI here at Berkeley was Benjamin Porter, who at the time was the director of the Hearst Museum of Anthropology. The idea of the project was to distribute these visualizations of cultural heritage sites to the different UC campuses, make them available in museums and libraries on those campuses, and have a repository of the visualizations in one place down at UC San Diego. They involved people in the libraries at UC San Diego to be part of the solution there, actually curating content from Egypt and the Middle East, especially cultural heritage at risk. The tagline was: at risk from natural forces of erosion or human forces of terrorism. One of the big motivations was that these were archaeologists working in the Near East, very concerned about sites they could no longer get back to. So we brought one of these visualization environments, this one here in fact, to the Hearst Museum of Anthropology, and we installed it right when the Hearst Museum was reopening after its renovation, approximately 2017-2018. It was a really wonderful experience where we worked with students, anthropologists, and museum professionals to do a number of things. One of the main focuses was photogrammetry: developing three-dimensional models of objects in the museum collection.
So you can see here, we have a couple of objects that we've worked on. Now, this was myself and a number of undergraduate students from campus. I had never done photogrammetry before, nor had the students, so we worked with people like Michael Ashley, a former and current colleague of ours who has done photogrammetry training with other people and has really developed this skill, but we were largely self-taught, and I always treated this as an experiment. We made a lot of mistakes. You can see there's a picture on the left there with Ben Porter at the museum collection facility doing some initial photography. We worked with other museum staff, Ben Porter there, and students, and then we branched into other objects and collections, like Japanese netsuke. Engaging students here actually turned out to be one of the highlights of the work we were doing. They were so excited to be learning about museums, learning about the kind of work that goes on there, getting that behind-the-scenes understanding of what happens in a museum. There's a lot of space in the classroom area of the Hearst Museum of Anthropology, where we would have six or seven people; Michael Ashley is there on the right of that photograph. And we actually used this wall not only to display the models but to develop them, because those two graphics cards, the GPUs that are part of this computer, are really good at taking all these photographs of objects and rendering a three-dimensional model. That was one of the undiscovered features of having this particular kind of computer. One of the things that we discovered is that visitors to the museum were as interested in the work we were doing as they were in the exhibits showing in the museum.
So we actually ended up putting our hours of work on the Hearst Museum's calendar of events, because people wanted to know: what are you doing, what are the students doing there, what's that object, what did you do wrong, that three-dimensional model is obviously not working very well. That was actually really interesting. From that, we launched a series of other projects in this visualization space. We became part of a grant-funded project called Visualizing Digital Scholarship in Libraries and Learning Spaces. North Carolina State University held the primary award and Berkeley was a sub-award. What we did there was work with some of the other museum collections on campus, the Botanical Garden, the Herbaria, the Berkeley Art Museum and Pacific Film Archive, and the Library, to develop three-dimensional models of objects in other kinds of collections. This was an opportunity to talk with people from around the country who had similar goals of developing these kinds of visualization resources. We were also able to install a second one of these CAVEs over on the other side of campus in the CITRIS building. CITRIS is the Center for Information Technology Research in the Interest of Society, and we installed a very similar environment in their tech museum. If you've ever been over to CITRIS, they have a tech museum where they show things that were built in their maker spaces and some of their other projects, and we have a facility like this there as well. Now, that's interesting to me because that's an engineering context, right, so it's different, and they have a lot of visitors coming through for the very high-profile projects over there. So, a different kind of environment.
We're also trying to figure out how we make this resource useful in that context; the pandemic was a big interruption to the project. Now, the next project this launched was a project with Rita Lucarelli in Egyptology here at Berkeley. She got funding from CITRIS to develop a virtual reality application. For those of you who know Rita, her specialization is understanding texts, say from the Book of the Dead, in their material, three-dimensional contexts. For instance, she has developed three-dimensional models of sarcophagi in the Hearst Museum collection and in other collections around California and beyond. She's interested in why certain parts of the text of the Book of the Dead were placed on different parts of a sarcophagus or some other material representation. It's really interesting, and that's facilitated by developing a three-dimensional model. So I worked with Rita, and with an Egyptologist down at UC Santa Cruz, Elaine Sullivan, to develop a proposal for a virtual reality application that would start at the level of the landscape of the region where these objects were found. From there you descend into the tomb, you see the sarcophagus, and then you can actually pull up translations of the hieroglyphs. These are some of the raw materials we were using as we were designing the application, and these are some screenshots from the actual VR application, which I can maybe show some video from later. We start with a kind of splash screen: in the upper left there are credits, at the lower left there's a map of where the site of Saqqara is located, superimposed over the sarcophagus model, and there's a button that says enter. From there, you end up at the level of the landscape.
This is based on work by Elaine Sullivan, who has developed what she calls a 4D GIS model of this landscape; she visualizes what the site looked like at different periods of time. Right. So you start at the level of the landscape, and then you descend underground. You can kind of see there on the right these stairs into burial chambers that go 25 feet underground. You can enter the tomb, where you see the sarcophagus, nicknamed "the doctor," and from there you can use your pointer to click on parts of the hieroglyphs and bring up translations; it highlights the hieroglyphs and then shows you some of the translation. And we did this with students too, and it was really challenging to actually get to a finished product. I'll say that Rita and Elaine are thinking about the next steps with this project. They're actually down in, I think it's Irvine, right now at a conference doing some demonstrations of this, and I was helping make sure the application still runs so they can do their demos. But as we thought about what we've learned so far in this project, it's been really interesting. First of all, we were dealing with materials that were excavated under different circumstances, around 1903-1904. The actual stone sarcophagus that we were using in our experience is in the Hearst Museum of Anthropology; when you're able to go back in there, it's actually in the entrance lobby of the museum. But most of the other information we're using is from publications from 1903-1904, which have some maps, some drawings, and a lot of text. Really generating a three-dimensional understanding of the whole context is very challenging. So there was some rediscovery of the archaeology of the context that we had to go through in order to develop this model.
And I won't read all of these; this will also be available in a recording that we'll post later. But this is really challenging, currently, because the technologies we're working with are like the Wild West. The hardware is evolving, the software is evolving, and there are no standards for file formats. And we're working with two Egyptologists who work on the same kind of site and cultural context. One is developing three-dimensional models of sarcophagi; the other is doing GIS work and building reconstructions. They're both creating three-dimensional models, but the models don't talk to each other. They're completely different kinds of landscapes, and it's really challenging to bring them into one model. So we had to make a number of trade-offs. For instance, Rita and Elaine were really concerned about authenticity and accuracy, representing what we know and what was actually there. But to create a virtual reality experience where you're wearing a headset and using controllers to move around, we had to make a few sacrifices to build an immersive experience that captured what was really there and also told the story that we wanted to tell. So, an interesting challenge there. I was struck by this recently when I was helping Rita get ready for the demo that she's doing next week: in terms of examples out there to follow, of what a good digital cultural heritage virtual reality experience should be, there just aren't very good ones. What we did, with a student programmer and me working part time on the side, almost stands out as one of the best examples. Obviously, I'm not objective. But when I compare it to things that I've seen, in places like the Oculus Quest or HTC Vive stores, there's not that much out there.
I'll come back to that, because I think what we're seeing is that the best examples are in various kinds of computer games, where we've seen those kinds of environments, and cultures represented in a certain way, in a game context. Those people obviously have a different agenda, right: they're developing games, they're trying to make money, and they're the ones really investing in representing cultures to the end of having a game that sells. I really feel, and I'm speaking for myself here, that we have a responsibility to do work in this space, so that we can be using this information, presenting it, and having the kinds of dialogues that we know we need to have to be responsible with the kinds of cultural information we'd like to be able to share with others. That's a big part of my reason for doing this. Now, another project I've been involved with coming out of this work is that we've developed an XR community of practice on campus. XR stands for extended reality: virtual reality, augmented reality. A colleague and I formed the XR community of practice. We have a small pot of funding from campus; mainly we're hiring students, buying a little bit of equipment, and trying to bring together the different pockets of excellence that do exist on this campus. We have people next door in Architecture who are doing really fascinating work in virtual reality. We have people in the College of Engineering and in computer science who are developing hardware and software for virtual reality and augmented reality, but nobody is doing anything campus-wide.
And since I work in a central group here on campus with my colleague Rick Jaffe, our perspective is campus-wide: what kinds of services, spaces, or tools would be helpful to people working in different disciplines? This is what we call service design. What would you as archaeologists like to have available? So that's what this is, an experiment in that vein. We have four main things that we're looking at in developing this community of practice. One is accessibility, and we mean both senses of that word. How can we make sure these technologies are available to people with all sorts of abilities and disabilities? Obviously, for something like this, somebody who has vision impairments, how can they still not be left out of this kind of experience? A classic example: we've all been using Zoom a lot for classrooms and for experiences like this, and there are now virtual reality spaces available as well. How do we make sure that we're not leaving anybody out if we're using some of these social VR environments as part of our learning and public service missions here at the university? But accessibility is also about access to content and to technology that is very, very expensive. How do we make sure that we're being inclusive? Second, and I think this will be of great interest, is curation. How can we make sure the content we're developing is high quality and reusable, so that information doesn't get lost in a file format that isn't supported anymore? What can we do to make sure that people can find the content that was developed 10, 15, 20 years ago at another excavation, for instance? How do we make sure we're doing the right thing with the information that we have? And how do we make it less hard to take something from a GIS application and something from photogrammetry and bring them into one experience?
And then there's a whole host of ethical issues around virtual reality in particular: the information these headsets gather by tracking, such as what you are actually looking at, and people are starting to build in other kinds of biometric sensors. Are you sweating? Are you getting nervous? Those kinds of things. That is data that's very sensitive, obviously. So what do we need to do to make sure people know, as they use virtual reality and augmented reality, what kind of information is being gathered in the background? Okay. We've also got a bunch of experiments, developing some applications that work on the iPhone and other devices. I won't go through all of that, but I'll be referring to it. All right, so let's look at what we can do with this now. I'm going to talk about two things that build on work I've been doing, and that includes a demonstration, a demo, of that Catalyst project I started talking about. For this facility here, we have an application developed by the computer scientists down at UC San Diego. In that application, you start at the level of the globe, and there are several sites where we brought 360-degree images into the application. For instance, I can say that I want to go to the site of Luxor and use some 360 photos. This one's very dark, but let's get to a more well-lit area. You can see that we're able to navigate around this site in 360 degrees. Here's the next site, the next 360 video, or image rather, and we can look around this location. This was the original purpose of this CAVE: to share these kinds of images so that if you couldn't go to Egypt, you could go to a museum or a library and see this space. You can see we do have some archaeological sites. This is a site that we did work on.
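As a rough sketch of what a 360-degree viewer like this is doing under the hood (this is not the UC San Diego application's actual code, just an illustration): an equirectangular panorama stores the whole sphere in one image, and rendering a view direction amounts to mapping a yaw and pitch angle to pixel coordinates in that image.

```python
def equirect_pixel(yaw_deg, pitch_deg, width, height):
    """Map a view direction to pixel coordinates in an equirectangular
    360-degree image. yaw_deg: degrees east of the image center
    (-180..180); pitch_deg: degrees above the horizon (-90..90)."""
    # Yaw wraps horizontally around the full image width.
    x = (yaw_deg / 360.0 + 0.5) * width % width
    # Pitch runs from the top row (+90, straight up) to the bottom (-90).
    y = (0.5 - pitch_deg / 180.0) * height
    return x, y

# For a 3600x1800 panorama: looking straight ahead lands at the center;
# looking 90 degrees up and 90 degrees right lands at the top row.
print(equirect_pixel(0, 0, 3600, 1800))    # (1800.0, 900.0)
print(equirect_pixel(90, 90, 3600, 1800))  # (2700.0, 0.0)
```

A real viewer does this per screen pixel (usually on the GPU) to resample the panorama for the current head direction, but the coordinate mapping is the core idea.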
There's marine archaeology that he's doing down there; there's an archaeological context, and this is a way to put somebody in that location so we can learn more about it. Theoretically, we can add more 360 images to this same application; that's one of the things I'd like to explore, how we can add to this application. When we can get back into the Hearst Museum, we can certainly do more photogrammetry and develop 3D models from other collections, and we actually have the materials and tools right here at the ARF to do some of that work. There's also photogrammetry of whole archaeological sites; as a way to develop a three-dimensional model of an archaeological site, this is a resource we can take advantage of, and I think Nico is going to talk about something related to that. And then I'm really interested in continuing to work on applications and experiences in AR and VR, and in whether there are others who want to explore how to take some of these three-dimensional representations, tell stories through them, share our interpretations, and use those platforms to allow others to have a dialogue with us about cultural heritage. I know Meg, for instance, has a bulletin board over there in the corner, and she and I talked a little bit about what we could do to have this be a platform for those kinds of conversations and for teaching. Now then, I also want to have some conversation about how to do this appropriately. When I was working with the students at the Hearst Museum of Anthropology, we were trying to find some objects that would be a good collection for them to work on, and that would also showcase why this was good work to do. Katie Fleming, then at the Hearst Museum of Anthropology, brought us to a collection of Japanese netsuke. These are objects that are really wonderful, often carved in ivory or wood.
They're typically part of a man's garment, used to hold a sash and a little external pocket that would hold things. These netsuke are very distinctive, very collectible, and the Hearst Museum has an interesting collection of them. So we developed 20 or 25 three-dimensional models of these netsuke from the Hearst Museum. And I'm sorry that Jiko Abu can't be here, because when she saw some of these models, she said: these are awful. They're clearly for the tourist trade. I don't really think you should show those here. That really brought home to me that we have to be having a dialogue with people. It's not just about showing fancy things; we have to be really thinking and talking with people about what we're doing and why. Because as we know, archaeology and museums are in such a special place right now. It has been so hard, and it will continue to be so important, that we have those dialogues. I think this platform can help us have those conversations, because we all know that just because something is digitized and is a picture doesn't mean it's any less sensitive or culturally important to somebody. How can we as archaeologists really advance how we have those conversations? Okay, those are the kinds of things I'm really hoping to get out of bringing this back to my academic home, where I started many years ago. So I'm going to now turn over to my colleague Nico, my collaborator, and I will turn off the application. So, I've been trying to think of research applications of this. In addition to the photogrammetry potential that Chris mentioned, this machine has two graphics processing units, which for 2017 was pretty high-end; it has dual graphics cards.
Those can be used by photogrammetry software for processing the hundreds or thousands of photos that are combined to calculate a three-dimensional point cloud. So that's one possible application, which we've demonstrated. One sort of obvious use is Google Earth. We have a skylight in this room, so there's some glare, but you can explore here. You can explore and plan your survey research, and the difference from just using your laptop, or even virtual reality goggles, is that more than one person can stand here. That's one possible use. You can see how constructed spaces look, so let's fly over here. In the case of urban settings, in some places Google actually has 3D building data; we'll fly down the corridor here toward the Sony Center. That's another possible use. Then there's GIS software, geographic information systems, for quick display of GIS data, and we have ArcGIS, the evolving versions of ArcGIS. One thing it supports now is more integrated three-dimensional models, including what's called a voxel. In GIS we have vectors, which are objects, and rasters, which are essentially, think of a satellite image or a digital elevation model; those are classic examples of rasters. A voxel is a three-dimensional raster: instead of just rows and columns of equally sized grid squares, you have cubes, equally sized cubes. There are scientists using voxel models, so you can take a set of subsurface strata, or larger features that have been mapped, and represent them as voxels. You could potentially use this to cut across an archaeological site that you've mapped with ground-penetrating radar. The example shown here was seismic data. A third example I thought I'd demonstrate:
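To make the voxel idea concrete, here is a minimal sketch in Python with NumPy (illustrative only, not ArcGIS code): a voxel model is just a 3D array of equally sized cubes, and "cutting across the site," as you might along a ground-penetrating-radar transect, is an array slice. The layer names and thicknesses below are invented for the example.

```python
import numpy as np

# A toy voxel model: a 3D grid of equally sized cubes, each storing a
# stratigraphic layer ID (0 = topsoil, 1 = midden, 2 = sterile subsoil).
# Axes: x (east), y (north), z (depth, increasing downward).
nx, ny, nz = 40, 30, 10
voxels = np.zeros((nx, ny, nz), dtype=np.int8)

# Assign layers by depth, with the midden thickening toward the east.
for x in range(nx):
    midden_top = 2                 # topsoil always fills the top two cells
    midden_bottom = 3 + x // 10    # midden thickens from 2 to 5 cells
    voxels[x, :, midden_top:midden_bottom + 1] = 1
    voxels[x, :, midden_bottom + 1:] = 2

# Cutting across the site is just a slice: a vertical profile along the
# x axis at a fixed northing, analogous to a GPR or excavation section.
profile = voxels[:, 15, :]   # shape (nx, nz): an east-west depth section

print(profile.shape)         # (40, 10)
print(np.unique(profile))    # layer IDs present in the section: [0 1 2]
```

A real GIS voxel layer adds georeferencing and rendering on top, but the underlying data structure, and why a cross-section is cheap to compute, is exactly this.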
That is almost like a stratigraphic profile at an archaeological site, so I thought it would be easy to pull up some actual profiles. Here I have an image from two of our scholars; we've created some high-resolution images of excavation profiles. You can see the depiction here, and this is where the doorway in the wall was. I should give you some background: this is a profile from a different part of the site. This is where the doorway was, and a post mold; you can line it up with what's in front of you. We have this midden layer below, which is stacked, and the little doorway. This is a profile from Shibita, where we have other sources as well. So that's another potential use of this system: image-comparison software. This is a Java program that was written maybe ten years ago; fortunately, it still runs. We can start with a comparison picture, so you can see it here in the software. You can really zoom in, and potentially gather around a profile like this and discuss the stratigraphy and the sequence you're seeing. One of the features of the software is grouping images, so you can see different sets of contexts or projects. It offers a potential for gathering around these large images. Are there any questions on this? Let's see, we have a question online: could this screen and computer be used to display what's happening inside someone's VR headset, to make a VR experience more shareable? Yeah, absolutely. It kind of depends on the equipment you're using, but the Oculus Quest now has a pretty good means of casting what you're seeing in VR onto a computer screen. So yes, that's a great idea and something I'm interested in exploring for sure. Yeah, that's true.
Yeah, it all depends on the particular application you're working with. Yes, Christine. [Audience question, partly inaudible, about how these 3D models are made: how many photographs do you need, and how are they processed?] So for people on Zoom, I'll repeat the question: how do we create these three-dimensional models, say of an archaeological site? Do we just take a lot of photographs? There are a number of ways to generate these three-dimensional models now, these photogrammetry models. In photogrammetry, yes, we typically take some large number of photographs that have to overlap; I think 30 to 40 percent overlap is about the ideal. And I skipped over the whole difficulty of taking the photographs, the exposure and so on; it's a garbage-in, garbage-out situation. If you take bad photographs, you will not get a good model, so taking good photographs is really key. That's one of the lessons I learned the hard way early on. So taking good photographs that overlap 30 to 40 percent, or even more, is really important. And from different angles as well: straight on, from above, and from below. Then you use software, and what the software does is take all of those images and use the overlap to find points in common. I think you've heard Nico refer to a point cloud. What the software is doing is trying to say: okay, that fingertip is the same fingertip in this photo, this photo, and this photograph, right?
And so it creates sometimes tens of thousands of points in a point cloud. Then we, as users of the software, can actually update and correct some of the determinations the software has made. You then create a denser point cloud, and at that point, if the points are good, the software will try to connect them: it creates a surface of polygons that has a certain color, a certain texture. From there, it's able to create a three-dimensional model. So that's how I'm familiar with creating three-dimensional models, but there are new technologies out there: LiDAR is a great technology for creating three-dimensional models, and it's a very active area of technological development. I can now use my phone to create a three-dimensional model; actually my iPad is better. The software and hardware are now out there, even at the consumer level, to create some of these models. [Follow-up question:] But my question would be, how difficult is it, if you've got all this material, perhaps to be able to discuss alternative interpretations? Yeah, great question: how could we use this to actually reconstruct archaeological sites from archaeological remains, and then even use this as a format for discussing different interpretations of what a reconstructed site or room might look like? I think that's a great question, and I know that this is the work archaeologists do. There has been some interesting work, just a few isolated examples; classics, especially, is really interested in this. I've seen some Roman sites that have been reconstructed, even as a three-dimensional walkthrough, right, so you can walk through one of those sites. I think that's a perfect application, whether or not it goes into virtual reality in some way or another.
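The core geometric step behind that point cloud, finding where a matched point sits in 3D once each photo's camera pose is known, can be sketched in a few lines. This is a simplified linear (DLT) triangulation with synthetic cameras and a synthetic point, not the actual code of any photogrammetry package:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one matched point from two views.
    P1, P2: 3x4 camera projection matrices; uv1, uv2: pixel coords."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the right singular vector of A with
    # the smallest singular value; divide out the scale to get (X, Y, Z).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two synthetic cameras with identical intrinsics; the second is shifted
# one unit along x, like a sideways step between two overlapping shots.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

point = np.array([0.5, 0.2, 4.0])   # ground-truth 3D point

def project(P, X):
    """Project a 3D point into pixel coordinates with camera P."""
    p = P @ np.append(X, 1)
    return p[:2] / p[2]

recovered = triangulate(P1, P2, project(P1, point), project(P2, point))
print(np.round(recovered, 3))
```

Real software repeats this for every matched feature across many photos, then bundle-adjusts cameras and points together, which is why those two GPUs earn their keep.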
Yeah, I saw a very interesting talk a few years ago here where they were using augmented reality to put artifacts back into a site. And they were actually walking through the space; there was a combination, I think it was a foam mock-up, where the geometry stays the same. Right. Yeah, Sarah. So the question Sarah has was: in addition to just having the three-dimensional videos, the 360 videos, is there something we could do to add some interpretive component, so that you can learn more, look at some of the data, and drill into certain aspects of the archaeology? Yes, that's where it would take developing some kind of application in order to take the raw content and then add annotations or some kind of ability to navigate. It's really interesting to me; I've been thinking a lot, and actually doing some classes, on how to design these applications. Because, again, there are no standards for this, right, except what's maybe being developed in gaming contexts, which are expensive and take a lot of money. So for the kind of application we would want to develop: what is the right way to navigate, to provide some kind of annotation? Since this is mostly about immersive visualization, and in archaeology I think we're used to having data in some kind of context as well, how do we provide enough contextual information while also recognizing the limitations of the technology, and who our intended audiences are as well?
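To make the annotation idea above a little more concrete, here is a minimal sketch of one way such an application could represent clickable hotspots in a 3D scene. Every name here (`Hotspot`, `AnnotatedScene`, the model file) is hypothetical; no existing annotation system is implied.

```python
from dataclasses import dataclass, field

@dataclass
class Hotspot:
    """A clickable point in the scene carrying an interpretive note."""
    position: tuple  # (x, y, z) in scene coordinates
    label: str
    note: str        # annotation shown when the hotspot is selected

@dataclass
class AnnotatedScene:
    model_path: str
    hotspots: list = field(default_factory=list)

    def nearest(self, x: float, y: float, z: float) -> Hotspot:
        """Return the hotspot closest to a clicked scene point."""
        def dist2(h: Hotspot) -> float:
            hx, hy, hz = h.position
            return (hx - x) ** 2 + (hy - y) ** 2 + (hz - z) ** 2
        return min(self.hotspots, key=dist2)

scene = AnnotatedScene("site_model.glb")
scene.hotspots.append(Hotspot((0, 0, 0), "hearth", "Ash lens, Phase II"))
scene.hotspots.append(Hotspot((2, 1, 0), "wall", "Mudbrick, later reuse"))
print(scene.nearest(1.8, 0.9, 0.1).label)  # wall
```

A real viewer would do ray picking against the mesh rather than nearest-point lookup, but the underlying data model, positions mapped to interpretive notes, would look much like this.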
I think a lot of this is being developed quickly, because now you hear about Meta, and about the acquisitions Microsoft is making to compete. One of the discussions I saw is whether we want to really give this over to these companies in terms of establishing standards, or whether this should be more of an open, public process, so it's not just dominated by a couple of players; open standards, you know, along the lines of HTML. Right. Okay, Ruth's hand and then your hand. [Ruth:] Still following on from that, there's the whole question of whether this is a democratization of visualization. Basically, that's what you're saying: it was constructed so that it was easy for a museum to get one. So it seems to me the question we have to ask is: does it always have to be so expensive to make a game that uses video or immersive visualization? Right, right, it doesn't have to be so expensive. Fortunately, one of the things that's happened recently is the development of these social XR spaces: companies trying to create kind of the Zoom killer, some more immersive environment that we could join for work, for socializing, for conferences. And those are forcing the development of some standards, right, because the companies need people to bring their own content into their own conference or event. So we're definitely experimenting with platforms like Mozilla Hubs and Spatial, and Facebook, now Meta, has its own kind of social XR environment. Those have shown some promise. That's one of the things our students are doing: evaluating some of those to see how, again, given that they were developed for a different purpose, they work in a higher ed environment. Yes. Who can use the Arcade?
[Audience question: Is there a system in place for, say, graduate students or faculty associated with the program to use the Arcade, for example for visualizing photographs they take at the microscope level?] Yeah, great, I'm so glad you asked that question, because that is kind of the next thing: how can students, faculty, and others use this resource? Nico and I have talked, just at a high level, about how this could be a community facility where people could reserve time, check it out so to speak, and then use it. So I think the next step is for us to actually develop the appropriate system for people to use the environment. What I'm thinking about doing is maybe holding office hours, where I'll be here for a couple of hours each week. We'll find a time when I can come over and just be around, and let people know, so they can drop in, and I'm also otherwise available, and so is my colleague Rick Jaffe; we can come over when people need us. My email is chris underscore h at berkeley.edu; you can find me on the calendar as well, along with all my information. Yeah, you think something like that would work? Yeah. Thank you for asking that question. We're at 1:04. [Audience comment:] It would be great to link the models to tDAR or OCHRE. Excellent. I am not familiar with OCHRE; is that another repository for this kind of information?
Yeah, it's out of the University of Chicago, I think, and they're doing a lot with creating a kind of multifunctional archaeological database, like these other ones where you can have a field application and then the data is automatically in your database, which you continue to work with in the lab. We've been connecting some OCHRE information from the Amachi project to things like ArcGIS StoryMaps, so that the GIS data and the OCHRE database are linked, because it's all on servers. So it would be cool to have some of these virtual reality models, especially in fixed setups like this, where you just see a slight gesture toward "there's more information here," and you can click on it and get the whole database entry for an artifact. I love that idea. Thank you. Every time I talk to somebody about visualization like this, I learn something, so thank you. That would be exciting. You could have a clickable object that takes you to some of the underlying information as well. Absolutely, I love that. I'd love to hear other ideas like this, or examples of resources or work that people are doing, because there is a lot happening; it's just scattered. There's a lot of essentially three-dimensional content out there now. If you're modeling your kitchen, you can get a 3D model from all the dishwasher companies and pop it into SketchUp; they've really made it a lot easier. That's true, though that's really about public presentation rather than research. Are there any other questions? Yes. You know, there is a lot of interest in VR and augmented reality for education and instruction. In fact, there's so much interesting work going on in the medical fields, where they're actually developing virtual reality and augmented reality applications for teaching people how to, say, do surgery or plan for surgery.
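The "clickable object takes you to the database entry" idea discussed above could be prototyped with nothing more than a lookup table from object identifiers in the scene to record URLs. This sketch is entirely hypothetical; the URL pattern is a placeholder and not tDAR's or OCHRE's real API.

```python
# Hypothetical mapping from object IDs used in the 3D scene to record
# pages in an external archaeological database (placeholder base URL).
RECORD_BASE = "https://example.org/records"

artifact_links = {
    "vessel_017": f"{RECORD_BASE}/vessel_017",
    "bead_042": f"{RECORD_BASE}/bead_042",
}

def link_for(object_id: str) -> str:
    """Return the database URL for a clicked object, or '' if unlinked."""
    return artifact_links.get(object_id, "")

print(link_for("bead_042"))  # https://example.org/records/bead_042
```

In a viewer application, the click handler would simply call `link_for` with the picked object's ID and open or display the resulting URL alongside the model.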
So in the medical fields they've made some incredible advances in using those technologies, even to aid during active surgery. And robotics: there's a really interesting confluence between robotics, VR, AR, and AI technologies that's shaking a lot of things out. So there's a lot of training work going on in the medical fields, and also in factory and manufacturing contexts, both for training and in the actual factory work. Higher ed is a little behind, I would say, maybe not surprisingly, but people are exploring. I'm part of a community group that's talking about these things, and you find interesting examples; there's actually someone using Second Life to teach Japanese. So there's a Japanese island in Second Life still. We hate Second Life now? Understood. Okay, I totally believe it. But there are maybe some other projects doing something interesting. Yeah, this could be used for teaching. John. [Question from John about the resolution of the screens.] Well, these are very high resolution LED screens. They're also 3D TVs, so you can actually use the glasses if you have a 3D application. One of the applications we looked at has a mode that runs in three dimensions as well, so we haven't even talked about that. 3D TVs were kind of a phase, and you can't even buy them anymore; it's a technology that did not catch on, but these are 3D TVs. So we are at 1:10. Let me just see if there are any other questions. I don't see any, so thank you to everybody online, and especially to those who came to see us in person.