All right, everybody. I'm Matt Cook, Emerging Technologies Coordinator at OU Libraries. That's my counterpart, Zack Lischer-Katz, who will be talking shortly. I'm going to lay out the agenda and describe where we are in terms of virtual reality applications for pedagogy and research on campus. Here's the agenda, our talk in a nutshell. I'll start by contextualizing the project, telling you where we are right now, as of April 2017. Then Zack will jump in for preservation challenges and research initiatives, and we'll end with a bit of back-and-forth cross-talk, if you will. I'll take away this slide.

Right, so here are some stats, just to dive right in, a state of the union for OU Libraries. And as far as I'm concerned, OU Libraries is OU, full stop, in terms of virtual reality. We started looking at this technology a couple of years ago, so we're pretty well developed, with a couple of research programs and several course integrations. The hardware we have deployed right now is primarily Oculus-based: eight Rift Consumer Version 1 headsets at three campus locations, all networked. We also have three HTC Vive workstations and a ton of Cardboards lying around. The important takeaway from the hardware portion of the slide is that all of these are publicly accessible, which means we're seeing uses, adoptions, and student clubs emerging that we didn't predict, which is great. It's wonderful, organic growth, which is what we wanted. And in terms of sessions, because we gather data at each site where these workstations are deployed, we've had 1,000-plus, not necessarily unique users, but people sitting down in a chair and engaging with a piece of educational software. And that's as of the beginning of the year, so you're looking at stats from last year up through the early part of this year.
Course touchdowns refers to full class periods that took place at one of our three VR workstation locations, one of which is here in the main library. We also try to onboard people to the project with these intro-to-VR workshops; you can consider what I'm doing now the beginning of one. We give them on a weekly basis for the general public, for staff, for faculty; you can walk in off the street. And importantly, even if you're not a technical person by training, which I'm not, you can get the background to get started: the basics of how to interact with the system, what software we have, and where we're going. And then, of course, the big win as far as I'm concerned is these in-depth course integrations, which I'll talk about now. Slide, please.

Here we go. These are the more in-depth, well-developed projects we've done over, I would say, the last year, because prior to that we were just getting our footing. Architecture is the big partner here, particularly interior design. Instead of waiting until they're 30 or 35 years old to construct their first architectural design in real life, undergrads can, at 18 or 19, inhabit, or cohabit with their faculty and student groups, a building of their own design. This is important because what you'll hear from faculty time and time again is that students only see the floor plan of their buildings when they're sketching and doing blueprint work in Revit and the like. Now they can walk through them. This is an example of one such session, although that's not the architecture class. These are shared, networked workstations, which I'll talk about in a minute. And as of two weeks ago, we finished data gathering for an ongoing study with the architecture college.
So they're integrating this regularly, semester by semester, with the regular coursework. The medical imaging example is actually what we're looking at here: they're walking through a carcinoma, a lung carcinoma actually, in VR. What we're trying to represent with this part of the slide are the workflows we're pioneering, at least in terms of course integration: we take DICOM data, which could be MRI or CT scan data, and convert that volumetric data into a surface mesh for deployment in VR. Then we have advanced medical imaging grad-level courses coming in. Scale is instantly manipulable in this landscape; there's no such thing as a limited perspective. You can fly inside, fly around, and look at this actual medical data from any angle.

I'll go on to anthropology next. Right now, this week, there's a required assignment for Anthropology 1114. Students go to the library or to the research campus, where we have these workstations, and compare hominid fossil skulls, taken from a series of different databases, in virtual reality. They export screenshots taken within VR, of the sagittal crest of a Neanderthal, say, and email them back to their professor to prove that they can locate that part of the hominid anatomy.

And then, as a research application with both input and output mechanisms, there's structural biology: visualizing proteins, specifically from the Protein Data Bank. Students, in advance of the class, would email or upload to a Dropbox a .pdb file, which is an interesting file format; we can get into it at the reception. We have workflows, once again as with medical imaging, to convert that into something you can view in virtual reality. And then they were exporting narrated video for presentation to their classmates. Slide.

And then this is even more context, a little bit deeper of a dive.
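To make the .pdb workflow concrete, here is a minimal sketch of its first step: pulling atom coordinates out of a Protein Data Bank file before any mesh conversion. The column offsets follow the published PDB fixed-width format; the parsing function itself is illustrative, not OU's actual pipeline.

```python
def parse_pdb_atoms(pdb_text):
    """Extract (element, x, y, z) tuples from ATOM/HETATM records.

    PDB is a fixed-width format: x, y, z live in columns 31-54
    (8 characters each), the element symbol in columns 77-78.
    """
    atoms = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            x = float(line[30:38])
            y = float(line[38:46])
            z = float(line[46:54])
            # Fall back to the first letter of the atom name if the
            # element column is blank (common in older files).
            element = line[76:78].strip() or line[12:16].strip()[0]
            atoms.append((element, x, y, z))
    return atoms

# A two-atom fragment in the PDB fixed-width layout:
sample = (
    "ATOM      1  N   MET A   1      27.340  24.430   2.614  1.00  9.67           N\n"
    "ATOM      2  CA  MET A   1      26.266  25.413   2.842  1.00 10.38           C\n"
)
print(parse_pdb_atoms(sample))
```

From point lists like this, a downstream tool can build the surface mesh that actually gets loaded into the headset.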
So two years ago, I went to UALR, the University of Arkansas at Little Rock, and I saw they had a CAVE system, which was impressive. I also saw they had an Oculus DK2, the second developer kit. We took something like that to our campus and started showing faculty what was possible, and we had a problem, which was that every single faculty member we showed this to wanted their own virtual reality application. So we developed and deployed our own custom virtual reality workstations that allow you to remotely upload your own 3D content through the library website directly to virtual reality; you walk in the door and your content is waiting for you. That's what you're seeing here, the hardware aspect of it: a custom-designed sliding-rail chair assembly with the PC built below and behind the user, for cable management and range-of-motion purposes. And the software, like I mentioned, allows for remote upload of an arbitrary amount of 3D data. So that's our contribution. These stations, once again, are located all over campus, they're publicly accessible, and they're now replete with data from a whole bunch of different classes. So Zack's going to take over.

Thank you very much, Matt. As you can see, we have a lot of great work going on in VR, and I'm going to talk a little bit about the data curation concerns that come up with that, around reproducibility, transparency, integrity, and security. First, I'll just say I'm Zack Lischer-Katz. I'm a CLIR fellow at OU, basically working on dealing with all these great new technologies that Matt is bringing in and thinking through all the curation issues. I'm going to talk about two parts of this: first, the VR platform itself, software and hardware and all that, and then the VR content. And you'll see that as we bring more researchers into using this technology, these curation concerns are becoming increasingly dire.
For the platform, some of these are things we've been dealing with for a while: hardware and software obsolescence, which poses risks to continued access to this material, and, as we move forward, maintaining access to material we've archived. There are also issues of versioning. As Matt's team develops or revises the software, we want to make sure we're keeping track of that, so that if a scholar comes in and visualizes some data in our software, they can go back to the version they used and document it. So issues of reproducibility are a really important part of the documentation there.

We're looking at various preservation approaches: documenting all the changes as we update drivers or bring in new pieces of hardware (we just adopted the Oculus Touch, these new controllers, so we need to document that); thinking about emulation strategies; and monitoring standardization initiatives. For instance, the Khronos Group has this OpenXR initiative. They're calling it XR for augmented reality, virtual reality, and everything else, and they're developing standards so that VR headsets can communicate with software in a standardized way. We're also looking at recording the experience of VR, so that in 10 years, when we ask what it was actually like to use VR in 2017, we'll have recordings of users, or maybe even oral histories, things like that. And there's the old computer-museum approach, where we just save all the hardware and try to keep it running for as long as possible. This poses an interesting problem. I'm actually working with a grad student at NYU who's doing a history of the 1990s VR wave, and in her experience, she hasn't found a lot of software that survived. For instance, the Living Computers Museum in Seattle has 23 VR headsets and 51 pieces of related hardware, but no software packages, and not the computers to run them on. Slide.

So, directions forward.
Matt didn't go into too much detail about this, but basically all the software we're using to visualize 3D models was developed by his team in-house, so we have all the source code. We're keeping track of it in a GitHub account, so we can track it, document it, and encourage scholars to start citing our software releases. I've been looking at using Zenodo, which is a great platform for archiving research data. It has this great function where it will watch your GitHub repository, and every time you do a release it will take that release, archive it, and issue a DOI, which is really nice. So on the tech side we'd have an automated process for that, and then we'd get scholars to actually cite the software they're using when they publish their research. Slide.

VR content: it's a huge universe. 3D models are basically what we're dealing with most of the time on our platform: point cloud data, mesh data, volumetric data. We're even getting started with 360 video, which we haven't really talked about, but the journalism department, for instance, is really excited about it. That can include flat videos that cover 360 degrees, and also binocular, sort of 3D, videos. I talked about software a little bit. We're actually also using software produced by third parties and bought through the Oculus Store, and there are a lot of interesting issues around archiving those things, which are protected by DRM and the like; that's a whole other project. And there's archiving the source code, which I talked about. Slide.

Other things we're thinking about in terms of VR content: sustainable file formats, which we've done a little research on.
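The GitHub-to-Zenodo release archiving described above can be steered with a `.zenodo.json` file at the repository root; Zenodo reads it when it archives a release and mints the DOI. The field names come from Zenodo's deposit metadata; the values below are placeholders, not the actual OU repository's metadata.

```json
{
  "title": "VR 3D-model visualization platform",
  "upload_type": "software",
  "description": "Networked VR workstation software for viewing remotely uploaded 3D content.",
  "creators": [
    {"name": "Developer, Example", "affiliation": "University of Oklahoma Libraries"}
  ],
  "license": "MIT",
  "keywords": ["virtual reality", "3D models", "data curation"]
}
```

With this in place, the DOI that Zenodo issues per release is what scholars can cite to pin down the exact software version they used.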
Metadata is a huge issue, along with all the derivative data that can be produced, and tying this all together to data repositories that can help us preserve and manage all this content. So we've been playing around with the file format COLLADA, or "collada," I don't know how people say it; I keep thinking of piña coladas. It was developed by the Khronos Group, and it was designed for interchange of 3D models between software packages. It's XML-based, and it's open, or open enough. You can put metadata into these nodes here, and you can add your own custom nodes. The most pressing thing for us, in having researchers work with 3D models, was scale data. If you're bringing in a 3D model of a skull, you want information about its scale, so that it's an accurate size representation when you bring it in, and you can do actual measurements in VR. So we're able to put in the unit of measure, this is, I believe, millimeters, and you can also specify the orientation, how the model should be represented in that space. That's really, really helpful, and it's been good so far; we haven't had any problems. I'd welcome feedback on that. Slide.

There's obviously a variety of ways in which you can make 3D content. Right now I'm looking at photogrammetry workflows. We have some researchers who are scanning things, and I'm observing them and asking, okay, what are you doing here? What's the metadata? Then I'm looking at the software they're using to see what kind of technical metadata we can pull out. The big concern here is being able to document, at each stage, the decisions being made, how the models are processed and edited, so that a researcher who gets the 3D model down the road can trust its precision and accuracy. And so this is sort of the big map.
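For reference, the scale metadata just described lives in COLLADA's `<asset>` block: `<unit meter="...">` records how many meters one model unit represents, and `<up_axis>` fixes the orientation. A small sketch of reading that factor with only the standard library (the snippet follows the COLLADA 1.4 schema; the function name is ours):

```python
import xml.etree.ElementTree as ET

# A minimal COLLADA document whose <asset> block carries scale and
# orientation metadata, as in the Khronos COLLADA 1.4 schema.
COLLADA_SNIPPET = """\
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
  <asset>
    <unit meter="0.001" name="millimeter"/>
    <up_axis>Z_UP</up_axis>
  </asset>
</COLLADA>
"""

NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}

def unit_scale_in_meters(collada_xml):
    """Meters per model unit, defaulting to 1.0 as the COLLADA spec does."""
    root = ET.fromstring(collada_xml)
    unit = root.find("c:asset/c:unit", NS)
    return float(unit.get("meter", "1.0")) if unit is not None else 1.0

# A skull digitized in millimeters: one model unit = 0.001 m, so
# measurements taken in VR can be reported at true scale.
print(unit_scale_in_meters(COLLADA_SNIPPET))
```

A viewer that multiplies model coordinates by this factor is what makes an 18 cm skull appear at 18 cm in the headset rather than at some arbitrary size.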
This is the big, high-level map that we're working on. The data repository we're trying to put together has to fit into this larger 3D ecosystem I've been describing, and metadata needs to be tracked throughout. On the far left, we have data collection and content creation, any sort of photogrammetry or laser-scanning workflow; we need to collect metadata through that and bring it in. We bring all of those files right into Dropbox, so we're using cloud-based solutions, and the benefit is that our entire team has access to those files, instantly synchronized across all our platforms. That's actually pretty critical. Matt's going to talk a little bit about our off-site partners at other institutions; this lets us sync our content with them too, and bring it into our VR visualization and analysis platform. In VR, and this is something we're still figuring out, we can actually create annotations and measurements, so we need to figure out ways to manage those as a form of content creation and archive them. The metadata schemas we're developing try to take all of this into account, including things that can be derived, like annotations, across the whole research life cycle. From the Dropbox side you can also go to publishing, or convert models to STLs and do 3D printing. The part we're working on now is the data catalog, which will let us take these files from Dropbox, which is sort of a staging area, and bring them into our archive, which right now is an Amazon Web Services S3 bucket, where we're managing this data catalog. That gives us the flexibility, when we develop our institutional repository's data catalog, to bring everything in very easily. That's the medium-term solution. Short term, we can curate what gets archived and what doesn't.
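As a hedged sketch of what a per-file catalog entry of the kind described might look like: a checksum ties the record to the exact bytes staged in Dropbox or archived in S3, and each processing stage appends a provenance event. The field names here are hypothetical, not the schema OU is actually developing.

```python
import datetime
import hashlib

def make_catalog_record(path, file_bytes, stage, notes):
    """Build a catalog entry for one 3D asset at one pipeline stage.

    The sha256 digest lets us verify the archived copy (e.g. in an S3
    bucket) against what the researcher staged in Dropbox; the
    provenance list is meant to grow as cleanup, VR annotation, and
    derivative steps (STL for printing, etc.) are applied.
    """
    return {
        "path": path,
        "sha256": hashlib.sha256(file_bytes).hexdigest(),
        "provenance": [
            {
                "stage": stage,  # e.g. "photogrammetry-capture"
                "notes": notes,  # the operator's processing decisions
                "recorded": datetime.datetime.now(
                    datetime.timezone.utc
                ).isoformat(),
            }
        ],
    }

record = make_catalog_record(
    "skulls/neanderthal_01.obj", b"fake scan bytes",
    "photogrammetry-capture", "raw scan, 120 photos",
)
print(record["sha256"][:12])
```

The point of the design is that a downstream researcher can audit both the integrity of the bytes and the chain of editing decisions behind the model.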
We're still developing those policies and guidelines. For instance, if students bring in 3D models for a class, we may not want to archive those for the future; that may be just a one-time, one-semester thing, and then we take them off the VR system. But if you have a researcher coming in and modeling things, we may want to keep those. So we're setting up those policies, and there has to be someone curating that. And so this is sort of the map, and we're still working on many of these elements. Current projects: the data repository I just explained to you, tracking metadata for all of that, developing best practices, and figuring out other ways we're going to access these things and make them available. Matt, do you want to talk a little bit about the partnerships we've been having?

Yeah. Our grant application is still at a very early stage here, but the point is that we have partners at these various institutions, strong partners at Arizona. In fact, Arizona has deployed a workstation of our design, at least the software design, which is really interesting, because we support live voice chat as well as remote uploads. That means that right now, today, we could have a faculty member at Arizona, with their own specialties and specialized data sets, guide a class or lecture for eight students simultaneously on the OU campus in virtual reality. So it's basically a VR classroom in the making; it's technically feasible today. There are a couple of things happening, essentially as I'm speaking, that are going to make this more and more scalable, which is interesting. But the infrastructure is in place, the hardware and software are in place, and if people in this room are interested, the code is available.
So if you have access to Oculus hardware, and we're working on a Vive port and a mobile port as well, you can deploy this software on your campus, gain access to everything our students and faculty have uploaded, and then add to that collection, to the point where it becomes this giant cloud-based 3D asset repository, which is something we're going to archive as well.

And then, getting towards the end, this is very, very current thinking. Two things have happened in the very recent past. Number one, I got a really nice new work computer. Thank you, Carl. It's an Alienware machine, it's portable, and it has a 10-series NVIDIA GPU; it's super powerful. The point is that with this processing power you can now be in virtual reality for longer than five minutes. This is going to be a productivity tool in the very near future. What I mean is, we're not just going to be viewing models five minutes at a time and taking a screenshot. We're going to be taking measurements, and that functionality is already in place in our platform: you can shoot a laser out of one hand at one part of a skull, shoot the other laser out of your other hand, and it'll take an arbitrarily detailed, accurate measurement of the space in between. So you can do science in the system. You can also annotate the models in real time and export those annotations. So it's content creation, and then productivity, within the software. And because the latency is so low with these 10-series NVIDIA GPUs, you can do it for as long as a class period lasts, which is very exciting, because until very recently you had this spectrum of users: at the low end, people running to the bathroom after three minutes because they can't handle it; at the high end, a couple of people who can do it all day, mostly young people, like eight-year-olds; and in the middle, the rest of us.
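The two-handed measurement described above comes down to Euclidean distance between the two laser endpoints in model space; a minimal sketch (the function name and scale-factor parameter are ours, not the platform's actual API):

```python
import math

def measure(endpoint_a, endpoint_b, meters_per_unit=1.0):
    """Distance between the two laser-pointer endpoints.

    Points are (x, y, z) in model units; meters_per_unit is the scale
    factor stored with the model (e.g. 0.001 for a millimeter-scaled
    model), so the result comes back in real-world meters.
    """
    return math.dist(endpoint_a, endpoint_b) * meters_per_unit

print(measure((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))         # model units
print(measure((0.0, 0.0, 0.0), (3.0, 4.0, 0.0), 0.001))  # meters
```

This is also why the scale metadata discussed earlier matters: without a trustworthy meters-per-unit factor, an in-VR measurement is just a number in arbitrary units.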
But it's becoming a flatter and flatter curve, which is great, it's excellent. That's why we're well positioned right now. And Zack will continue with this.

Yeah. A couple of the other takeaways from our initial stages of this research have to do with what the library's role, any library's role, in supporting VR could or should be. We've brought up these issues of infrastructure and technology. There's obviously a buy-in that's necessary, though the cost of the actual hardware and software is coming way down. Data storage, I think, is going to be where your cost is if you want to support something like this for scholarly use. On the curatorial side, you also need some data experts, data curators, to make those policies and create workflows so that we can track the metadata through production and through annotation. And then, going back to the research integrity questions, a lot of these become domain-specific needs: archaeology has certain needs, architecture has certain needs. How do we track metadata in those contexts? The partnerships we're forming are really important. Platform sharing, that's what Matt was getting at toward the end: we were able to create this tool, and we make it available. We had a tour going through the other day, and someone from the business school said, oh yeah, this is software they use, and they sell it to people. We don't sell this software; we make it available freely to everyone, and the more people we have using it, the more we benefit as well. So that has been really great. And then developing standards for all the metadata that gets created through the process is, I think, a real need we have. And going back to the first point: we're not just playing around here anymore. We're really using this to support scholarship.
Yeah, I'd like to comment on that further. I was at ALA, no, it was in November, it was LITA, sorry, the LITA Forum; it was in Fort Worth not too long ago. And libraries are still very much in this wow phase, this novelty phase, with virtual reality. We're not much further past that, but like I said, the technology, the hardware and the software, is going to support much broader and deeper application. I think it's important not to be satisfied with just putting a headset on a first-time user and getting the oohs and aahs, which is great, it feels really good, right? But a lot of times that's third-party gaming software. We can do a lot more with our own software, with networked software and built-in functionality that includes productivity and content creation. We've not yet scratched the surface. The wow factor is going to get them in the door, but once you start working with faculty one-on-one, students one-on-one, grad students one-on-one, you'll see that there's going to be long-term, sustained use. We already have faculty using the system semester over semester. They're committed to it for the long term, which is good, because when we started, we had no idea what we were doing, and people were freaking out because they were in the system for too long. But the fact that this is going to be sustainable, I think, is pretty clear now, and the tools are going to get more robust within the system. The backup and archive element is going to be in place by the time Zack finishes the CLIR fellowship. And I think that's towards the end. That's it. So we've saved six minutes for questions, or however long we can get.