Hello and thank you for joining us. Today we're going to talk about improving access to hidden collections, a project that we've undertaken here at Washington University. My name is Bill Winston. I'm a GIS and data visualization analyst on the data services team in Washington University's Olin Library. Washington University is located in St. Louis, Missouri, and is home to approximately 17,000 enrolled students and more than 4,000 faculty. The university was founded in 1853 and stands among the world's leaders in teaching, research, patient care, and service to society. We are committed to learning and exploration, to discovery and impact. The project that we'll discuss today involves a collaboration between members of Olin Library's special collections, exhibitions, and data services teams and Robert Morgan, faculty from the Department of Fine Arts. In our briefing, we will introduce John Ezell and give an overview of his work and the materials in our collection that he has shared with us. We'll introduce the photogrammetry process and the specific processing workflow that we use to capture each model and create a digital version for access and preservation. We will also provide a preview of the outcome and share a bit about our plans for the future.

My name is Robert Mark Morgan. I'm a scenic designer and I also teach scenic design here at Washington University in St. Louis in the Performing Arts Department. John Ezell is someone I have kind of grown up as a designer knowing. I was a student at Webster University way back when, in the late '80s and early '90s, and John had been designing there for a number of years even by that point. In fact, there was one production of Hay Fever by Noël Coward that I served on backstage as a production assistant, and John did the set. In fact, I think you all have the model of that set, and it's beautiful. Every piece of work he ever did was beautiful.
So from afar, I've always been a long admirer, and I'm proud to kind of call myself a colleague. John has done over 350 productions in his career. He's done theater work, New York and regional theater work. He's done some television work. He is well respected and well known as not just a designer but as an educator. He taught for a number of years at the University of Wisconsin-Madison until about the mid-'80s, and then after that he transitioned over to the University of Missouri-Kansas City, just across the state from us here in St. Louis. And he ran the graduate program in design there at the University of Missouri-Kansas City. When it comes to using these models as a design tool, an instructional tool for students, it's really integral. When you think of every stage set, it is a sculptural piece that is inhabited by a live actor. Every model you'll see of John's, of course, has actors, cutout actors, in the model. It's a sculpture that serves as a playground of sorts for the play. And so for me, in my design work and my teaching of students, they have to understand three dimensions from the beginning. I recruit a lot of architecture students, for example, from the Sam Fox School of Design and Visual Arts to take my class and essentially think about architecture for the stage. Think about a space where anything can happen. Rooms can float, there can be water on stage, and it frees them up a little bit to be more, I guess, creative within the confines of that stage. That's the beauty, I think, of theater: we walk into a space to witness magic, to witness something on stage that cannot happen, not always, but cannot happen in real life. So John's models are really important instructional tools for showing not just how he sculpturally approached that play, and we'll show an example of Death of a Salesman in a moment, but how it is lit.
Students immediately, and I think it's wonderful to see, get out their cell phone lights and begin lighting John's models, because a model doesn't really take life until it's lit, until there's lighting. So when it comes to your digitization of these models, for my students it becomes a really valuable tool to be able to look at something at a moment's notice. At the moment, these are not books; these models, 250 of them, take up a great deal of space. And so it's not like they can go to the library and pull a book from the shelf and open it up and get the information they need, or even necessarily click on a picture. What they're doing here is clicking on a three-dimensional model that they can look at from all sides. When it comes to a model like Death of a Salesman, two-thirds of the audience is seeing it from the sides. So that's important for them to understand: I'm designing not just for the good seats, the center seats; I need as a designer to make sure that the people on the sides get the same show. And so for me to use these as an example for them, in this case of a thrust theater design that's well done by a master of the craft, is really great. I can send them a link and say, hey, I noticed you were working on a show that maybe is in a thrust environment; you might want to look at John's model for Death of a Salesman as simply a reference point, an ideal they should perhaps want to reach. So it becomes a really useful, quick, 21st-century way for them to look at a model without having to physically go anywhere.

So I'm Skye Lacerte. I am the curator of the D.B. Dowd Modern Graphic History Library. I've been here for 15 years working in special collections. Special collections basically is a primary source research facility that includes materials dating from ancient times through the 21st century. Special collections has five units. They include the D.B.
Dowd Modern Graphic History Library, modern literature, rare books, university archives, local history, and the film and media archive. So John Ezell has told me that he feels a really strong connection to Washington University and really felt strongly about having his collection and his life's work be preserved here and made available to the students. Currently, the collection is housed at West Campus, which is about a mile and a half to two miles west of the main campus here. So there is a barrier to access, because students and researchers have to travel to West Campus to see it. And we also have a hard time serving the materials, because they are fragile, they are large objects, and you can't bring out as many as you'd like to see all the time. Another obstacle is that people don't know what they want to see. So in order to serve the materials, we can have them look at the scanned objects and then narrow down what they want to see if they want to come in and look at the real objects.

For this project, we're using a process called photogrammetry to convert our 2D photographs into a 3D model. An object is photographed from multiple angles, and those photographs are then stitched together to create a 3D model. The principle in use here is called motion parallax: the apparent displacement of an object against its background as the viewpoint changes. The software uses this to estimate the position of points that appear in common across the photographs. Those common points are then stitched together to produce a digital model that we can use to represent the original object. In order to perform the photogrammetry process, we use software called Agisoft Metashape. First of all, we bring in the set of photos that we've captured with our camera and load those into the Metashape software. Here's just an example of one of the oblique views of that stage scene. The software then examines each one of the photos and identifies tie points.
These are points it can find in common across multiple photographs, so that it can treat each one as a fixed position and use it to model the 3D object that it's trying to build. I'm going to close this photo and show you the sparse cloud that is initially created by the software. This is done with a cursory examination of the photos and the tie points that have been located on each one of the photographs. During the modeling process, the software identifies the camera position that each of the photos was taken from. In this case, our model is spun around, and although it appears that the camera is rotating around the stage, in fact the model is the thing being rotated. Here we generated four rows of photos at four different heights, capturing photographs completely surrounding the model. As I said, the software processes a relatively sparse cloud (I'm going to turn these photos back off), identifies the camera locations, and then comes back for another round of more aggressive processing. In this case, it builds a dense point cloud. It looks almost solid, but it's made up of a bunch of points that are just tightly packed together. The coloring of each one of these points has come from the individual photographs: as the software extracts a tie point, it also extracts the color from the images. This generates a point cloud, but now, in order to turn this into a digital model, we need to connect those points together. We do this by building a mesh. I'll show you the wireframe view now. What the software has done, and I'm going to zoom in very closely here, is take all of the points and join neighboring points together into a triangulated irregular network (TIN). This basically builds a set of triangular facets with nodes, edges, and faces. This is a common methodology for representing a surface with changing elevation.
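To make the sparse-cloud step above concrete, here is a minimal Python sketch of the underlying geometry: locating one tie point in 3D from two camera views. This is an illustration of the principle only, not Metashape's actual algorithm; the camera positions, viewing directions, and the midpoint method used here are all simplifying assumptions.

```python
# Sketch of tie-point triangulation: each camera contributes a ray (its
# position plus a viewing direction toward the feature it saw); the tie
# point is estimated as the midpoint of closest approach of the two rays.

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def add(a, b): return [a[i] + b[i] for i in range(3)]
def scale(a, s): return [a[i] * s for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))

def triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach between rays p1 + t*d1 and p2 + s*d2."""
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only if the rays are parallel
    t = (b * e - c * d) / denom    # parameter along ray 1
    s = (a * e - b * d) / denom    # parameter along ray 2
    q1 = add(p1, scale(d1, t))     # closest point on ray 1
    q2 = add(p2, scale(d2, s))     # closest point on ray 2
    return scale(add(q1, q2), 0.5)

# Two cameras at different positions, both looking at the point (0, 0, 5):
cam1, dir1 = [-1.0, 0.0, 0.0], [1.0, 0.0, 5.0]
cam2, dir2 = [1.0, 0.0, 0.0], [-1.0, 0.0, 5.0]
print(triangulate(cam1, dir1, cam2, dir2))  # → [0.0, 0.0, 5.0]
```

In practice the software solves this jointly for thousands of tie points, and for the camera positions themselves, but the intuition is the same: each photo contributes a ray, and the rays of a shared feature converge on one 3D position.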
As we have developed this mesh, we're now able to convert it into the final digital model. We do that by applying a texture to the mesh, extracted from the photographs, which provides more natural color for the final digital product. This is an interim model that isn't as detailed as some of our later models have been, but it points out some of the imperfections that can arise in this process. Areas that are very thin, for instance the figurines that we see on the front of the stage here: these are basically flat cutouts that have been stood up on the front of the stage, and the modeling software does not represent them very well without some more aggressive post-processing. Another issue is features that sit underneath other features. These are sometimes not well represented because of the lack of a high number of photographs covering them. As you can see, it's a little bit messy here in the interior of the model, particularly as we work backwards into the stage. The other thing I'll point out is this overhanging area here on the stage and the backdrop, where we can see some missing points. Those are made of a translucent material, and capturing that type of material through photogrammetry is sometimes difficult and requires additional processing in order to adequately represent those types of features. But at this point we're done with the processing in the Metashape tool, and we're set to export this as a final three-dimensional model that we can then post to a website or another platform for sharing.

I'm Jennifer Moore. I'm the head of data services at Washington University in St. Louis, and I'm one of the organizers of the Community Standards for 3D Data Preservation, or CS3DP. So WashU cares about this collection because the John Ezell collection is an important one that the libraries hold. It's one-of-a-kind. It's really cool, and people are always impressed when they get the opportunity to see it.
The problem is it's also fragile, and so the opportunity to see it isn't great. Because it's fragile, we want to be good stewards and make sure that the original object has a long life of its own. But there's also the fact that it's a physical object, and people can't necessarily come and see it anyway, even if they want to. So the idea of creating a digital surrogate is to provide the access that we as stewards want users to have, while also protecting the original object, which is so special. However, creating a digital object is not enough. We have to take intentional measures to make sure that a digital object is preserved long-term. The way that we are approaching preservation now is a lot different from when we started. When we began working with 3D data, we didn't really know how to preserve it, and the existing digital preservation methods had gaps that 3D data didn't fit into well. The work we did with the Community Standards for 3D Data Preservation helped us work with others to figure out what the important pieces are when it comes to preserving 3D data. One of the things that we learned through CS3DP was about preservation intervention points. You can read more about this in the best practices chapter of the CS3DP book. It's an opportunity for the creator to stop at critical moments in the model creation to make sure they're taking action for preservation. One of the first choices we made when we started the Ezell project was to use photogrammetry instead of a 3D scanner. With a 3D model created on a scanner, you have less opportunity for stopping, because a lot of the work happens in an automated way in the software that controls the scanner. With photogrammetry, we had a lot of choices we could make ourselves: about lighting, about cameras, about how many images we want to use, and specifically about how we want to process the model.
And every time we make a choice like that, we can make a choice about preservation. We might take many photos; the actual model that you see in a viewer might not have every bit of the model included, but the choices that we made along the way will make sure that the model could be recreated, because we have all the parts, documentation, and parameters that we used, so that we could do it again. And photogrammetry is very cooperative in letting you make your own choices. Our strategy for long-term storage, long-term access, and preservation in general is threefold. First, we have our access copies, which require no one to download anything. We're building a viewer for our purposes using the Voyager platform that the Smithsonian 3D team developed, and we're going to put lightweight models into it so that anybody can come to the site, interact with a model, and walk away from it. They don't have to do anything else again if they don't want to, but they can still experience the model. Second, in our digital data repository, we're going to put data objects to make sure that copies of the data at different stages are accessible; those might be the images or the point cloud, for example. Those data will also be preserved using Archivematica, a separate tool: we will generate archival packages from our data repository and ingest them into Archivematica to collect PREMIS metadata, the data itself, and any documentation from our curation process, so that we have a relatively thorough chain of custody described in Archivematica. Third, we are keeping copies of these objects on our local servers, which are backed up to a tape robot, and that includes our documentation, both the process information and contextual information. Some people call this paradata. It's the data that acts like a cookbook, so that you could recreate the project.
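As an illustration of one building block of that preservation strategy, here is a small Python sketch of fixity checking: recording a checksum for every file in an archival package so that later copies can be verified bit-for-bit. This is not Archivematica itself; the file names and package layout are hypothetical, loosely modeled on a BagIt-style manifest.

```python
# Illustrative fixity sketch: hash every file in a package into a manifest,
# then re-hash later to detect any change (bit rot, truncation, tampering).

import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest, the fixity value recorded for each file."""
    return hashlib.sha256(data).hexdigest()

def build_manifest(files: dict) -> dict:
    """Map each path in the package to its checksum."""
    return {path: checksum(data) for path, data in files.items()}

def verify(files: dict, manifest: dict) -> bool:
    """Re-hash every file and compare against the recorded manifest."""
    return all(checksum(data) == manifest.get(path)
               for path, data in files.items())

# Toy package: model data plus the paradata documenting how it was made.
package = {
    "data/model.obj": b"v 0 0 0\n",
    "docs/paradata.txt": b"camera: 4 rows, full 360 coverage\n",
}
manifest = build_manifest(package)
print(verify(package, manifest))            # True: nothing has changed
package["data/model.obj"] = b"v 1 0 0\n"    # simulate bit rot
print(verify(package, manifest))            # False: fixity check fails
```

Tools like Archivematica record this kind of fixity information (along with PREMIS events) automatically at ingest; the point of the sketch is only to show why checksums belong in the archival package alongside the data and documentation.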
I truly believe that the definition of space has changed from when John built this to meaning virtual space, and that in the not too distant future, we will have opportunities for audiences to click and show up, in quotes, in a virtual space to see a production; thousands of people across the world could. The Royal Shakespeare Company did a Midsummer Night's Dream; they just called it Dream. I'll send you a link to the video. I think you did. Did I? Yeah. So they were kind of pioneering a little bit of this idea: what if our show was in this virtual space and our audience was anywhere in the world, and they just had to show up at the right time? It could be 2 a.m. wherever you are, but you could show up and see a piece and be moved by it. That's really exciting for me, and to see these models be digitized is kind of like, well, maybe John's work can live in perpetuity. You know, this Macbeth could happen again. Yeah. In 2025 or something. Anyway.