Good afternoon. Come on in, have a seat. We're about to get started. I'm going to lead us all in communal jumping jacks, because this is the only way we will all survive the session together. We may have to do that. I'm Gunter Waibel. I'm the director of the Digitization Program Office at the Smithsonian. I'm joined here by my colleague Vincent Rossi, who's a 3D program officer, and we're here to talk to you about all the beautiful things we have going on at the Smithsonian in terms of digitization. Just in case you're not that familiar with the Smithsonian, or you haven't made it down to the Mall yet: it's a pretty big place. We have 19 museums, nine research centers and the National Zoo. We also have a huge scientific investment, with 500 scientists with boots on the ground in a hundred countries, and the collection measures about 138 million objects. Most of them, about 127 million, are actually scientific specimens in the science realm at the Natural History Museum. Now, when you look at how we provide access to all these collections, it's almost a little bit disappointing, I would say. When you come and visit us on the National Mall, less than 1% of the collections are actually on display. It's actually far, far less. And when you look at what you can access online, about 2% of the collection is digitized, and by digitized in this instance, I mean there is some kind of digital image. But please don't imagine the kind of 3D wizardry we will show you in a minute for all of these collections. So it's a bit disappointing, but it's also a huge opportunity, because clearly we can do better in bringing these collections to a national and international audience, and bringing them to every classroom and living room across the US. So these numbers I just threw out communicate the tremendous complexity and scale we're facing at the Smithsonian. It's a very distributed system. 
The Smithsonian operates a little bit like the federal government and all the states: we have a central administration, and all the museums have their own administrations as well. So we face that kind of complexity, and we face the daunting numbers that we have. And we don't quite have resources commensurate with that kind of challenge in terms of digitization. So the formula we hit upon in order to make progress, and to let everybody see what is possible in this kind of scenario, is prototypes: small-scale investments that allow us to tell a great story about what we can do. And hopefully that will allow us to get the funding to do a lot more, because we've demonstrated to folks that we can do it and that it makes a tremendous difference. And today we're here to talk you through two of those prototypes. One of them is Smithsonian X3D, and that's what Vince will take the lead on. And then I'll talk you through our rapid capture prototypes, which are more about the traditional side, what I'm going to be forced to call the traditional side of digital imaging: digital cameras, digital photography. But you'll also see there's nothing traditional about it, because we're using things like conveyor belts to digitize collections. So stay tuned for that. With that, I'll hand it over to Vince. All right. Thanks, Gunter. So I'm a 3D program officer at the Smithsonian. If you think about the history of human documentation, the way that we've interpreted our world and studied our world, we've used measurement tools, right? And 3D imaging is essentially a new form of measuring. Instead of traditional point-to-point measurement, we might be taking millions or billions of points of measurement to describe an object or an environment. So 3D imaging has largely been developed on the backs of architecture, engineering, and aerospace. All these worlds have been essentially transformed by 3D technology. 
And we think that museums are poised to go through a similar transformation as this technology develops and becomes more democratized. So our project, what we're calling Smithsonian X3D, is our experiment in using 3D imaging at the Smithsonian. The Smithsonian, like Gunter explained, is not one museum: 19 museums, nine research centers. And we had each one of these 19 museums and nine research centers nominate an object or an archaeological site where we could either tell a new story using 3D imaging or solve a problem. So we scanned objects like the 1903 Wright Flyer and the Abraham Lincoln life masks, and collaborated with the Smithsonian Astrophysical Observatory to take 3D scan data of a supernova remnant and deliver that online for people to download and interact with. When we started this project, there were ways to deliver 3D content online, but it wasn't terribly compelling. You could load a model using WebGL in the browser, which displays a 3D model without any plugins, but you'd only be able to rotate the object around, and that gets boring in about, you know, 10 to 15 seconds. So what we did was collaborate with Autodesk, a software company, to create a 3D viewer where we use the 3D models as a storytelling tool, right? So you can jump in and learn about the Abraham Lincoln life masks, and we can pull up additional content as you just click the next button. I'll be demoing that technology in just a bit. And also, importantly, we're making all this data available for download, right? We've all heard about 3D printing, the 3D printing revolution. 3D printers are available in libraries across the country. They're also, you know, showing up in homes. So being able to download that data, to sort of take down the walls of the Smithsonian and start to get access to, you know, some of the collection objects that people can't even see, right? 
All the behind-the-scenes stuff, being able to download that and bring it into your living room, we think has a lot of promise. So the Smithsonian X3D project was made possible with the support of many sponsors, most notably Autodesk and 3D Systems. So this is the team, right? So how does a small team have an impact on such a huge institution like the Smithsonian? Gunter already alluded to the prototypes, right? And that's essentially what Smithsonian X3D was: a series of prototypes where we're using 3D technology to support conservation, education, and public access. We've also produced a series of videos. So if you go to 3d.si.edu, we have, I think right now, 12 videos up, and we will have 18 in total in a few months, where you can learn about how 3D is being used in education, how it is being used to support conservation, and also a number of individual project videos that you can check out. And I'm going to show a quick clip of one of these videos now. [Video clip:] 3D in the museum world, in simplest terms, is measurement. Instead of point-to-point measurement, like you do with a tape measure or with a pair of calipers, we're taking thousands, millions, or billions of measurements that describe the geometry of an object. 3D technology really affords us the opportunity to see an object from all angles, to tell the entire story of an object: front, back, bottom, top. You can create still renders from that 3D model. You can create video renders. You can even take the geometry and replicate it in physical form using 3D printing technologies or other rapid manufacturing techniques. We've scanned things as small as a euglossine bee using micro-CT scanning technology, and we've also scanned entire archaeological sites in Chile and Indonesia. You can also use X-ray telescopes to document objects in deep space, at an incredibly vast cosmic scale. 
Of course, most of the Smithsonian collection lies somewhere in between, and that's what we do most of the time, but 3D technology can be defined very broadly, and what you can capture surprises us all the time. We want to make sure that any technology we bring in is in service of our mission. So what we've done is we've partnered with our curators, with researchers, with educators, with conservators, and we've put that technology at their service and asked: how does this actually further your day-to-day work? And we've created an amazing array of use cases showcasing that in all these areas, 3D makes a tremendous impact and can allow people to do more of the things they're already striving to do. Okay, so I'm going to talk quickly about two of those use cases. First, we're going to start at the American History Museum, with the gunboat Philadelphia. The gunboat Philadelphia was built in 1776 and sunk that same year, and it was remarkably well preserved on the bottom of Lake Champlain. Because of the cold waters, it was pulled up in almost one piece. We see it being pulled up here. It's an enormous vessel. This object represents the beginning of America's Navy. It's about 50 feet long. The American History Museum was actually built around this object. You can see it being hoisted in there. So over time, the walls sort of encroached on this object, right? This is an iconic object, and now you can only see it from two vantage points: from the front, and sort of from the side, from a little catwalk there. So we thought that because of the limited-line-of-sight issues here, we could use 3D technology and let people see this object from all sides. That's something you cannot see in the gallery at the museum. So here's the raw data. We used a laser scanner that was designed to scan entire buildings. So we scanned the entire gunboat. And here we have the model. So now we're able to sort of spin it around. 
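To make the "scanning is measurement" idea concrete: once an object is a point cloud, caliper-style numbers like overall extent fall straight out of the data. This is a toy sketch with synthetic points, not Smithsonian code or the Philadelphia scan.

```python
import numpy as np

# A toy "scan": 100,000 random points on the surface of a sphere of
# radius 0.5 m, standing in for the millions of points a real laser
# scanner would return.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100_000, 3))
pts = 0.5 * pts / np.linalg.norm(pts, axis=1, keepdims=True)

# A tape measure or calipers give one point-to-point distance; from a
# point cloud we can recover the same kind of number from all points at
# once, e.g. the overall extent along each axis (a bounding box).
extent = pts.max(axis=0) - pts.min(axis=0)
print(extent)  # each component close to 1.0 (the sphere's diameter)
```

The same bounding-box arithmetic on a real scan is where a number like "it's about 50 feet long" comes from, except the cloud also carries every dent and plank in between.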
And the curators, any visitor, anyone who goes to 3d.si.edu, can now rotate this model around. So we're solving a public access problem. But at the same time, we're also supporting preservation. We've also done some extremely high-resolution 3D scans of sections of this boat, and we're going to come back every two to three years and re-scan those areas, because it's actually deteriorating. We're concerned that pieces of the boat are actually flaking off, due to a treatment that I think was done in the 1970s. So now we can use 3D scanning technology and deviation analysis, overlay those two scans, and monitor degradation over time. Now I'm going to jump into one of our fieldwork projects. We partnered with the National Geographic Society and the Chilean National Museum, along with the Natural History Museum. They were widening the Pan-American Highway in 2011 and uncovered over 40 complete fossil whale specimens. This is a remarkable find, in an area about the size of two football fields. So with only a few weeks' notice, we were able to respond. They had halted construction for research to be done. But normally, for each one of these fossils, and each one is about 30 feet long, it would take two to three months to properly excavate. In this situation, the paleontologists were actually having to pull these out of the ground every two weeks. So we had five days on site. We slept in the desert. We started 3D scanning. The traditional method of documentation, which hasn't changed for hundreds of years, is drawing a meter-by-meter string line and sketching the approximate location of each fossil in the ground. Using 3D scanning tools, we were able to take that level of documentation much, much further. For this specimen in particular, we probably collected over a billion accurate measurement points that describe the surface of the object. And that's my colleague Adam Metallo. And I'm using a... 
This is a scanner designed for architecture, so I can scan an object anywhere from three all the way out to 70 meters. So I'm getting all that contextual information about how these whales relate to each other spatially. All of the fossils have been saved; none of them were destroyed. But they're encased in plaster jackets and literally tons of rock, so they're not going to be accessible for probably decades. Within five days of being back from the field, we were able to create 3D prints from the data we collected. And most importantly, it's the data itself: with the data itself, we can do things like one-click volume calculation, surface area, very accurate measurements. And a research paper was published with the Royal Society by Dr. Nicholas Pyenson, and they actually made us co-authors, which was really surprising, for the technical guys to be a part of that. So we're supporting research at the Smithsonian, and fundamentally, that's what happens at the Smithsonian: it's a research institution. At the same time, of course, we have exhibits. So while research was going on, we partnered with 3D Systems, and Ping Fu, in the center of this picture, is a vice president at 3D Systems. They helped us create that 3D print, which now hangs on the wall of the Natural History Museum. For those of you who are familiar with 3D printing, this is sort of a big 3D print. A 3D print that's 20 feet long and six feet wide is pretty remarkable. And we did that with the support of 3D Systems again. Here's a quick making-of video. So it wasn't printed all in one piece; there are 40 individual tiles that were fitted together. And then, of course, there was some seaming done, and it was also painted by hand. So that's something that is not computer numerically controlled yet. So I'm going to switch out and do a quick live demo here and show you the 3D viewer. Okay, so here we are at 3d.si.edu. And we're going to go through a tour of the 1903 Wright Flyer that we scanned. And this tour is led by Dr. 
Peter Jakab, who is one of the world's experts on the 1903 Wright Flyer, at the Air and Space Museum. So I'm going to simply click the Next button, and we get a walkthrough. We can zoom into different areas. We can do cross-sections. We can do cutaways. We can show measurements. And we have this text panel that comes up on the right and gives us additional information. So it's almost like the next generation of PowerPoint, right? Only we're using a 3D model as sort of the scaffolding to tell the story. And, of course, we can make use of other digitized content as well, whether that's imagery, video, or audio. Right now, our team creates these tours. What we'd like to do is let anyone create these tours, right? So that, say, a school teacher who's interested in the Wright Flyer or the Abraham Lincoln life masks can create a lesson plan using this tool. That's not something we can do yet, but it's something we see as a next step. Next, I'm going to jump over to an object from the Freer and Sackler Galleries. It's the Cosmological Buddha, and it has low-relief carving all over it. So I'm going to zoom in so you can see some of that. But the stone texture sort of competes with that low relief. And also, the 500-word text panel that's in the gallery with this object does not come close to telling its complete story. The carving references Buddhist scripture, and it describes the Buddhist journey toward enlightenment, starting from the bottom of the robe all the way up to the top, on all sides. And this is pretty much what the object looks like in the gallery. So we're able to do a few things here. Okay, there we go. So I can turn off the photo texture. This is the raw geometry. This is still a faithful representation of the object. And then we can play around with the ambient occlusion maps, and we can pull out a lot more detail. And this actually was really useful for the curator, Dr. 
Keith Wilson. He was able to see things that he wasn't able to see on the actual object itself. Now, again, we're not modifying this data. Ambient occlusion maps essentially work like this: areas of high curvature get darker, and flat, low-curvature areas get lighter. Question? Sorry? Are you capturing the photo texture as well? It depends on the method of capture. In some cases, we've scanned the object using photogrammetry, where there would be texture. But if we only had laser scan data, or we only had CT data, generally there is no texture. There are ways to combine them, and we have combined them in the past, but it's a very manual process. And then I'll show you one more thing. We're able to turn on these hotspot areas, and then we can navigate this object freely on our own, click on these regions, and find out more about them. And this is the tool that the curator used the most. And now, if he wants to share this object, or annotations on it, with colleagues around the world, and he has: he can take a measurement or zoom into an area, and whenever we click the share button, a unique URL is generated. We can copy and paste that into another browser, and whoever opens that URL goes to exactly the same view and sees exactly what you're seeing. Another piece of functionality we built in is embedding: we can embed the 3D viewer the same way we would embed a YouTube video. We can copy and paste this, and anyone can put it in their blog or website. So it's extremely shareable. Yes? Is the viewer available open source? Right now the viewer is not open source; it's jointly owned by the Smithsonian and Autodesk. Let me show you some of what people have already done. So the public reception was pretty good, and we were excited about that. 
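The share button behavior just described — click, get a unique URL, and whoever opens it lands on exactly the same view — is commonly implemented by serializing the camera state into the URL itself. Here's a minimal sketch of that pattern; the function names, state fields, and URL parameter are invented for illustration, not the actual Smithsonian/Autodesk viewer.

```python
import base64
import json

def encode_view(camera_state: dict) -> str:
    """Pack a viewer state into a URL-safe token (illustrative only)."""
    raw = json.dumps(camera_state, sort_keys=True).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

def decode_view(token: str) -> dict:
    """Recover the exact state from a shared token."""
    pad = "=" * (-len(token) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(token + pad))

# Hypothetical camera state: where the camera points, how far out,
# and which annotation is open.
state = {"target": [0.0, 1.2, 0.0], "distance": 3.5, "annotation": 12}
url = "https://example.org/viewer?view=" + encode_view(state)
assert decode_view(url.split("view=")[1]) == state  # same view, anywhere
```

Because the whole state travels in the URL, the same token also works for embedding: an iframe can decode it and reproduce the view with no server-side lookup.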
And here we have, you know, I suspect a teenager, judging by the language that he's using here, but he took the woolly mammoth, he downloaded it, he put it in Cinema 4D, which is a 3D modeling package, and he was creating his own renderings, right? Now, of course, this isn't research, but it's a cool use, right? People doing very unexpected things with the data itself, and that's exciting for us and our team. Let's see. During the 150th anniversary of the Gettysburg Address, a college in Houston took the 3D models of the Abraham Lincoln life masks, printed them out, and put them on display. And this is also very, very exciting for us. Right now we have dozens of models that you can download online; if we had thousands, or tens of thousands, or hundreds of thousands of models online, you can imagine how this could be happening all over the world, with many different museums. So you can download all of our content for free online. Yes? Do you make the models watertight for printing? That's a really good question. We always strive to provide watertight STLs. In some cases that might be really difficult; the gunboat Philadelphia is an example, because it's extremely complex geometry, and we would probably need an expert modeler spending four weeks to get that watertight. But in general, we strive to create watertight models and make them available. Now, while 3D printing is still being democratized, what's also great is that people can access all of this through the 3D viewer with just a computer and an internet connection. I'm going to skip ahead here because I'm running a little bit behind. Adam Savage of MythBusters tweeted out to us; I'm really proud of that. And most recently, we 3D scanned President Barack Obama to create the first 3D-scanned and 3D-printed presidential portrait. 
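"Watertight" has a precise meaning a program can check: the triangle mesh must form a closed surface, which for a well-formed mesh means every edge is shared by exactly two triangles. Here's a minimal sketch of that check; it's an illustration of the concept, not the tooling the Smithsonian team uses.

```python
from collections import Counter

def is_watertight(triangles) -> bool:
    """True if every undirected edge is shared by exactly two triangles,
    the basic closed-surface property a 3D printer relies on."""
    edges = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted(e))] += 1
    return all(count == 2 for count in edges.values())

# A tetrahedron (closed) vs. the same mesh with one face removed (open).
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tet))       # True
print(is_watertight(tet[:-1]))  # False: three boundary edges remain
```

The hard part on something like the Philadelphia isn't running this check but fixing the failures: a raw scan of complex geometry leaves thousands of boundary edges and holes, which is why closing it up can take an expert modeler weeks.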
And here you can see that there's a limit in the resolution of the actual 3D print, but the data that we captured probably cannot be replicated at full resolution for another 10 years. And I'll play a short video. Can people see it? You can totally go see it, at the Smithsonian Castle. Where's it at? At the Smithsonian Castle, so it's right on the Mall. It's not accessioned yet, but it will be accessioned into the Portrait Gallery's permanent collection. [Video:] We're here at the White House, working with the Smithsonian Institution on creating a 3D presidential portrait. And the system that we've brought to be part of this process is called our mobile light stage. It's right over there. And we're setting it up right now so that it can be used to record almost certainly the highest-resolution digital model that's ever been made of a head of state. The inspiration for the project of creating a 3D portrait of President Obama really comes from the Lincoln life masks in our National Portrait Gallery. And I have a Lincoln life mask with me today. They're called life masks because they were taken directly from his likeness; there were two little holes poked where the nostrils were so he could still breathe. And seeing that made us think: what would happen if we could actually do that with a sitting president, using modern-day technologies and tools, to create a similarly authentic experience that connects us to history, connects us to a moment in time, and connects us to a person's likeness? So the process should go relatively quickly. As the President sits down, he will be surrounded in front by 50 custom-built LED lights, eight high-resolution sports photography cameras, and an additional six wider-angle cameras. 
In about one second, as he holds his presidential pose, he will be illuminated by 10 different lighting conditions, which will change the polarization of the light and the directionality of the light, and will give us everything that we need to understand the shape of his face and how it transforms incident illumination into the images that we see of him. Ten years ago, I don't think this could have been done. So here we have a structured-light 3D scanner, and we're using this to scan the president. They're handheld; they flash a fringe pattern of light, and stereo cameras record how that fringe pattern deforms over the geometry, in this case the president's face. The president getting his likeness scanned, as cool as that is, is also about a broader trend, and that is the third industrial revolution: the combination of the digital world and the physical world that is allowing students and entrepreneurs to go from idea to prototype in the blink of an eye. It's been a few days since we 3D scanned the president, and we're looking at some raw data on screen right here. This is the data that came out of the handheld scanners that Adam Metallo and I were using to scan the president. This is the first bust of a head of state created from objective 3D scan data. So this isn't an artistic likeness of the president; it's actually millions upon millions of measurements that create a 3D likeness of the president, which we could then 3D print, making something that's never been done before. And the capture and the creation of the bust were made possible with support from Autodesk and from 3D Systems as well. So at this point I'm going to pass it back to Gunter. Well, how do you follow up on that? This is going to be really tricky. No more presidents now. Just boring stuff, sorry: rapid capture of collections, photography, that kind of stuff, bread and butter. Actually, it does get a little bit more exciting than that. 
So what we've done, in terms of, again, what I'm now forced to call traditional photography of collections, is we've organized one-week prototypes that help us get everybody on the same page and understand what impact we can make if we have a dedicated workforce and well-thought-out workflows, and we put all of those resources to the task of digitization. We also do that in a, quote-unquote, Smithsonian public way: we have people stop by and see those, over 150 staff members stopping by for each one of them. And what these one-week pilots do is really allow us to work through all the issues that are standing in the way of scaling up digitization to the kind of level a lot of you are used to in a library setting, where you have specialized equipment and very high throughput rates, when you're digitizing book collections in particular. And what we've been able to do so far is, when we're done, we're done: when we're done with the capture, we also have the materials managed, and all the data automatically flows into our digital asset management system at the Smithsonian. This is an example from a rapid capture pilot capturing glass plate negatives in the Smithsonian Gardens collections. When we are done, we have all the digital images synced up with our collections information system. So that happens automatically, behind the scenes. This is an example from our rapid capture pilot at the Freer and Sackler Galleries, capturing ceramics. So we're also doing photography of three-dimensional objects very, very fast now. When we're done, the objects are online. Here's an example; who can guess who that is? It is James Brown, now publicly viewable on the Smithsonian Collections Search Center. This is the pilot from African American History and Culture. 
And when we're done, the data can also flow, if appropriate, automatically to the Smithsonian Transcription Center, which is a fairly new website we have where we can engage digital volunteers in transcribing information. For example, here, historic paper currency that was printed before the Civil War, which has a lot of information on it: which municipality printed this currency? Who signed off on it? What denomination is it? And so on and so forth. And it's all information we currently don't have in our collections information systems. So all of this now happens almost instantaneously. It used to take us, you know, weeks and months and sometimes years to finish up these projects: somebody would do the digitization, somebody would upload it into the asset management system, and eventually, somehow, it would get into the information systems. We now do that instantaneously; when stuff gets captured, it automatically flows into all these systems. And the reason we can do that is that we've thought very hard about both the physical workflow and the digital workflow. I'm not going to go into the detail here, but what you see in the red arrows is the physical workflow, which is highly dependent on barcoding; that's how we can track everything. This is our way of optimizing how things move from the collection storage area to the camera and away again. And then, when you get to the green arrows, you can see how the data flows between our systems, and in many instances it's now completely automated. In some instances it's still a little bit clunky, but we're getting there. And what we could prove with these one-week pilots is that we can do about 40,000 items a year for three-dimensional objects, you know, the ceramics, and we can do well over 100,000 items for things that are simpler to handle, like the photographs or the historic paper currency. And we can go from the shelf to the public in less than 24 hours. 
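The barcode-keyed handoff just described — one capture event automatically feeding the asset management system, the collections record, and, where appropriate, the transcription queue — can be sketched as below. All of the system names and fields here are invented stand-ins for illustration; the real Smithsonian systems are not public APIs.

```python
# Hypothetical stand-ins for the downstream systems; the barcode is the
# single key that ties an image to all of them, so nothing is re-keyed
# by hand after capture.
dam = {}                  # digital asset management system
collections_db = {}       # collections information system
transcription_queue = []  # Transcription Center work queue

def ingest(barcode: str, image_file: str, needs_transcription: bool = False):
    """One capture event fans out to every downstream system at once."""
    dam[barcode] = image_file
    collections_db.setdefault(barcode, {})["image"] = image_file
    if needs_transcription:  # e.g. historic currency with untranscribed text
        transcription_queue.append(barcode)

ingest("NMAH-1861-000042", "NMAH-1861-000042.tif", needs_transcription=True)
assert "NMAH-1861-000042" in dam
assert collections_db["NMAH-1861-000042"]["image"].endswith(".tif")
assert transcription_queue == ["NMAH-1861-000042"]
```

The design point is that the barcode scan at the camera is the only manual identification step; everything downstream is a lookup on that key, which is what makes shelf-to-public in under 24 hours possible.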
Somebody picks up the item, and because we've integrated the back of the house, stuff moves up to the public really, really fast. In a museum setting, that is unheard of; I've never heard of a museum doing that before. This is also not just about getting the work done. This is also an exercise in winning hearts and minds, and trying to spread the word internally at the Smithsonian about what we can do and how efficiently and cost-effectively we can work. And we're always looking for fun ways to communicate that. The open houses at the rapid capture pilots are one way to do that, where staff can really see it and can ask the folks who are doing the work. And we thought another cool way to communicate it would be to write a comic book. So we did. You can download this; it's online. It's a little comic book that talks, in a fun way, about how these rapid capture pilots work and all the things that go into making them happen. So check that out. It's on the dpo.si.edu website; go to the blog and you'll find it there. But we're now moving beyond the prototypes. We also thought, okay, doing something for one week is one thing; can we do it for eight weeks? Because we're trying to work our way up to, of course, doing these things year-round. So we had an eight-week pilot at the National Museum of Natural History, in the entomology department, capturing bumblebees. And we managed to do an entire collection of 45,000 bumblebees in eight weeks, which is unheard of, a tremendous speed. Because you have to think about what happens when you want to digitize one of those bumblebees: it has to get unpinned, the label that sits below the bumblebee, which does not exist in any database, needs to be pulled off the pin and laid next to the bee, just like you see here, and that package then needs to go to the digitization station. 
At the peak, we had seven individuals unpinning and re-pinning those labels and prepping each individual bee to go to the digitization station. Here's Secretary Wayne G. Clough getting a tour of that pilot, and getting detailed instructions on how to do that unpinning and re-pinning, which takes some expertise. So we got a good bit of institutional traction here, also with senior leadership, who were very impressed with the speeds at which we could move. But we can do even better than that. At the Numismatics Collection, we did a pilot project just with people moving this historic paper currency, and we showed that we can do about 120,000 items a year that way. But we've got 260,000 items in that collection, and we wanted to move even faster. Historically, that collection was scanned using a flatbed scanner, and if you do the math on that, it would take about 20 years, even if somebody worked 24 hours a day. So that seems kind of unacceptable. Then, when we did the rapid capture pilot, we hit throughput speeds that allowed us to extrapolate that we could finish in about two years, at a cost of $7 per item. That seems a lot more acceptable. But we can do even better than that. And this, by the way, is our Under Secretary for History, Art, and Culture; we pressed him into service here, so we had an expert crew working this project. Even better than that: we've now brought in a conveyor belt that was used in the Netherlands to digitize the natural history collections at Naturalis, and we are using it to digitize this collection. So now we will be finished with a collection of 260,000 items in three to four months, and it will cost us a little less than a dollar an item. That's an incredible increase in throughput as well as cost-effectiveness, and we figured all of that out in the time span of 10 months. And this is just a little sneak peek. So this is running right now on the National Mall at American History. 
This is where the currency comes off the conveyor belt, so this video is sort of running backwards. You can see the operator looking at the screen. There's automatic quality control that happens: every single image gets checked against the FADGI guidelines, because, as you can see, they're capturing a target here alongside the currency. The camera shoots every time it senses that there's actually an object there; if there's no object there, because the operator on the other end maybe lost their stride, no image gets taken. So it's an incredibly sophisticated system. The conveyor moves every time somebody on the other end actually takes an item off, so it doesn't move autonomously; at this point it advances, and digitizes a field of view, every five seconds. This system is single-sided capture, yes. So you'd have to run it through again to get the other side. But there's nothing of scientific merit on the other side of these objects; we always check that with the curators. We also make sure that the resolution is appropriate. We look at things under a microscope to determine the smallest level of detail we need to capture, in this instance the finest line, and then we make sure we pick the appropriate resolution. This system runs at 700 ppi. We managed to jam it into the library space, in the front room of the Numismatics Collection. How big is it? I actually don't know off the top of my head. I think it's probably from here to the door, and it's very, very narrow; it's not even as wide as the front row of chairs here. But it's really jammed in there. I wouldn't recommend doing it like that, but we didn't have much of a choice, and it's amazing it fits in such a small space. Because what is on the other side there is the vault: there's a locked, very high-security vault where all this material lives. We didn't want to carry it down the hallways. They had to shorten that conveyor belt in order to jam it in there. 
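The throughput numbers Gunter quotes can be sanity-checked with simple arithmetic, taking the figures from the talk itself: 260,000 items, about 120,000 items a year in the rapid capture pilot, and one capture every five seconds on the conveyor. The eight-hour shooting day below is an assumption for illustration.

```python
ITEMS = 260_000  # the historic paper currency collection

# Rapid capture pilot: ~120,000 items/year -> just over two years,
# matching the "about two years" estimate from the talk.
years_rapid = ITEMS / 120_000

# Conveyor belt: one field of view every 5 seconds -> 720 items/hour.
per_hour = 3600 / 5
working_days = ITEMS / (per_hour * 8)  # assuming 8-hour shooting days

print(round(years_rapid, 2))   # ~2.17 years on the pilot workflow
print(round(working_days))     # ~45 days of pure capture on the belt;
                               # with handling and QC overhead, that
                               # lands in the 3-4 calendar month range
```

The same back-of-the-envelope style explains the cost drop: roughly the same staffing over a tenth of the calendar time is how $7 an item becomes a little under a dollar.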
What happens at the end, in this Charlie Chaplin kind of scene where the stuff comes off? So remember that person right there: when he takes an item off, the conveyor belt knows he's taking it off, because there's a sensor, and that's when it moves. Only then. If he didn't take that item off, the conveyor would sit there. So it does not move autonomously; it only moves when something comes off, and nothing ever falls off, because of all those sensors in the conveyor. These items currently do not have any records at all in the collection information system. So once the images get taken, each one gets attached to a newly created record, and that's the first time these items have had a unique record. Then they get uploaded to the Transcription Center, and the unique metadata for each of the items gets created by the digital volunteers. So staff don't have to churn through all of that; the uploading into the Transcription Center and into our asset management system is all automated. As soon as the item comes off the conveyor belt, for all intents and purposes, all of that's already done. There's nothing left to do. We haven't done that yet. There's another type of technology we could use for that, which this particular vendor has, that looks different; that's the carousel. But for this, again, we didn't need it. Okay, one more question: I assume you're digitally registering the object so that it sits square in the frame? Yes, so what happens is it gets shot the way it sits there, and then it automatically gets deskewed and cropped. That's all part of the software, so again, that's not a manual process. Do you have any problem if it's not totally flat, if there's any buckling? So that's also part of the testing we do; because nothing is ever perfectly flat, we make sure that we have enough depth of field.
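The automatic deskew-and-crop step can be illustrated with a small sketch: estimate a note's skew angle from the principal axis of its foreground pixels, then crop to the content. The production system's actual algorithm isn't described in the talk; this is only one plausible approach, run here on a synthetic image:

```python
import numpy as np

def make_tilted_rect(h=400, w=400, rect=(260, 120), angle_deg=5.0):
    """Synthetic binary image containing one rectangle tilted by angle_deg."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h / 2, w / 2
    t = np.deg2rad(angle_deg)
    u = (xx - cx) * np.cos(t) + (yy - cy) * np.sin(t)   # long-axis coordinate
    v = -(xx - cx) * np.sin(t) + (yy - cy) * np.cos(t)  # short-axis coordinate
    return (np.abs(u) < rect[0] / 2) & (np.abs(v) < rect[1] / 2)

def estimate_skew_deg(mask):
    """Skew angle of the dominant axis, from the covariance eigenvectors."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.stack([xs, ys]))
    vals, vecs = np.linalg.eigh(cov)
    major = vecs[:, np.argmax(vals)]     # eigenvector of the largest variance
    if major[0] < 0:                     # fix the eigenvector sign ambiguity
        major = -major
    return np.degrees(np.arctan2(major[1], major[0]))

def crop_to_content(mask):
    """Bounding-box crop around the foreground pixels."""
    ys, xs = np.nonzero(mask)
    return mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

img = make_tilted_rect(angle_deg=5.0)
print(round(estimate_skew_deg(img), 1))  # close to the true 5.0 degrees
```

In a real pipeline you would rotate the image by the negative of the estimated angle before cropping; the color/resolution target alongside each note gives an additional reference for registration.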
But for this specific collection it's not a huge issue, because for all intents and purposes it is as flat as it gets. If we did the bees with this system, for example, which we could in theory, it's a very different game, because you need a lot more depth of field, and the camera would be set up specifically for that. In fact, each setup for the conveyor belt has a custom lens configuration, specifically created for the kind of depth of field and the kind of resolution the requirements call for. So all the projects I've talked to you about so far are quote-unquote simple, in the sense that they each target one specific type of collection object, whether it's a bee or this historic paper currency; it's all the same stuff, and we do the same thing over and over and over again. So here's what happens when you try to digitize an entire museum in one fell swoop. That's what the Cooper Hewitt had asked us to do. It's a collection up in New York, it's part of the Smithsonian, and it is a very, very varied collection, a design collection. I've pulled out some of the different types of materials you'd encounter: anything from flat objects like prints, to chairs, to vases. Anything you might imagine encountering in a design collection is there.
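The depth-of-field constraint mentioned above can be estimated with the standard close-up approximation: total DoF is about 2Nc(m+1)/m², where N is the f-number, c the circle of confusion, and m the magnification. The numbers below are illustrative assumptions, not the Smithsonian's settings, but they show why flat currency is forgiving and a pinned bee is not:

```python
# Approximate total depth of field for close-up photography:
#   DoF ≈ 2 * N * c * (m + 1) / m**2
# N: f-number, c: circle of confusion (mm), m: magnification.
# All numbers below are illustrative assumptions.

def close_up_dof_mm(f_number: float, coc_mm: float, magnification: float) -> float:
    m = magnification
    return 2 * f_number * coc_mm * (m + 1) / m**2

# Flat currency at modest magnification: generous depth of field.
print(round(close_up_dof_mm(8, 0.03, 0.25), 1))  # roughly 9.6 mm

# A bee near 1:1 magnification: under a millimeter to work with.
print(round(close_up_dof_mm(8, 0.03, 1.0), 2))   # roughly 0.96 mm
```

That millimeter-scale budget at high magnification is why a three-dimensional specimen needs its own lens configuration, or focus stacking, rather than the currency setup.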
So we're starting to digitize that collection in its entirety. We're projecting it'll take us probably 18 to 24 months, and the conveyor belt will go up to New York and do part of the job as well. We have five different pipelines running, so we have basically five different setups that the different types of collection objects get routed to and shot on. The cost might go up a little from the $7 per object I projected here, but it's in that vicinity. I think it's literally us putting all the knowledge we've acquired in the pilot projects to use, putting it all together now for one big project that tries to wrap its arms around an entire museum. So watch us fail! This has obviously just started, so I can't tell you whether we'll succeed or not, but this is the plan. Last but not least, I want to circle back to the 3D story Vince told you. For 2D, for traditional photography, we're in pretty good shape with scaling up, because it's a known art, quote-unquote: you all have been doing it for a long time, museums have started to do it, so there's a lot of expertise and knowledge there. In the 3D world it's a completely new game, and we're just starting to figure out how to move really fast for 3D as well. We were challenged to do that by this guy. This is the Nation's T. rex, or a representation thereof, which came to the Smithsonian earlier this year. It didn't come as a fully assembled T. rex, of course; it came bone by bone by bone. It's over 200 bones, and we 3D scanned every single one of those bones in a public setting, so the public could come in and watch. That really required a deep look at our workflows, at how to move as fast as possible and as safely as possible for the specimen. And the reason we're doing this is that the 3D data will help the curators decide how the T. rex will be posed, because they can experiment with putting the bones together, which
you can't do with physical bones, because, let me tell you, the pelvis of a T. rex is mighty heavy; you need about five people to move it. So you don't want to play with the actual bones if you can do it digitally; that's vastly preferable. Eventually the T. rex will come to a 3D printer near you. We haven't launched it online yet, but we will, and it will be accessible, just like all our other objects, for educational, personal, non-commercial use, and we'll make the underlying data available there. Now, there's also a conveyor belt system for 3D digitization, as you may have heard. It's built by the Fraunhofer Institute in Germany, and Adam and Vince actually traveled to a test of it in Frankfurt and took a closer look. Again, the idea here is: how can we make this a lot faster? And the magic for making it faster isn't necessarily in the capture itself; it's the post-processing that can help us move faster. And that's what we have for you today. All right, thanks for heating up the room!