I think it's about time to get started. I'm Cliff Lynch, the director of CNI, and it's my pleasure to welcome you to this project briefing session, which is part of our spring 2020 virtual meeting. The virtual meeting is now at about its midpoint: it started at the very end of March and will run through the end of May, so we have lots more interesting things to come. We'll have a presentation today from Yi Yang, and following his presentation we will take some questions. Diane Goldenberg-Hart from CNI will moderate the Q&A. You have a Q&A button at the bottom of your screen, and you're welcome to queue up questions at any point during the presentation as they occur to you, so please feel free to do that and we'll answer them all at the end. The session deals with some very interesting issues around the application of sophisticated technology to digital preservation. One of the things that has always struck me is how, when we work with digital preservation, we are in some ways potentially able to capture more than we could from the originals, and we run into some very complicated questions about how to factor that into the preservation process. I think we'll see a very interesting angle on that today, so I'm just delighted to have Yi with us from Penn State. He'll be reporting on joint work with a number of colleagues at several institutions, and at this point I will hand it off to him. Thanks for joining us. Okay, thank you, Cliff. Hi, this is Yi Yang. I am an engineering faculty member at Penn State's Abington campus, and my presentation is "Advancements in Digital Preservation: Spectral 3D Reconstruction of Impressionist Oil Paintings."
This project is a collaboration among four universities, or four campuses. We are at Penn State, collaborating with the Penn State Department of Art History, with the New Jersey Institute of Technology, and with the University of Delaware, where two art conservators are providing really critical consulting to us. I'll start with the introduction: why 3D preservation? When we originally thought about the project, we started from a technology that was really developed for biomedical applications. The technology is called optical coherence tomography, or OCT. I think it was first demonstrated in the early 1990s, and it has become very popular for examining the eye's retina. The nice thing about this technology is that it can image both the topography, that is, the surface profile, and the underlayer information, which is really rare among imaging technologies. So we thought: can we apply this technology to artworks? If you can capture a really nice 3D surface profile of an artwork, you can create a 3D model, and once you load that 3D model into today's technology, such as virtual reality and augmented reality, you can have a completely new experience in viewing these artworks. We never foresaw this, but it has become even more relevant for today's education system: if everything is taught online, then having a more interesting, new method of presenting artworks would be something valuable to higher education. The second application we thought about is helping the visually impaired to understand art. Most of these artworks, say impressionist art, have very unique brushstrokes that differ among painters, and how would you convey this information to someone who is
visually impaired and can never see the art? I don't think they are allowed to touch the art either. What we can do is capture the 3D surface profile and then 3D print these artworks, so that someone who is visually impaired can actually touch them and really get a feel for the difference between, for instance, van Gogh's brushstrokes and the pointillism of Seurat. The third application is art conservation: because the OCT technology can capture the underlayer information of artworks, it becomes a very valuable tool for conservation. There are multiple papers on this, and it is one of the areas where we can apply this technology. The last scenario we thought about is preservation itself. The really valuable artworks we have today exist in only one physical copy. We do have 2D images of these artworks, even at extremely high resolution, but the problem is that a 2D image captures only a fraction of what is really embedded in those paintings, and a high-resolution 3D imaging system gives you a lot more data. For instance, a 2D image of a painting may capture, let's say, a few gigabytes of data, and that's a lot of pixels, but with the OCT technology a painting of the same size can easily go to three or four terabytes. That's far more data than a 2D camera image can capture, so we are really thinking about capturing a high-quality digital copy that hedges against the worst-case scenario. Before I go any further: there are other optical methods to capture a 3D surface profile, and I've listed a few of them here, such as laser triangulation, structured light, laser scanners, and time-of-flight sensors. The biggest issue with most of these sensors is that they lack either resolution or field of view, which is the amount of area you can capture at once. Another unique feature of the OCT technology is
that OCT can capture the underlayer information, which none of the other current technologies is capable of. Here I'm going to give you a very short introduction to what OCT is. As you can see, OCT essentially splits a laser beam into two arms: one arm is sent to the sample, and the other is sent to a reference. If we know exactly where this reference is, we know the exact distance from our reference point along that arm; meanwhile, if there's a small change in height on the surface of our sample, we can measure the time difference between the light returning over the known reference distance and over the distance we're trying to measure, and from that time difference we can calculate the surface profile of the sample. That's a very amateur explanation of this technology, but that's the basic idea. Interestingly, the same concept was used to detect gravitational waves: this is essentially a Michelson interferometer. Here we use it to capture images, but if you enlarge it to a couple of miles, it can be used to detect gravitational waves. This is a commercial OCT system, which you can buy off the shelf right now. Most of these systems are currently used only for biomedical applications, and for those applications we are only looking at very small samples, so the field of view of OCT systems is very limited; in most cases it is about one centimeter by one centimeter. That is clearly not the ideal system for imaging the surface of an artwork. We did a preliminary study in 2006; that was our first proof-of-concept system. We were able to capture, for instance, the surface profile of a one-cent coin: here you can see the surface structure image, and here the topography image, and basically we built a 3D model of the surface of the coin.
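To make the reference-arm idea above concrete, here is a minimal editorial sketch, not the project's actual pipeline: real spectral-domain OCT recovers depth from an interference spectrum via a Fourier transform rather than timing pulses directly, and the delay value below is invented purely for illustration.

```python
# Sketch of the geometry described above: a delay measured against the
# known reference arm converts to a surface height. Values are made up.

C = 3.0e8  # speed of light in vacuum, m/s

def delay_to_height(delta_t, refractive_index=1.0):
    """Convert a round-trip time difference (seconds) to a height (meters)."""
    # The beam travels to the surface and back, hence the factor of 2.
    return (C / refractive_index) * delta_t / 2.0

# A ~6.7 femtosecond round-trip delay corresponds to about one micrometer,
# which is the scale of height variation relevant for brushstroke relief.
height_m = delay_to_height(6.67e-15)
```

The micrometer-scale sensitivity implied by this arithmetic is what lets OCT resolve individual brushstroke relief where conventional scanners cannot.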
That was just to prove that this technology can be used to capture surface topography data. Here are a couple more images; they're probably not as intuitive, but this is the intersection between two brushstrokes on a painting. Because we capture everything in 3D, we can do both layer analysis and cross-sectional analysis. We did a cross-sectional cut here, and you can see that the two brushstrokes have a height difference: these are two brushstrokes painted at different times, and one is higher than the other. We also did what we call a layer analysis. We took this painting sample, selected a very small region, about 1.5 centimeters by 1.5 centimeters, and scanned it with OCT. In panels c, d, e, and f we are peeling off layer by layer and looking at the height differences; as we peel through the layers, the colors eventually become relatively close to each other, which means the layers are essentially at the same height. We also built a surface topography, and as you can see, the brushstroke on the left-hand side is much higher than the one on the right-hand side. So this was our first try at building a proof-of-concept system; however, the system had severe limitations. The most important one is the limited field of view, because the system was really designed for imaging the eye's retina; we just put in some small painting samples, and we couldn't scan anything larger than 1.5 by 1.5 centimeters. We also had a huge amount of data to process, and the system was relatively slow. This is our original system: you can see our very small painting samples here, and this is the OCT scanner. What we needed to do was enlarge the scanning area, so we purchased an off-the-shelf robotic scanning system, an x-y-z stage, and with it we increased the maximum scanning size from 1.5 by 1.5 centimeters to two feet by three feet. However, in our experiments we didn't actually scan a painting that large, because scanning a painting of that size would take us almost a week at this point, so we scanned a smaller sample to prove the concept. This is our first sample: we asked a New Jersey-based artist to paint in oil on canvas, and the size of this painting is 10 centimeters by 10 centimeters. For the style, we wanted something that looks like a van Gogh, because we are very interested in the unique brushstrokes of van Gogh's works, so the artist was happy to produce these brushstrokes, trying different colors of paint applied at different time intervals. Once the painting was created, we wanted to scan part of it: the white dotted box here is the part that we actually scanned, and the red box is the field of view of our OCT system. Our strategy is that every time we scan a red box, we move it to another location and scan another red box; we have to move about nine times in the vertical direction and about ten times in the horizontal direction to cover the whole white dotted box.
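The tile plan just described can be sketched as follows. This is a hypothetical illustration: the function name and the region, field-of-view, and overlap sizes are invented, not the actual system parameters, and the deliberate overlap between neighboring tiles is what makes the later stitching step possible.

```python
# Hypothetical sketch of the tiling strategy: step the instrument's field
# of view (the "red box") across the scan region in a grid, advancing by
# slightly less than one field of view so neighboring tiles overlap.
# All sizes are illustrative.

def tile_positions(region_w, region_h, fov, overlap):
    """Return stage positions (x, y) covering a region_w x region_h area."""
    step = fov - overlap              # advance less than one field of view

    def axis(length):
        coords, c = [], 0.0
        while c + fov < length:
            coords.append(c)
            c += step
        coords.append(length - fov)   # final tile flush with the far edge
        return coords

    return [(x, y) for y in axis(region_h) for x in axis(region_w)]

# e.g. a 50 x 45 mm region with a 5 mm field of view and 0.5 mm overlap
positions = tile_positions(50.0, 45.0, fov=5.0, overlap=0.5)
```

Even this toy grid produces over a hundred tiles, which hints at why scan time and data volume scale up so quickly with painting size.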
This approach created some problems of its own, which I'll talk about in a second, but first let me show you the preliminary data. From the OCT system we basically get three different scans. There is the A-scan, which is just a single vertical depth signal; the B-scan, which is essentially the cross-sectional image; and the en face image. The image in panel B shows the cross-sectional view: the top curve is the surface profile, and we can also see the underlayer information. For instance, with older artworks that have multiple layers, we should be able to see that layer information; but because this painting is so new, one layer of paint didn't really have enough time to dry before the next was applied, so we weren't able to see clear layer information yet. Panel C shows the en face image, which is the view from top to bottom; this would be like a typical 2D image, just in grayscale. And of course we can now reconstruct the 3D surface; this is what it looks like for one OCT scan. What happens is that we have to capture many of these surface scans and stitch them together to make a larger image. Here is the surface profile: looking at the original painting, the white dotted box is the area we want to capture, and we use the OCT to capture the areas marked by the red rectangle, one at a time, and then digitally stitch them together. But here's the problem: our scanner is a mechanical system, and any mechanical system introduces error. When we move in these steps, there are overlaps between neighboring tiles, and because of simple mechanical error we cannot really control the exact amount of movement, and therefore the exact size of those overlaps. So we had to come up with software to identify which part was overlapped and digitally trim the images so that we can make a seamless stitch. Coming up with that digital stitching algorithm was actually almost 50 percent of our work. Once we finish the digital stitching, this is our finalized product: here is the stitched picture, and on the right-hand side is what we call the topography image, which represents the actual surface height information in grayscale. As you can see, we still have some artifacts between the vertical scans; we are currently working on removing these artifacts digitally, but that is a much easier problem than removing the overlaps. Now, once we have created this topography image, which contains all the surface height information, we can transform it into something almost like a map. We used a very similar technique to how Google Maps renders topography: a satellite sends lidar pulses to the ground, and depending on how much time it takes the laser signal to return to the satellite, you build up a surface topography of the Earth. That's pretty much exactly what we did; we created a surface topography of the surface of a painting and rendered the same kind of topography image that Google Maps creates for the Earth. In addition to the topography image, we also took a 2D picture of the painting.
This is basically a spectral picture: separate red, green, and blue images. We then projected the color image onto the surface topography to create our 3D spectral digital reconstruction. I do have the 3D model on my computer; we just tried it, and the problem was that Zoom and the 3D engine together take so many resources that it froze my computer, so I can't really do a live demo. But this is what you can do with the digital reconstruction: you can rotate the sample and zoom in and out. It's a very interesting viewing experience, because you can imagine that in the future this 3D model could be sent directly to a virtual reality or augmented reality device, and users could rotate it, look at the details, and zoom in and out. Again, just to reiterate: we started with a painting about 10 centimeters by 10 centimeters, and we chose a region of interest because of the limited time we had. The OCT system we are using is actually borrowed from a biomedical optics group, so they only gave us a very limited time window to use the system for the art application, and we only had enough time to scan the area in the white dotted box. This is our digital sample. Then we thought, well, we can now actually do 3D printing. We took the digital sample we collected and used a 3D printer to print it; as you can see, we printed a couple of samples, since there were some variations we needed to adjust, but this is one of them. It's exactly the same size as the original area on the painting, and it reproduces the surface topography data, so now you can actually touch it.
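The reconstruction step, projecting the registered color picture onto the topography, amounts to building a colored triangle mesh from a height grid. Here is a minimal sketch under the assumption of a regular grid; the function name, grid spacing, and the toy 2x2 data are invented for illustration, and a VR viewer or 3D-printing toolchain would consume a mesh of exactly this shape (vertices, faces, per-vertex colors).

```python
# Illustrative sketch: turn a topography grid plus a registered RGB picture
# into a colored triangle mesh. Data and spacing are made up.

def heightmap_to_mesh(heights, colors, spacing=1.0):
    """heights: rows x cols grid of z values; colors: matching grid of (r, g, b)."""
    rows, cols = len(heights), len(heights[0])
    vertices, vertex_colors = [], []
    for i in range(rows):
        for j in range(cols):
            # x, y from the grid position; z from the measured topography
            vertices.append((j * spacing, i * spacing, heights[i][j]))
            # "project" the registered 2-D picture onto the surface
            vertex_colors.append(colors[i][j])
    faces = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = i * cols + j            # split each grid cell into two triangles
            b, c, d = a + 1, a + cols, a + cols + 1
            faces.append((a, b, d))
            faces.append((a, d, c))
    return vertices, faces, vertex_colors

# Toy 2x2 height grid with a uniform white color
verts, faces, vcols = heightmap_to_mesh(
    [[0.0, 0.1], [0.2, 0.3]],
    [[(255, 255, 255), (255, 255, 255)], [(255, 255, 255), (255, 255, 255)]],
)
```

For 3D printing, only the vertices and faces matter; the per-vertex colors are what the VR/AR viewing experience would use.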
If someone who is visually impaired really wants to know what a van Gogh painting feels like, now we can give that person the sample and they can actually touch it. I think that's pretty much all for my presentation. These are our references, and here are the acknowledgements. We really want to thank Dr. Christine and Brian for their critical support: both of them are art conservators, and since our group is an optical engineering group with no experience in art, they really helped us understand a lot of the concerns and problems. The painting sample was produced by a New Jersey-based artist, Katie Wallace, who did a really good job. And I really want to thank our sponsor, the Crest Foundation; they really took a chance on us, because we had no track record in this area, and without their funding we would never have had this product or this experiment. Part of our funding also comes from the NSF. So thank you so much, and I'll take any questions. Thank you. Thank you, Yi, that was absolutely fascinating, quite an interesting process that you shared with us, and we're really grateful to you for coming to CNI to talk about your project. Amazing implications for preservation and accessibility; just incredible, thank you. I also want to take this opportunity to welcome all of our attendees: you are joining this session as part of CNI's 2020 virtual spring meeting, and we're so glad you could be here; thanks for making time in your day. At this point I'd like to invite you to type your questions into the Q&A box, and Yi will be happy to answer them. If you have any comments, if you are thinking about applications for this technology, or if you're wondering about some of the specifications, this would be a great opportunity to share those with Yi and get some of his firsthand knowledge about this kind of technology. While we're
waiting for those questions to come in, I just want to share with you via the chat that, as I said, this is part of CNI's spring 2020 meeting, which will go on through the end of May, and I've just shared the direct link to our schedule of project briefings yet to come; there are many, many more, so check that out, and we hope you can join us again. I was wondering: what's your estimate of the amount of time per artifact that it would take to create one of these scans? Yeah, so for the area we scanned, I would say it took us about 30 minutes. But if you think about scaling up, that actually takes a lot more time. I think there are a lot of mechanical improvements we can make that would really increase the speed, so speed wouldn't be the biggest concern for us. The biggest concern is data: we are really looking at terabytes of data, and at that scale, processing it in real time, or even post-processing it, takes a ridiculous amount of resources. We are already using discrete graphics cards, GPUs, to process the data, so yes, processing the data is a very resource-intensive problem. I can imagine that, and I'm curious to know: those resources, are they centrally located on your campus, or is that something you have in your department? Where are you getting those resources from? Well, currently we can handle it in our lab; we have a couple of really powerful computers. I think one revolution has been how the price of discrete GPUs has dropped: a couple of years ago those things were very expensive, but nowadays a very powerful GPU with three to five thousand cores can cost two or three thousand dollars, so that's actually a lot more affordable. And for image processing, most of the processing
is actually pretty standard; it's very repetitive, and a GPU is an ideal tool for processing image information. I see, okay. And you're storing the data locally as well, even though that's a massive amount of data? I can imagine it multiplies very quickly. And what level of expertise would be needed to create these kinds of files? Yeah, that's actually another project we're working on: trying to come up with software that simplifies the whole process. The system right now is built entirely in-house, in the lab, and we really need a technician or a grad student who has been trained on the system for at least a year to be able to carry out all the work we want to do. We definitely need to work on software and hardware optimization, but for future collaborations we are thinking about just driving the system to whatever the location is and doing all the scanning for them. Uh-huh, okay, I see; so you don't see this as necessarily being a system that institutions could adopt and deploy themselves. Yeah, it would be very hard, because whenever you have optics, something can go wrong, and you would need a technician just to readjust everything. Optics is a very interesting field that way; it's very hard for optics to be popularized, I guess. Interesting. Well, fascinating as this is, and I have a laundry list of questions I'd love to ask, I don't want to make anyone feel like they don't have an opportunity. I see several folks are still with us, even though we're slightly past time, so if you have any questions for Yi, please feel free to type them in; we also have the ability to unmute attendees. So I think what I'm going to do
now is, with thanks again to Yi and to our attendees, go ahead and close down the public portion of this presentation. I will turn off the recording, and if you'd like to hang around a little bit, Yi and I will still be here; if you'd like to sort of approach the podium and have a one-on-one chat with Yi, or make a comment, please feel free to do that. Thank you again, everybody.