Great. Hi everyone, thank you for tuning in to our presentation for CNI 2021. This presentation is entitled Supporting Multidisciplinary International Research through Collaborative Development. My name is Rachel Di Cresce. I am the project librarian on this particular project, and I'm joined by my colleague Jessica Lockhart, who is the project manager. Today we're going to lead you through some of the work that we've done collaboratively. Jessica is going to talk about the scholarly side of the project, and then I will come in and talk about how we developed strategically for this particular initiative and how it fits into the greater infrastructure at the university. So with that I'm going to turn it over to my colleague, and thank you again. Thank you. Next slide, Rachel, please. So I'm speaking on behalf of the Book and the Silk Roads, a two-year internationally collaborative project funded by the Andrew W. Mellon Foundation, with a team directed by co-principal investigators Alexandra Gillespie of the University of Toronto Mississauga, Sian Meikle of the University of Toronto Libraries, and Suzanne Conklin Akbari of the Institute for Advanced Study in Princeton. As Rachel said, I lead our scholarly research projects, and Rachel, my counterpart, is the project librarian who leads a team in the University of Toronto Libraries' Information Technology Services, or ITS, department. Next slide, please. The Book and the Silk Roads is a large-scale project that seeks to tell the story of the book in a new way. Global book history is often represented as a simple triumphalist narrative of technological and societal progress: from the tablet and scroll, to the biblical codex of late antiquity and the Middle Ages, to the early modern printing press of the Gutenberg Bible, to today's digital age. 
By contrast, as our co-PI Alex Gillespie has put it, we are working with a wide network of collaborators to tell many stories of books from multiple regions and periods within a wider, less teleological story of the past. For those interested in medieval history and culture, I will just say that this project is our way of responding to and engaging in what is sometimes called the global turn. As co-PI Suzanne Conklin Akbari has said, many people talk about the need to be global, to be inclusive in the history of the world that we relate. None of us, however, is able to do that work alone. It requires a network of researchers working in many different fields, with specific language skills and knowledge of cultural and material history, working together to share research findings and methodologies. Next slide, please. That's what the Book and the Silk Roads is all about. The Silk Roads of our title describes both the networks of trade and exchange that linked the pre-modern world, connecting Asia to the Mediterranean, and also our own research community. Next slide, please. Unlike the pre-modern Silk Roads, however, our project includes materials not only from Europe and Asia, but also from other parts of the world. We use a case study method and pursue research in five research clusters. Rachel, can you hit the spacebar? Bringing together scholars working on the history of the book within a wide range of cultural environments. And if you can hit the spacebar five times: our clusters include Roman and South Asian codices, Dunhuang bindings from the end of the first millennium, Islamic bindings and their influence on European decoration, 15th-century Ethiopian bookbinding, and the bindings of the earliest Hebrew books in Ottoman Istanbul. Beyond this, the distinctive methodology of the Book and the Silk Roads is to bring together humanities researchers with a number of other types of collaborators. 
Our closest partners, as represented by Sian Meikle and Rachel Di Cresce, are digital librarians. We work together in an iterative way to develop our project methods, with particular emphasis on data management capacity, interoperability, lightweight tool development, and user experience. We also work with scientists, including engineers and computer scientists working on non-destructive analytic technologies such as micro-CT scanning and peptide mass fingerprinting. Conservators are among our most important partners in this work, because they too bring together field-specific research with technical expertise in the materiality of the book. We work with rare book librarians and curators who safeguard and advocate for the books and their care, and we work with local and diaspora community members whose cultural heritage these books most closely represent, to ensure that the research questions being addressed are significant and prioritized according to their needs. When my team are able to connect with each of these stakeholders, our goal is to cultivate momentum in a virtuous cycle whereby a project on one book generates knowledge and new connections that can benefit other projects. Before I turn things over to Rachel, I will briefly illustrate this process with one example, by explaining the way a single one of our research methodologies, our experimentation with micro-CT scanning of book bindings, has unfolded and gained new, unexpected dimensions over time. Next slide, please. One of the earliest discoveries of the BSR project happened thanks to collaboration with Western University's Andrew Nelson of the Department of Anthropology and rare book librarian Deborah Meert-Williston. 
After Nelson and Meert-Williston scanned a rebound 15th-century book in Western's special collections in February, our team discovered that micro-CT scanning can reveal ghost bindings, that is, empty sewing holes and other traces that were left by medieval book bindings that have now been lost. You can see that in the image here. This evidence is ordinarily completely inaccessible to book scholars, and this technique offers an untapped opportunity to solve codicological puzzles and gain new insights into the early histories of medieval manuscripts. Next slide, please. For our research cluster one, on Roman and South Asian codices, we turned to this manuscript in the Thomas Fisher Rare Book Library, now known as Fisher MSS 0106, which was shown to us in June 2019 by rare book librarian Timothy Perry. Fisher 0106 is an example of what we might call a lost book, in that apart from its caretaking by the Fisher librarians, it had no wider presence in the scholarly community. The only thing known about it in the Fisher catalogue was the information on a note pasted inside the cover: "Bhagwat Gita of 600 years old with pictures." The manuscript hadn't even come to the attention of the Sanskrit scholars and South Asianists working at the University of Toronto. We wanted to bring this manuscript back into public knowledge: its contents, its cultural context, and its construction. We pursued our research on two fronts. Next slide, please. With the Fisher librarians' permission, I took digital photos of all the pages of the manuscript, and we reached out to Luther Obrock, assistant professor in South Asian religions at the University of Toronto Mississauga. Professor Obrock immediately identified the script as Sharada script, indicating that the book was likely made in Kashmir. 
With further work, he was then able to identify the contents of the manuscript: not just the holy Hindu scripture, the Bhagavad Gita, but a range of devotional texts devoted to both Vishnu and Shiva, and a poem in honour of the river that runs through Kashmir. The artwork and style of the book further identified it as being from the Mughal period, we think from the 17th century, though possibly later. Next slide, please. Meanwhile, my team was particularly struck by the beautiful fabric which seemed to have been the original covering for the boards, which are made of pasteboard. You can see it here under that extra fabric. We wanted to know whether the beautiful fabric extended around the entirety of the binding structure, even the flap, which is now completely covered with this protective layer. This was a question that seemed solvable with micro-CT. Next slide, please. My team worked with the Fisher and the engineers of Giovanni Grasselli's Geomechanics Group at the University of Toronto to develop a protocol by which we could safely scan the manuscript. My colleague Alice Sharp designed and built a foam cradle, and we were able to scan the manuscript in February and March 2020, just before everything shut down. Next slide, please. The scan created many more questions for us, which I won't get into now, but it did reveal that there were layers of fabric underneath what we could see. Next slide, please. And yeah, you can see that there are additional little layers of fabric kicking around in there. And last October, I was able to get into the library and confirm our findings using a digital microscope. Next slide, please. Which, as you can see, showed traces of coloured threads through cracks in the outer layer of fabric. Textile expert Rosemary Crill of the Victoria and Albert Museum has now identified the fabric as Indian mashru fabric, likely from Gujarat. What has been most exciting about this project for me is the way that it has led to other things. 
On the one hand, our interest in the fabric of this manuscript has led to our organization of an upcoming two-day virtual workshop, with videos discussing the global uses of textiles in bookmaking, with sessions with experts on books from Ethiopia, Armenia, Syria, Dunhuang, and Kashmir. Even more excitingly, our work with micro-CT, and the connections we've made with librarians and scholars of South Asian books through our work on this project, has led to a new challenge that we hope to embark on starting this summer. What you see here are three extremely fragile birch bark manuscripts in Sharada script, of unknown date, from Kashmir or Punjab in northwest India and Pakistan. The one on the left was donated to the Cambridge University Library by the family of an English missionary in the late 19th century. The one on the right was recently discovered in a family attic in Vermont and was donated to Williams College's Chapin Library in Massachusetts. Based on fragments that have broken off the manuscript, we have now succeeded in carbon dating it to the 16th or early 17th century. The one in the middle comes from an archaeological site in Pakistan and was rescued in 2020 from antiquities hunters by an archaeological organization called the Association for the Archaeological Study of Ancient Societies. Over the next few years, we hope to work with collaborators at each of these locations in order to use micro-CT to study these extremely fragile books without opening them. Just today, that is March 8, 2021, we've managed to secure a partnership with Williams and Harvard University to CT scan the Williams manuscript to begin this process. This new project presents tremendous technical, logistical, and scholarly challenges, requiring many different forms of expertise. But ideally, it will allow these books to share their stories once again. I hope this has been an illustration of how our research questions and methods have developed unexpectedly over time. 
I'll now turn things over to Rachel to discuss how her team has managed the challenges presented by our scholarly work. Thank you so much, Jess. Hi, everyone. Again, I'm Rachel Di Cresce, the project librarian for the Book and the Silk Roads project, and I'm responsible for managing the development within the Information Technology Services department in the library: essentially, responding to all those exciting things that Jess has just gone through, the kinds of developments that have been happening over the course of this project. I want to spend this part of the presentation talking about how we've developed these tools and workflows to support the type of work that interests Jessica and our scholarly PIs, what is unique about this particular project from a development point of view, what challenges we face, and where we're hoping to go. Before I jump directly in, I just want to say that all of this is made possible by our development team, so I wanted to acknowledge our developers in particular, Shiba Leo and Imran Askar, who are the ones responsible for building all of these things that we ask them for, as well as our senior application developers, Bilal Khaled and Andy Wagner, who help to advise us with all of their wise experience, give us feedback, and be there for us in general when we have questions, as well as our larger network services team that is responsible for supporting all the things we need in terms of technology and for helping us integrate into the wider technical department at the library. Without all these pieces moving, we would never be able to do the types of work that we're doing right now. 
So for this particular project, we had a defined set of technical deliverables, as you might imagine. We wanted to create a data management tool and workflow that would allow all the data that gets produced by this project to have somewhere to live and to be accessed and used. We wanted a viewer for this type of data. The data is very heterogeneous: it comes from all different places, in different formats, different types of data, but we wanted a way in which we could bring it all together in one space for the user. And we also wanted to work on a bookbinding visualization app, something that could do a 3D representation of a binding. And we wanted all of this to be pluggable and interoperable with other aspects of the department, but also with other standards and frameworks. As for the scholarly project characteristics, you may have noticed, listening to Jessica, the way in which this sort of thing evolves and moves and changes. There were a lot of unknowns when this project started, and it was very much a learning process. You scan a book and all of a sudden you see something interesting: hey, we never thought about that thing, we didn't even know it was there. What else can we do to learn more about that object? That's the way this sort of work unfolds and moves and carries on. And from that, you're going to get a lot of different types of data. There's also the importance of being able to share and collaborate. Jessica named something like seven people, if you were listening, who each had a hand in telling another piece of the story of that one object. So it requires this ability to share and collaborate with people who are all over the world. 
And there's a focus on user experience: we're trying to build things that are easy for a scholar to use, but also sophisticated, giving them something they could never do before, or could never do in the same way. Because of this, and because we had to grapple with the fact that we didn't know everything we were going to find out at the very beginning of the project, or everything we were going to have to accommodate, it was slightly difficult to plan the technical development. We didn't have a very solid, rigid scope, a rigid idea of "we're going to use this and we need it to do this." Sometimes we might actually need some other thing, and it's going to have to do a couple of different things, and we won't know right away; we're going to have to learn as we go, just as the scholars are learning as they go. So we had to think about this in a particular way. The goal, understandably, was not to become experts in every other field of analysis or technique being used on the books, because that would be ridiculous. Rather, we wanted to create an experience that makes the data understandable and cohere in a single place for an everyday user. We focused on that from the outset, rather than trying to know all of the information up front, because we just weren't going to be able to. So we had a few guiding principles for development, which we've been following this entire time. The first, given the nature of the project, was flexible and iterative development, because it became obvious early on that the project was going to move in unexpected ways. 
There was going to be a breadth of materials, a breadth of new learning, all these different scholars becoming involved. Think, for example, about how you would develop a data model to describe all sorts of objects from different regions and different traditions. They're all very different looking. They are all books, but there's a lot about them that is very different, and there are a lot of different scholarly traditions around those objects, so we don't have a one-size-fits-all. Rather than imposing a structure on this project or on these objects, what we were able to do was develop a data model and a back-end infrastructure in a way that could move us forward toward our development goals while also allowing us to change and evolve at all levels of the project as it moved ahead. So this flexibility, this iterative nature of development where we were making small changes, testing, and moving, being much more agile, became important. And one of the things that we did was use a microservices approach. You may have heard of this before; the microservices approach is becoming a little more popular now. It's an approach to architecture that stands in contrast to a monolith, which is what you might be more familiar with, something that happens a lot with library services. I'll use our department as an example: we currently use Islandora in our department for all of our digital collections, and that is very much a monolith. It has a predefined stack of Fedora, Islandora, and Drupal. Something's taking care of the front end, something's taking care of the back end, something's doing the middleware in between. But it's a wholesale thing that's doing everything in one for that particular application, to power that service. 
In a microservices approach, you're instead breaking all those smaller functions apart into their own pieces and creating connections between them, all to power a user interface somewhere else. There are pros and cons to both of these approaches. But we felt, given the nature of the project, that we might need to alter or swap out pieces as we learned more and grew. With microservices, instead of saying, "oh no, now we have to throw out this entire back end and redo it," we could say: the one thing we're going to have to change is our indexing. Could we just swap that piece out, plug in a new one that meets our needs better, fix the connections between everything, and move on? We thought that might be an easier, more agile way for us to move forward, given the situation. So I have this architecture drawing here. This is a drawing of what we have so far as our architecture, and I say "so far" because things can be in flux; things will by nature change and update as stuff happens. I'm going to go through some of the parts of this diagram to make it a little more understandable, but I just wanted to show you, when I talk about all the pieces that are going into this, that it is quite fractured and broken apart, with a very small, defined function that each piece is supposed to be doing, rather than having one piece that's doing a lot of things at once. We've broken it apart to make it more manageable. What we were also able to do, once we did this, is say: well, the department has an upload server that can handle video upload. Why do I need to build a new upload pipeline? Can I just take that little server, plug it into the greater whole, connect it to the things that I'm using, and carry on? 
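As a rough sketch of why that swap-ability matters, here is a tiny, hypothetical example: if the rest of the stack depends only on a small interface contract for the indexing service, the concrete service behind it can be replaced without touching the callers. None of these names or methods are the project's actual code; they're illustrative only.

```python
from typing import Protocol


class IndexService(Protocol):
    """Hypothetical two-method contract that the rest of the stack relies on."""
    def index(self, record_id: str, fields: dict) -> None: ...
    def search(self, term: str) -> list[str]: ...


class InMemoryIndex:
    """A trivial stand-in index; a Solr- or Elasticsearch-backed class
    implementing the same two methods could be swapped in later."""
    def __init__(self) -> None:
        self._docs: dict[str, dict] = {}

    def index(self, record_id: str, fields: dict) -> None:
        self._docs[record_id] = fields

    def search(self, term: str) -> list[str]:
        return [rid for rid, fields in self._docs.items()
                if any(term in str(v) for v in fields.values())]


def describe_pipeline(index: IndexService) -> list[str]:
    # The caller knows only the interface, so swapping the concrete
    # service means fixing one connection, not rewriting this code.
    index.index("fisher-0106", {"script": "Sharada", "region": "Kashmir"})
    return index.search("Kashmir")


print(describe_pipeline(InMemoryIndex()))  # ['fisher-0106']
```

The design point is the one from the talk: each piece does one small, defined job behind a stable connection, so replacing a piece doesn't mean redoing the whole back end.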
I don't actually have to build my own upload server or create that new pipeline completely from the ground up. So this approach allowed us to fit in nicely with other things happening in the department, while also building our own unique pieces where they needed to function specifically for this project. In keeping with the goal of maintaining technical flexibility as the project progressed, we felt that using a graph database, as opposed to a relational database, would be a better fit for our needs. Our scholars are interested in such a wide range of materials, whose physical attributes and scholarship are very different. For example, a Western book, a codex, the thing you might think of as a book, has its own identifiable attributes and a pool of scholarship around it. But it's very different from something like a pothi book, or a scroll, or a concertina book with pages folded like an accordion. All of those are very different-looking objects, they come from different parts of the world, and there are different ways in which to understand each object in its own environment, its own context, very different from how you understand a codex. We wanted to be able to describe and model these objects regardless of those differences. So we needed our database to be flexible and malleable, so that as we started to learn more about different types of objects we could insert them into the database. That's some of the reason why we chose a graph database: it's structured entirely around the data. It's just nodes and edges, rather than tables and how they relate to one another. And the relationships are treated as data themselves; they're no longer fixed in the schema the way they are in a relational database. 
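To make the nodes-and-edges idea concrete, here is a toy in-memory graph. This is a sketch only: the project's actual store is Dgraph, and these node names and relation labels are invented for illustration. The point is that a relationship is just another piece of data, so a new relation type can be added on the fly without a schema migration.

```python
# Nodes carry attributes; edges are plain (source, relation, target) records.
nodes = {
    "fisher-0106": {"type": "manuscript", "script": "Sharada"},
    "mashru":      {"type": "textile", "origin": "Gujarat"},
    "kashmir":     {"type": "region"},
}
edges = [
    ("fisher-0106", "made_in", "kashmir"),
    ("fisher-0106", "covered_with", "mashru"),
]


def related(node_id: str, relation: str) -> list[str]:
    """Follow edges of one relation type from a node."""
    return [dst for src, rel, dst in edges
            if src == node_id and rel == relation]


# A later discovery introduces a brand-new relationship type: no table
# alteration, no migration -- just another edge record. (Hypothetical finding.)
edges.append(("mashru", "woven_in", "kashmir"))

print(related("fisher-0106", "covered_with"))  # ['mashru']
```

In a relational design, that new `woven_in` relationship would typically mean a new join table or column; here it is simply more data, which is the breathing room described above.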
So you can define and redefine relationships as you go. Not that that isn't still work, but we thought it gave us enough breathing room: if we needed to make changes, we could do so without worrying that we would have to redo a lot of work because we'd gotten so far along in our development that the app would break if we made a change in the database. Instead of broken connections, we can go in and rework things much more easily than we could with a traditional relational database. Our second guiding principle is around data management. Again, these are some of the characteristics of the project: heterogeneous data, and a lot of raw and processed data. A micro-CT scan can generate three, three and a half gigs or more of data per scan, just for the scan. And then maybe you want to process some of those images and videos to highlight a specific aspect of that book, but you still have this stack of images and data associated with just that scan, which can be quite large. We want to preserve this data so that people don't have to produce it again, and we want to make sure that we can find it again. We also want to make sure we follow the FAIR principles: findable, accessible, interoperable, reusable data. So that became a very important process for us. I've highlighted some of the pieces here that come into play in our architecture for the data management aspect. 
And I will tell you right here that we have Dgraph and MinIO, and those two pieces, for example, are dealing with our processed images, our processed videos, some of our scientific analysis results, book metadata, binding descriptions, that kind of stuff: the processed material that you want to present to the user, the things you want to show and highlight from the mountain of data you've created, where we zero in on what we think is particularly interesting. Then Dataverse. Dataverse doesn't actually exist within our department, but it is part of the University of Toronto Libraries' Scholars Portal. Scholars Portal runs Dataverse at the university, and it's a fantastic service, if you're not familiar with it, for publishing open data sets. So we want to use that as the place where we would put the raw, large data sets, scientific analysis sets, metadata, images, all that kind of stuff, so it can live there, be referenced, and be findable and reusable. Finally, we have an in-house digital asset management system, and we would want all of the data in the admin interface, the things that are showing up to the user, to be preserved there, while all the raw data lives in Dataverse. Perhaps one day we will have a pipeline which will push all of that back from Dataverse into the DAMS for longer-term preservation in our department. That's yet to be decided, but it gives you an idea of how we wanted things to be spread out while also reflecting back toward each other. So we've got all our bases covered for the data. The final guiding principle is open source and open, interoperable standards. 
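Here is a highly simplified sketch of the data-management split just described: where a given piece of data should live. The rule, the field names, and the asset names are hypothetical illustrations of the division of labour (curated material in the app's own stores, raw bulk data published in Dataverse), not the project's actual pipeline code.

```python
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    size_gb: float
    processed: bool  # curated/derived output vs. raw scanner dump


def destination(asset: Asset) -> str:
    """Illustrative routing rule for the split described in the talk.

    Processed, user-facing material goes to the app's own stores
    (metadata in Dgraph, files in MinIO); raw bulk data is published
    in Dataverse as an open, citable data set.
    """
    if asset.processed:
        return "minio"
    return "dataverse"


scans = [
    Asset("fisher0106_raw_microct", 3.5, processed=False),  # ~3.5 GB raw scan
    Asset("fisher0106_flap_video", 0.2, processed=True),    # curated highlight
]
print([destination(a) for a in scans])  # ['dataverse', 'minio']
```

A real pipeline would of course do more (checksums, metadata registration, the possible Dataverse-to-DAMS push mentioned above), but the routing decision itself is this simple two-way split.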
Because we are doing a project that is funded by the Mellon Foundation, one of the big things we always want to make sure we're doing is making things available open source and making the data as open as we can. Everything that we build that we think could be reused, we will be putting on our GitHub. We want to use open standards and open source software as much as possible and integrate them into what we're doing, and we want open access for our data as much as makes sense. There are situations where that might not be possible, for many different reasons, but we go in with the idea that we would like to do it, and we address each situation as it comes. I will highlight two specific pieces of this infrastructure that have to do with image and video; these pieces are dealing with our IIIF infrastructure. If you're not familiar, the International Image Interoperability Framework is used to standardize the distribution and sharing of images across the world so that they're all open, and now, with the recent update to the standards, it also includes video and audio material. That's very exciting for us, because we have both video and image data, and now we can manage them together. So one of the things that our team did was build a IIIF resolver that works with a GraphQL server over our graph database and calls back to the Loris image server, creating IIIF manifests for the images and the videos. Those manifests allow you to share the material openly, load it into viewers and all sorts of things, and keep it open and shareable, with a URI that you can reference that will always be there. 
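To show the shape of what such a resolver emits, here is a minimal IIIF Presentation 3.0 manifest for a single image, built as a plain dictionary. This is a sketch only: the base URLs and identifiers are invented, and the real resolver assembles this from the graph database rather than from function arguments.

```python
def make_manifest(base: str, ident: str, label: str,
                  image_id: str, width: int, height: int) -> dict:
    """Build a minimal, spec-shaped IIIF Presentation 3.0 manifest:
    one Canvas carrying one painting Annotation of one image."""
    canvas_id = f"{base}/{ident}/canvas/1"
    return {
        "@context": "http://iiif.io/api/presentation/3/context.json",
        "id": f"{base}/{ident}/manifest",
        "type": "Manifest",
        "label": {"en": [label]},  # v3 labels are language maps
        "items": [{
            "id": canvas_id,
            "type": "Canvas",
            "width": width, "height": height,
            "items": [{
                "id": f"{canvas_id}/page/1",
                "type": "AnnotationPage",
                "items": [{
                    "id": f"{canvas_id}/anno/1",
                    "type": "Annotation",
                    "motivation": "painting",
                    "target": canvas_id,
                    "body": {
                        "id": f"{image_id}/full/max/0/default.jpg",
                        "type": "Image",
                        "format": "image/jpeg",
                        # Loris serves IIIF Image API 2, hence ImageService2
                        "service": [{"@id": image_id,
                                     "@type": "ImageService2",
                                     "profile": "level2"}],
                    },
                }],
            }],
        }],
    }


m = make_manifest("https://example.org/iiif", "fisher-0106",
                  "Fisher MSS 0106",
                  "https://example.org/loris/fisher-0106", 4000, 3000)
print(m["type"])  # Manifest
```

JSON of this shape should load in any IIIF v3-capable viewer, such as Mirador; the video case uses the same structure with a time-based Canvas and a body of type Video.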
That was one of the things we were particularly excited about. And because we were in between versions of the specification when we built the resolver, it is compatible with both version 2 and version 3, so we could switch one way or the other depending on how things went before the end of the project; we wanted to be able to go either way. We're very excited that we can do version 3 now, so all of our image and video content will be served out through this specification. In addition, we are implementing annotation features, and we will use the Web Annotation specification to do so, so all of our annotation data will also be open to the public. On the other side, as you can imagine, we're talking about a lot of objects that require description. They need metadata; they need data models to understand them, to search them, to organize them in some way. Again, we took the same approach to this as we did to technology. There are things that exist outside of this department and outside of this institution, created by people who really know what they're talking about when it comes to these kinds of materials. It makes no sense for us to redo that work; we're not going to do it better than them. What would make more sense would be to integrate it, and build upon it if we could, in the systems that we already have. In particular, you may be familiar with the Language of Bindings thesaurus and the Ligatus data model. Together, these give us a descriptive terminology for bindings, particularly Western bindings in this case, as well as a binding description data model created to strategically describe these objects in a way that can then inform our visualization tool. So we're not going to build from the ground up; if we already have this 
terminology, and this idea of how to structure the data in a certain way, we just want to integrate it into the way we're describing these objects. Hopefully in that way we can be more collaborative in our development, and also more useful to the greater community that we hope will find the things we're building useful to their scholarship. We don't want to run parallel; we really want to run together on this, and see if we can integrate what we're doing with what's already going on in the environment around us. The last few things I want to highlight are these three pieces here, which are actually the front end, the things that the user would go to and see. We have an admin UI, where you can upload metadata, images, video, scientific data, all that kind of stuff; a viewing app, where you can see all of that together; and the visualization app, which is to be named (we haven't fully decided on a name yet). Just to give you an idea: every single thing that we've built is built upon open source software and frameworks, and I've listed them here if you're interested. We again wanted to make sure that whatever we're doing, we're following that open standards, open source philosophy as a guiding principle for our development. Everything we're building is built from that; we're not really building things that are special on their own, and we want to feed back into those communities, whatever they are. So I'm going to do a very short demo, not really a demo, just pictures and things like that, to give you an idea of the type of environment that comes out of all the work we're doing: all that thinking around how we're going to develop this, how we're going to structure the back end, how we're going to implement the data modelling, how we're going to manage the data, how we're going to describe things, how we're going to create a good 
user experience. The things that we've been working towards end up looking like this: you have an interface where you can say, I want to describe a binding. This is a very basic one; it's not the whole thing you see going on here, but we've extrapolated out, for example, that every book has a substrate: you have to write on something, right? So, that thing that you're writing on, what is it? And though this is an extrapolation, the intent is to include as many things as we can underneath this binding description. We are using things like the Language of Bindings to inform what we call these things, so the terminology stays consistent with that controlled vocabulary. Also, as we learn more about other traditions, we want to include their vocabulary as well; we don't want to apply this wholesale to things it really isn't meant for. We also have tabs where you can enter basic metadata about the object and upload images, video data, scientific data, that kind of thing. The idea is to keep it all in here as a way in which you can manage your own workflows and your own data, look at all the things, and hone in on the things that are of particular interest to you as a scholar. What we end up with in the viewer is that you can do things like this: here we have a very close-up digitized image of this endband on one side, but then we also have a micro-CT scan of the same thing, and also a microscopic image of the same thing. You're getting all these different pieces that tell the same story, each contributing to the full story of the object, and you can have them all work together. These are all images of the same thing. The same goes here, where you can annotate: you've done this micro-CT scan, and I want to point out in particular 
this these this interesting aspect i can annotate to show that as well um what you could also do which um is because that we have this year we can have one tab with the micro CT scan image or just a regular digitized image but also a micro CT video where you can see sort of going through the binding and sort of a pause and rewind and see it and and sort of that coupled with all that other information you're getting sort of this much more robust understanding of what you're looking at and what the object is something like this where we're taking swabs or samples from a book and never do scientific analysis on it the processed output data of this would be some sort of graph or something that's telling you sort of even particular here is telling you what kind of proteins you're picking up from that glue or that particular piece of the of the binding that you've taken a sample again wanting to put that in as perhaps an annotation over the part of the object you took the sample from or just simply there as an additional information people can know like this is what this material is this is where this this particular thing comes from it's sort of trying to create a space where all of action exists together and the last thing I want to talk about is sort of this binding a visualization of binding structures and that's sort of creating this web application that can generate these visualizations that is being built in conjunction with the information and the expertise that already exists out there so we're trying to incorporate all these aspects the language of binding to the data model Alberto Campagnolo's expertise integration within the rest of our infrastructure that already exists because you know we have a pipeline we have a backend we have all these things that we can kind of plug this in and say now that we have all this data we can view it we can put stuff in we can whatever manipulate it but now we can also visualize it so how trying to sort of take that sort of 
all that back in and adding a new function on top of it through the front end and you're able to do some interesting things we're excited about this application in particular because it's sort of a all the steps towards I mean a new teaching tool or an easier way in which to visualize different types of bindings and potentially all types of bindings right now obviously you can see here this sort of looks like a regular codex you're starting from where we know the best and working from there but we would love to see this sort of expanded out and and sort of you know encapsulate even more types of objects but sort of this is the type of thing now where because we have all that infrastructure laid out we have the way in which we've peace nailed it out in microservices the way in which we've made our data data very malleable the way that we can be very agile in the way we make changes or additions we can do things like this we can and then scrap it and do something else and very quick amount of time comparatively so we're very very excited about this is sort of a new way for us to do development for these types of projects but it's it's showing to be very fruitful and we're learning a lot about the ways in which we can be strategic and how we build and how we change here in the department for future planning so we have sort of other tools we've built in the past that deal with manuscript studies things like this codex which is a collation tool that would work well with this you see like now we have a thing for the collation we have something for the finding they sort of can all work together so the idea would be that these things could kind of work together we want to we're going to move into the large data transfer we're going to put things or we want them to last the long term metadata profiles for those and looking again towards other other fields of study that already know what kind of metadata what kind of things that they want with this data set that makes it 
important for their particular study and integrating that into how we manage those data workflows as well as well as fully integrating what we've done within the within the department and the library so bringing it back to how are we strategically planning not just for this particular project that has a definite end but how can we also contribute to the way in which we're serving our community and the international community in the scholarly capacity and so this in this way we find that we are strategically moving in towards a place where we can be a more agile we can support more users and we can offer new ways in which to do research that may not have been possible before thank you very much I hope that was interesting and I blessed our emails here if you'd like to get in contact with either of us thank you very much and thank you again Jessica for all the information as well thank you
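As a supplement to the talk: a minimal sketch of how a binding-description record of the kind described above might be structured, with a substrate term checked against a controlled vocabulary and with media and annotations attached to one object. All names, vocabulary terms, and URIs here are hypothetical illustrations, not the project's actual schema; the real terminology follows the Language of Bindings controlled vocabulary, which is not reproduced here.

```python
from dataclasses import dataclass, field

# Illustrative vocabulary only; the project itself draws terms from the
# Language of Bindings thesaurus.
SUBSTRATE_TERMS = {"parchment", "paper", "palm-leaf", "birch-bark"}

@dataclass
class Media:
    kind: str   # e.g. "digitized-image", "micro-ct-scan", "micro-ct-video"
    uri: str    # hypothetical location of the file in the repository

@dataclass
class Annotation:
    target_media: int   # index into BindingDescription.media
    region: tuple       # (x, y, width, height) on that image
    note: str           # e.g. "endband sewing visible in CT slice"

@dataclass
class BindingDescription:
    object_id: str
    substrate: str
    media: list = field(default_factory=list)
    annotations: list = field(default_factory=list)

    def __post_init__(self):
        # Keep terminology consistent with the controlled vocabulary.
        if self.substrate not in SUBSTRATE_TERMS:
            raise ValueError(f"unknown substrate term: {self.substrate!r}")

# Usage: one object, two views of the same endband, one annotation
# tying a note to a region of the digitized image.
desc = BindingDescription(object_id="demo-001", substrate="parchment")
desc.media.append(Media("digitized-image", "https://example.org/img/endband.jpg"))
desc.media.append(Media("micro-ct-scan", "https://example.org/ct/endband.tiff"))
desc.annotations.append(Annotation(0, (120, 40, 60, 60), "endband sewing structure"))
```

The design choice the sketch illustrates is the one made in the talk: different media types (digitized image, micro CT, microscopy) are peers attached to a single object record, so annotations and scientific results can all point back into the same description.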