Hi everyone, I'm Sheila Rabun. I am the Community and Communications Officer for IIIF, the International Image Interoperability Framework. Before I begin, I usually start by asking how many people have heard of IIIF? OK, and then I'm going to ask another question, which is how many have actually implemented the IIIF specifications at your institution? Cool. Just for my own purposes, really. So thank you for coming to our session today. We're going to be talking about digitized manuscripts and all sorts of great things surrounding that topic. I'm going to start it off just by giving a little intro to the IIIF community. Basically, IIIF is an international, community-based initiative for web-based image delivery. We have participants from all sorts of different institutions: national and state libraries, research institutions, museums, software companies, and really any other groups or individuals who have an interest in digital image delivery and interoperability. And out of this very active community, several community groups with shared interests have organically formed around specific topics. We have, for example, a newspapers group, an audio-visual technical specification group, a group for software developers, one for museums, and, last but not least, a very active manuscripts group. All of these groups are open to anyone who is interested in IIIF and the related topic. You can find more information online at iiif.io/community/groups. The manuscripts group specifically actually started working together around interoperability for medieval studies before IIIF formalized as an organization, and the group was brought under the IIIF banner about a year ago.
And so the manuscripts community around IIIF brings together cultural heritage institutions, scholars, and software developers, all for the purpose of leveraging IIIF both for medieval manuscript studies and for the huge body of handwritten materials that exists across all different periods. This slide shows just a subset of the different institutions that are involved in the IIIF manuscripts group, and they're sharing tens of thousands of manuscripts via IIIF. There are still several projects that are making digitized manuscripts available without using IIIF, based on older paradigms of access and use of individual manuscripts. But IIIF has allowed major collections, especially, to increasingly make their content available in a new way that encourages usage outside of institutional sites: sharing across repositories, which allows scholars to start asking new questions, or to look at longstanding questions in a new way that they hadn't been able to before. So IIIF for manuscripts really starts to lower the barrier for scholars and for anyone with an interest in these materials. And now I'm going to turn it over to Ben. Thanks, Sheila. Hi, I'm Ben Albritton, curator for Paleography and Digital Medieval Materials at Stanford. I'd like to preface what I'm going to say with the fact that all of the technologies I'm showing are completely interchangeable. There are a number of IIIF viewers out there; I'll be showing one, Mirador, but the Universal Viewer, the viewer that's been developed at Johns Hopkins, and others can all stand in for what I'm showing you. Likewise for the discovery interface. So the story I'd like to tell is one of change at the moment. We in the IIIF community are, or have been, focused on serving content out to users so that they can do stuff with it. As Sheila mentioned, we've got dozens of institutions.
They're now actually making material available this way. But it's all being pushed out so it can be consumed in a user interface; what we're not doing is getting data back from the users. We've seen some interesting talks at this particular CNI on crowdsourcing and what happens once the data gets created. I'd like to take us through a couple of use cases of pushing content out, that one-way street out, and then talk a little bit about what we're looking at in terms of bringing that newly developed material back into repositories to enhance the digital objects. I'm going to start with a fairly common use case in the IIIF world, and really in the scholarly world, which is comparison of digital objects across repositories. And I'm going to use Chaucer's Canterbury Tales to tell the story, and particularly the two earliest manuscripts that we know of: the Ellesmere Chaucer at the Huntington Library and the Hengwrt Chaucer at the National Library of Wales. If you wanted to look at the Ellesmere Chaucer, the Huntington Digital Library allows you to access that material. They've got wonderful metadata; you can navigate through the book. If you then as a scholar want to look at the Hengwrt Chaucer, you have to go over to the National Library of Wales, check out their site, and navigate through their particular set of tools. What IIIF has done, because both institutions support it, is make it possible for a single user interface to bring those images from the two repositories together. And you can see that a scholar, in this case using Mirador, can compare these two witnesses to Chaucer side by side and start to do things with them. This third-party interface doesn't have to be supported by any one institution. That scholar might do a close reading of a single word in the first line. In this case, April, spelled A-P-R-I-L-L in the Ellesmere Chaucer.
And then move over to the Hengwrt Chaucer, compare that same word, and see that it's actually maintaining the French orthography: so Avril is kept here. This is a fairly common example of what we've seen in terms of scholarly use of these materials out in the wild: close reading, focused details, in some cases creating annotations that would be used in a classroom or in a presentation at a scholarly conference. Essentially what IIIF has done, though, is just ease that navigation from one website to another. The institutions are still pushing the content out, but it's now consumable in a single interface. So we've eased navigation, but we haven't done a whole lot in terms of round-tripping materials from repository to scholar and scholar back to repository. A second use case would be transcription. As we probably all in this room know, handwritten materials present an interesting challenge: they are notoriously resistant to OCR and other machine processing, particularly when you get into 15th-century hands where letters are dropped, the script is very cursive, and things have been scratched through. This really requires a specialist to get in and read it. We do have tools for that, and we've seen them presented many times over the last two days. In this case we've got Mirador, and here we have a scholar moving through and transcribing, line by line, the object that they're working with, in this case even including the scratched-out "Lee," which is the second word there. But they could be using T-PEN, they could be using Scribe, they could be using Transkribus, they could be using FromThePage. It really doesn't matter; it's that interaction that's being captured here. And what that leads to, then, is bunches of regions of interest with text associated with them, living around the world.
With IIIF and Open Annotation, we can at least make the linkage to the underlying resource through linked data, so that we can keep the text associated with the image even though they might be living in different locations. So here the image that you're looking at is served from the BnF, and the transcription data is being served from Stanford. But it's still that push out: it's being brought together in an interface, but we're still pushing it out, not getting a whole lot back from our users. Another scholarly use case would be basic annotation. We have here an example of some training data that we produced for a project I'll talk about in just a minute, where we had students go through and capture large initials, rubrics, and a couple of other navigation features within a manuscript. This was done by hand: they would draw a box around a region of interest and tell us what it was. That doesn't scale very well when we're talking about tens of thousands to hundreds of thousands to millions of images across the corpus of medieval manuscripts. So I'd like to transition into a discussion of a larger-scale project that we've been involved in, using material from the Parker Library, Corpus Christi College, Cambridge. We had a group of scholars who asked a fairly basic question: if we send 70,000 images to a computer vision lab, what can they return for us in terms of these navigation features? You'll see on the right-hand side the things we were looking for: those enlarged initials, rubrics, and other features which the lab said they could detect automatically. So we sent that corpus to them and asked them to work on four different features: very large decorated initials like the one you see in the picture here; enlarged capitals; rubrics, or that red writing that you see; and intertextual space, also represented there by the gap in the middle of the page.
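To make the image-and-transcription linkage concrete, here is a minimal sketch of what such a line-level annotation might look like in the Open Annotation pattern that IIIF used at the time. The URIs and coordinates are illustrative, not the actual BnF or Stanford identifiers.

```python
# Build an Open Annotation that ties transcription text (held anywhere)
# to a region of a IIIF canvas (held anywhere else). All URIs are made up.

def make_transcription_annotation(text, canvas_uri, x, y, w, h):
    """Annotation that 'paints' a line of text onto a region of a canvas."""
    return {
        "@context": "http://iiif.io/api/presentation/2/context.json",
        "@type": "oa:Annotation",
        "motivation": "sc:painting",
        "resource": {
            "@type": "cnt:ContentAsText",
            "format": "text/plain",
            "chars": text,
        },
        # The target is the canvas plus a media-fragment region selector,
        # so the text stays tied to the exact line it transcribes.
        "on": f"{canvas_uri}#xywh={x},{y},{w},{h}",
    }

anno = make_transcription_annotation(
    "Whan that Aprill with his shoures soote",
    "https://example.org/iiif/manuscript-1/canvas/f1r",
    100, 220, 1400, 80,
)
print(anno["on"])  # the canvas URI with the region fragment attached
```

Because the link is just a URI plus a fragment, the annotation can live at one institution while the image stays at another, which is exactly the distributed arrangement described above.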
We had a bunch of students mark up examples, and then we had the problem of how to get all of the source images to this lab in Montreal. And this is where interoperability in IIIF comes in at scale. Here's what we sent them: about 70,000 of those URLs. They were kind enough to work with us and figure out how to use the IIIF APIs. Each one of those URLs represents an image in the Stanford Digital Repository that's accessible via that URL, and it can be manipulated. So here you see we sent them the 40% size. They experimented a little bit: we started off with 100% and then worked our way down until they had something that was useful to them. And here's what they sent back to us, about 60,000 of these, all expressing a region of interest via the IIIF API, which translates, when you stick it into a browser, to that image. So all of the data lived at Stanford. They processed it, sent it back, and all of the regions of interest are now just pulled from those images, still at Stanford. Because this was live, what we could do for the scholars was throw it into a webpage so that they could quickly browse through tens of thousands of annotations and see the quality that we were getting back over time. So we iterated on this a number of times, but their results coming back from the computer vision lab were pretty good, and we now have regions of interest across 70,000 pages that we can start doing something with. Before we started thinking about the repository uses, the project had to do some work. They produced their own navigation pages so that they could actually go through and curate this data and throw out the things that didn't work. They did a 10% sample; we certainly haven't covered all of the data yet. But they were able to go through and curate this subset of data where they were sure that these were actual examples that they wanted to work with.
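The URL exchange described here works because the IIIF Image API encodes region, size, rotation, quality, and format directly in the URL path, so "send us every page at 40% size" and "here is a detected region of interest" are both just string-building. A sketch, with an illustrative base URL and identifier rather than Stanford's real ones:

```python
# Construct IIIF Image API URLs of the form
#   {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}

def iiif_image_url(base, identifier, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# What we sent out: each page scaled to 40% of full size.
sent = iiif_image_url("https://example.edu/iiif", "page_0001",
                      size="pct:40")

# What came back: the same identifier with a region of interest (say, a
# detected initial) expressed as x,y,w,h pixels in the full-size image.
returned = iiif_image_url("https://example.edu/iiif", "page_0001",
                          region="450,1200,300,310")

print(sent)
print(returned)
```

Since the region URLs resolve against the originals, the lab never had to ship derivative image files back; a list of URLs was the whole deliverable.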
We experimented a little bit, excuse me, with other things we could do for the team to help them, including image search. Here you see we just moved to searching for Qs of that particular shape, semi-successfully so far. I think there's a lot of work to be done there, and certainly institutions like the BSB in Germany and others have done better with this. But this is the sort of thing you can do with IIIF data. We also pulled that data out as IIIF annotations again so that they could see the material in context. They can go page by page, see where each annotation is, and we actually credit them for the creation of those annotations, so that when we start to use this data later, the source of the data is available and we can say this came from the Global Currents project. One of the things we get asked frequently is: how do you find material that you can work with? Right now it's been word of mouth, people involved in the community reaching out to other institutions and finding out where the material is. In 2017, IIIF has really focused on discovery of these materials. This is a prototype example of a discovery interface that simply pulls in material from all of the participating institutions. You can do a search on the composer Guillaume de Machaut and find eight manuscripts so far where that name is mentioned in the descriptive metadata. Go to that object (apologies for the slow loading, I had hotel wifi as I was doing this), go in and take a look at it with an embedded Mirador viewer, make sure it's the sort of object that I want to be working with, and then I can do other things with it. If we grab the IIIF drag-and-drop badge and pull it over to the Mirador viewer, that will open it up in an environment where we can actually work with the material in a little bit more detail. And again, we can take a look at a specific page. We can start making transcriptions on that page.
And if we were serious about this, we would say something other than: look at that big rabbit in the background. But in this case, that's all we're gonna say. Big rabbit. It's a really big rabbit. So that's all the one-way stuff: we can push it out, people can do things with it. What happens next? How do we get to the two-way street? Aside from pushing material out of repositories, how do we bring it back in? IIIF again gives us an opportunity to do this, and we've been experimenting with it in the manuscript space, because it's a fairly constrained space and we know a number of the scholars working there. So we can push it out, and we can use those same IIIF APIs and Open Annotation to bring that material back into the repository and then start building discovery interfaces on top of it. Moving back to that discovery interface we saw just a moment ago: if I wanted to find all of the annotations, or a subset of them, produced by that Global Currents project, I now have those annotations in our repository. We can take a look to see how many are available in this demo instance, and then we can actually navigate through search results to go in and see those annotations in context on the page. So we've moved away from just navigating through a book to actually being able to discover across a fairly large corpus of annotations that are being produced. Likewise, if we're interested in full-text search of an unreadable manuscript, we can take those transcriptions that we saw earlier, produced by a project that was run out of the University of Virginia, and start navigating those as well.
So if we are interested in the city of Paris, we can go in, see all of the annotations on that page, and then, with a little bit more searching, find where on the page Paris exists and zoom into those regions to actually find the word. That would be the last word in that line, which is indeed Paris, even though it doesn't really look like it from afar. We can do that, but also, if you happen to find a manuscript in this discovery interface, you might want to know if anybody has already transcribed or annotated it. By bringing those annotations back, curating them, and then exposing them back out of the repository, we can tell the user in the interface if there are transcriptions on a given page and let them go in and explore those a little bit further. So again, moving between views, all of this data is coming from a single repository now, and we can present a unified search and discovery interface to our users. That's what we're looking at in the year ahead, and I think our next presentation will probably bring us full circle with tools and repository uses in this space. I wanted to close by pointing out that none of this happens in a vacuum. The Mirador project: you can explore more there. Drew Winget and Rashmi Singhal at Stanford and Harvard have been the primary developers, but it's an open-source community and they invite developer participation. The discovery interface you saw was supported by the Mellon Foundation as a prototype. It was developed by Anusha Ranganathan at Oxford; she's now working independently as a developer. You can explore more about IIIF at iiif.io. And the scholars who produced the data we were looking at are listed along the right-hand side: the Global Currents project, which was a Digging into Data grant, and Machaut and the Book, which was supported by the Mellon Foundation. So thank you, and I'll turn over to the Toronto team. Thanks, Ben.
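The "where on the page is Paris?" search can be sketched as a filter over line-level transcription annotations: match the query against the text, then recover each hit's region fragment so a viewer can zoom to it. The annotation shape follows the Open Annotation pattern used throughout this talk; the data below is invented for illustration.

```python
# Search transcription annotations for a word and return the canvas and
# pixel region of each hit, ready for a viewer to zoom into.

def find_on_page(annotations, query):
    hits = []
    for a in annotations:
        if query.lower() in a["resource"]["chars"].lower():
            # Split "canvas-uri#xywh=x,y,w,h" into its two halves.
            canvas, _, frag = a["on"].partition("#xywh=")
            x, y, w, h = (int(v) for v in frag.split(","))
            hits.append({"canvas": canvas, "region": (x, y, w, h)})
    return hits

annotations = [
    {"resource": {"chars": "la ville de Paris"},
     "on": "https://example.org/canvas/f12v#xywh=210,880,940,60"},
    {"resource": {"chars": "en l'an de grace"},
     "on": "https://example.org/canvas/f12v#xywh=210,950,900,60"},
]

hits = find_on_page(annotations, "paris")
print(hits)  # one hit, carrying the region the viewer should zoom to
```

Because each annotation carries its own region, full-text search and "zoom to the word" fall out of the same data structure with no extra indexing.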
Do you want me to do this? Yeah, sure. Great. I'll take your presentation if you're good. And away we go. Thank you so much, Ben and Sheila, and thank you very much to all of you for coming here and listening to us, especially after the very exciting night that we all had. I'd like to introduce Sian Meikle, Director of Information Technology Services at the University of Toronto Libraries. I'm Alexandra Bolintineanu, an Assistant Professor of Digital Medieval Studies at the University of Toronto. The project we're here to talk about is Digital Tools for Manuscript Studies, which is a collaboration between the University of Toronto Libraries (the IT team specifically, led by Sian Meikle) and the Old Books New Science Lab, led by Professor Alexandra Gillespie. Alexandra Gillespie sends her very fervent regrets, having been detained by unavoidable obligations as Chair of the English Department. Ours is a collaboration between medievalists and librarians, generously supported by an Andrew W. Mellon Foundation grant. The medievalists focus on reconstructing the book collection of a 16th-century historian named John Stow. The library focuses on the tools that enable this research within IIIF. So I'm going to talk a bit about the medieval and user-community side of our project and then turn over the microphone, the invisible microphone, to Sian Meikle, Director of ITS, who will speak to the technical side of the project. So this is John Stow. He was a tailor by day, superhero by night. No, he was a 16th-century merchant tailor by trade, but by interest he was an antiquarian, a book collector, and a historian of the 16th century. He wrote the Survey of London, his most famous work, a detailed account of the city in the time of Queen Elizabeth, and he was the most prolific and best-selling historian of the Tudor age. But what our project focuses on is not the books that Stow wrote; it's the books that he collected.
Stow's library was full of the remnants of the English Middle Ages. He collected manuscripts of Skelton, manuscripts of Chaucer, Lydgate's poetry. He collected historical materials. He went around as monasteries were dissolved and bought books left over from the dissolution of these monastic libraries, and this nearly got him into extraordinary hot water. He was investigated not once but twice by the authorities. They suspected he had heretical books, and by heretical they meant Catholic; lucky for him, they didn't find anything incriminating. Alexandra Gillespie looks at Stow's library to trace how he used these pieces of the past to create a more coherent sense of the English past and of English identity. As she notes, Stow used his books to make editions of Skelton, of Chaucer, of Lydgate. He used them to write his best-selling historical chronicles. He lent books to Matthew Parker, the Archbishop of Canterbury; to the alchemist John Dee; to the dramatist Ben Jonson; to the historians Raphael Holinshed and William Camden. And through this group of intellectuals, he was able to influence writers such as William Shakespeare and Edmund Spenser. So in the stormy wake of the monasteries' dissolution, Stow gathered up the scattered fragments of the medieval English past to recover a sense of English identity. At his death, his own library was dissolved and scattered. His books are now held at more than 10 libraries in England. Sometimes they're digitized, sometimes they're catalogued in online catalogs, and sometimes they are neither. And this is where we come in. Our intellectual project is very much like that of Stow.
Our idea is to bring together the scattered leaves of Stow's own library, in one digital space within the framework of IIIF, so we can trace the contours of Stow's intellectual interests, the way you trace, say, the shape of a person just by looking at the coat they've been wearing for a long time; and also to trace the communities of practice formed by the Tudor intellectuals who read these books and wrote in their margins. In short, we want to collect Stow's books in one virtual space; we want to annotate them: tag, annotate, transcribe; we want to search and organize both the books and our annotations on them; and we want to be able to exhibit and narrate our discoveries. What kinds of discoveries does this kind of work enable? I'll give you one example, actually two examples. When we bring all of the books with Stow's handwriting into one place, it allows us to identify Stow's scribal hand and his library more rigorously. So Alexandra Gillespie and Jessica Henderson were able to conclusively identify two more books as belonging to Stow, and they also noticed that there may be at least one, if not two or more, mysterious, closely related hands working alongside Stow in these manuscripts. And the way they're working with Stow is really weird: it's clearly a different hand, but sometimes they continue the same annotation. We're not sure what that means; we're looking into it further. So to this end, we work with the International Image Interoperability Framework, the emerging international standard that enables consistent digital image delivery across multiple digital libraries. IIIF is currently supported by R1 universities such as Stanford, Yale, and Harvard, by major archives like the British Library and the Vatican, by a host of national libraries, and by non-profits such as the Internet Archive and Artstor.
To hold our collection, we are currently using Omeka, the Roy Rosenzweig Center for History and New Media's content management system. How many of you have had a chance to work with Omeka? Fantastic, this is what I was hoping. Omeka holds collections of digital items and also enables scholarly narratives and exhibits centered around these digital collections. The main reason we love Omeka is that it is already quite well known, really easy to use, and well documented. It has wide traction, not just in humanities scholarship but also in pedagogy: around the world, many projects centered on digital archives have invited undergraduate students into the data curation process and exhibited their work through Omeka. So this is Omeka, this is beautiful IIIF, and we marry them together. I want to look at this marriage as technology in practice, a concept defined by Wanda Orlikowski. She suggests that technology is not just machinery, digital or analog, but, on the one hand, the organizational culture around it, and, on the other hand, the needs and practices of its user community. So it's not just the technology; it's how people use it, how people adapt it depending on their organizational context. It is, quote, what people actually do with the technological artifact in their recurrent, situated practices. In a digital humanities context, technology in practice is software and data used by scholarly communities within institutional and disciplinary concerns and constraints. So, for example, in our case it would be IIIF as it is used by people who study manuscripts, as Ben and Sheila have demonstrated. How does IIIF function as technology in practice? It provides a set of robust protocols for navigating the landscape of digital archives and for collecting data around digital archives.
It maintains scholars' access to digital archives without moving those archives out of the repositories where they reside, so scholars have access to these high-quality, authentic sets of images. IIIF has an active user community that collaboratively configures both the specifications and the open-source tools built to implement them. And as Ben has noted, one of the concerns coming up now is around making the IIIF-paved way between archive and scholar more navigable in both directions. In other words, IIIF is a great way for scholars to access data; what is a way for scholars to feed back into IIIF-framed repositories? Our work at Toronto seeks to address one aspect of this problem: while IIIF works to de-silo the digital archives, how can it also enable the work of smaller libraries and of less well-resourced individual manuscript scholars who don't have significant computing support or expertise? So our work focuses on individual scholars who work with digital images and for whom IIIF provides a useful, productive intellectual framework, but who have limited technical experience and little or no institutional support. Once again, the technology in this view is IIIF; the organizational constraint is limited support. Think, for example, of a library that can stand up an Omeka instance, but not a whole lot else. And the user community is one of scholars who are interested in the same things that IIIF enables: collecting, comparing, and annotating images of manuscripts, but also making the scholarly process more visible; taking the scholarly raw data, their curated collection of images and the annotations they make on those images, and then sharing this data, presenting it in a digital scholarly narrative.
We traced these interests through usability interviews with scholars, mostly Canadian, who work with medieval and early modern manuscripts. And here's a shout-out to Rachel Di Cresce, our project librarian, who masterminded these usability interviews. The group of scholars we looked at all work with images, but they do so in various ways. And I mention Canadian because Canadians are in an interesting bind: we work with manuscripts, but our stuff is mostly in English libraries or in U.S. libraries. So a lot of our work has to be digital, though of course it's a much wider predicament. So here are the folks we've spoken to. Obviously the names are fake; we've anonymized our interviewees. There's Pat. Pat temporarily lives in England. Pat goes to a major archive in person and works with the physical manuscripts. But when the archive closes, Pat goes back home and writes up their research by consulting the same manuscripts through IIIF-compliant, archive-grade digital surrogates. Sometimes, just by blowing up the image, you're able to see much better what you were looking for. And of course the digital images are there at midnight, whereas the manuscripts, well, they're there at midnight too; you just can't get at them. Then there's Nat. During their PhD thesis, Nat collected hundreds of DIY images that they took over the years in Turkish and Venetian archives. These archives are not digitized, so Nat's only access to this material is through their DIY images. And then there's Robin. Robin has a collection of manuscript images from a variety of repositories that they're storing in their Dropbox. In some repositories the manuscripts are digitized and IIIF-served; Robin took screenshots, I know. In other repositories the manuscripts are not digitized, and the only facsimile exists on microfilm.
So Robin wrote to the library, got a copy of the microfilm facsimile, had that microfilm digitized, and is now using those digital images. Robin organizes these images in nested folders, with notes and metadata in a spreadsheet, but Robin would really like a more robust platform for this material. Finally, there's Julian. Julian wants to teach historical archives in the digital age and get their students to think through and experiment with IIIF as a way into digital collections and the data standards that make them possible. But Julian doesn't have the resources to focus on software installation and troubleshooting; Julian has 200 essays to mark. So they need an easy, user-friendly environment for their pedagogy, so that their students can focus on archives and data curation more than on technical overhead. I'm giving you these generic names because we've anonymized our interviewees, and some of these people are composite persons, because their particular circumstances represent a wide range of projects we've been talking to. The manuscripts they want to work with are not always digitized, let alone IIIF-compliant. They want to build digital arguments with images, but they want really simple platforms that even a not very well-resourced library could stand up. Or the tools they want to work with are not supported by their institutions. So how can we configure tools in the IIIF space to support these kinds of projects under these kinds of constraints? How can we make IIIF easier for these particular circumstances without losing the rigor of IIIF? And I invite Sian Meikle, who will turn to that portion of the project. Thank you, Alexandra. In thinking about how we bring IIIF to individual research projects, I wanted to start by taking a really quick look at the IIIF software stack. To serve IIIF-compliant images, you store them together with the manifests that describe them in a repository.
You pop a IIIF-compliant image server on top of that repository, and you have a IIIF image client, embedded in a website somewhere, retrieving that content through the image server from the repository. So the stack includes three components of software, and it also requires the generation of those manifests that let us ship IIIF-compliant content around. It's a really well-structured and robust stack. It has enabled all kinds of hosting institutions to stop siloing their image repositories and stop duplicating code. But as we've discussed, it's not always that permeable to individual researchers for use in their own work environment, particularly where they're at institutions where the stack has not yet been fully implemented. And we've heard about some great initiatives of the IIIF community to work together in many ways to support the widespread adoption of the standard. Ben's talked about discovery initiatives and possibilities of data exchange. The scholar's desktop is really, as Alexandra has said, the area where we're choosing to focus. We're asking ourselves how we can assist researchers who want to use IIIF capabilities, particularly to develop and present their own research. How can we make it possible for the researcher to interact easily with that growing pool of IIIF-compliant resources? As Alexandra has said, Omeka was hugely attractive to us. On top of its established use in research and pedagogy, it has two other great virtues. It offers researchers a built-in repository where they can store their own content, including their DIY images, for presentation. And it's already locally implemented quite widely, and it's straightforward to implement. So we decided to adapt an available IIIF-compliant image viewer as an Omeka plugin, which would extend that IIIF capability to any researcher who can have that Omeka plugin installed in their local Omeka environment.
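The manifest-generation step in that stack can be sketched quite simply: a IIIF Presentation API 2.x manifest is structured JSON describing the object and its page sequence, which the image client reads to know what to fetch. A minimal sketch follows; the base URL, labels, and pixel dimensions are all illustrative assumptions.

```python
# Generate a minimal IIIF Presentation 2.x manifest for a paged object.
import json

def make_manifest(base, label, page_ids):
    canvases = []
    for pid in page_ids:
        canvases.append({
            "@id": f"{base}/canvas/{pid}",
            "@type": "sc:Canvas",
            "label": pid,
            "height": 4000, "width": 3000,  # assumed pixel dimensions
            "images": [{
                "@type": "oa:Annotation",
                "motivation": "sc:painting",
                "resource": {
                    # Image API URL for the full image of this page.
                    "@id": f"{base}/iiif/{pid}/full/full/0/default.jpg",
                    "@type": "dctypes:Image",
                },
                "on": f"{base}/canvas/{pid}",
            }],
        })
    return {
        "@context": "http://iiif.io/api/presentation/2/context.json",
        "@id": f"{base}/manifest.json",
        "@type": "sc:Manifest",
        "label": label,
        "sequences": [{"@type": "sc:Sequence", "canvases": canvases}],
    }

m = make_manifest("https://example.org/ms-1", "Sample manuscript",
                  ["f1r", "f1v"])
print(json.dumps(m)[:60])  # serialize for a client such as Mirador
```

Any IIIF viewer can consume this JSON directly, which is why generating a manifest is enough to make a set of images "shippable" through the whole stack.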
So that's a relatively lower bar, we're hoping. Our first task was to select which IIIF-compliant viewer to adapt for Omeka, and our researchers shared their workflows with us, as Alexandra's covered. And based on their needs, we chose Mirador from a field of some really strong candidates. So our developers got their feet wet with Mirador by doing a little documentation and interface work on Mirador itself, thanks to the kindness of the Mirador team. And we've now gone ahead to adapt Mirador as an Omeka plugin. This is an actual, real screenshot; it's not smoke and mirrors, it's Mirador, really. So here we're looking at a manuscript via the Mirador plugin embedded within Omeka. We've also developed a IIIF Items plugin, which handles the import and export of IIIF content from Omeka. And coming back to Alexandra's discussion of Stow, let's think about how a scholar might use those plugins and how that relates to research data management. Let's say, for example, that we've got a scholar who wishes to present a thesis about Tudor networks of intellectual practice. Using the IIIF Items plugin, they could bring in selected volumes from Stow's library, dispersed across multiple different collections. And using the Mirador plugin, they can present them in Omeka, and now they can build an Omeka exhibit that provides evidence for their thesis by showing samples of the hands that annotated the books owned by Stow. And because their Omeka exhibit is using IIIF content resident in its native repositories, the reader may examine that evidence as closely and completely as they choose. So as we think about research data management, a core research benefit of putting a IIIF-compliant viewer inside an exhibit builder is that it enables humanities researchers to directly share the evidence supporting their research.
Because the Mirador plugin pulls the IIIF content directly from its host repositories, the Omeka exhibit is, in essence, built with live data. And I should just pause for a second: I'm happy to say that all of these things will be open-sourced on GitHub. They're all supported by a generous grant from the Mellon Foundation, for which we are deeply grateful. So the IIIF Items plugin for Omeka handles bringing IIIF content into the Omeka database. And IIIF Items can also batch-import images, either from a repository or from the scholar's own desktop. This is particularly helpful for that scholar who has those DIY images, because once they're imported, our Omeka IIIF Items plugin creates a IIIF manifest for those images. This leads us to an important requirement for our researchers: they need to be able to shift data really easily between their web environment and their local desktops. And so our developers have also whipped up an Omeka plugin for CSV export from Omeka, which is compatible with the existing CSV import plugin and plays nicely with the Omeka metadata format. We built the plugin because, like so many of us, when our researchers are working with mounds of data, they like to use Excel to munge the stuff, and the plugin makes that possible for IIIF metadata. And thinking again about RDM, Omeka's use of standards-compliant metadata formats, I think, really increases the likelihood of longevity for the scholarly work that's built in the framework. So it's helpful if we can encourage the use of frameworks like Omeka to build resources. You could, in fact, think of the IIIF manifest as an essential packing list for machine-to-machine exchange of these compound, image-based objects. It was conceived to be used by machines, so it's understandable that the existence of that manifest has not always been foregrounded for the user of IIIF software.
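Generating a manifest for a folder of DIY images means wrapping each image in a canvas and collecting the canvases into a sequence. As a rough sketch of that "packing list" shape (simplified Presentation API 2.x structure; the function name, URLs, and labels are invented, and this is not the actual plugin code):

```python
import json

# Sketch: a minimal IIIF Presentation 2.x-style manifest for a set of
# local "DIY" images. Each image becomes a painting annotation on its
# own canvas; all URLs here are hypothetical.

def make_manifest(base_url, label, image_ids, width=1000, height=1400):
    canvases = []
    for i, img in enumerate(image_ids, start=1):
        canvas_id = f"{base_url}/canvas/{i}"
        canvases.append({
            "@type": "sc:Canvas",
            "@id": canvas_id,
            "label": f"Image {i}",
            "width": width,
            "height": height,
            "images": [{
                "@type": "oa:Annotation",
                "motivation": "sc:painting",
                "on": canvas_id,
                "resource": {"@type": "dctypes:Image",
                             "@id": f"{base_url}/res/{img}.jpg"},
            }],
        })
    return {
        "@context": "http://iiif.io/api/presentation/2/context.json",
        "@type": "sc:Manifest",
        "@id": f"{base_url}/manifest.json",
        "label": label,
        "sequences": [{"@type": "sc:Sequence", "canvases": canvases}],
    }

m = make_manifest("https://example.org/iiif/robin-ms",
                  "Robin's microfilm scans", ["f1r", "f1v"])
print(json.dumps(m, indent=2))
```

Once the manifest exists, the images behave like any other IIIF object: any compliant viewer can load them by the manifest URL.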
But we think that the manifest itself can be directly useful to researchers when they're shipping manuscripts between their own desktop workspace and a web space such as Omeka. And it's practical to support this workflow, because if a researcher can maintain their specialized tool sets and workflows in their own local environment, then their web environment is free to focus on more generalized and public tool sets. So that again helps rationalize what's available and where it's available. In addition to being able to grab and move complex objects around, the manifest allows researchers to perform basic manipulations on their IIIF objects. They can, for instance, change the image order. Sometimes that's handy. They can also communicate and exchange additional information about their objects via that manifest. Ben's presented some really exciting ideas for this exchange this morning. And Jeffrey Witt and Rafael Schwemmer have also presented and blogged on the possibility, proposing an extension of the IIIF API to manage this exchange of information. So I wanted to spend a couple of minutes looking at how we're working with Omeka. When we were planning for the adaptation of Mirador as an Omeka plugin, our library and medieval studies teams (and when I say we, I really mean Rachel, and her partner Laura on the medieval studies team) carried out some usability studies to understand how scholars create and use annotated images in their scholarly workflows. And essentially we determined that, to be really useful, IIIF objects, which are ordered collections of images and their annotations, needed to function both as fully fledged IIIF objects, to allow the reader to examine a manuscript and its annotations in context, and also as Omeka items, so that the researcher can present specific image snippets and selections of annotations when building a scholarly argument.
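Changing the image order, as mentioned above, amounts to rearranging the canvases inside the manifest's sequence. A toy illustration (the manifest below is a stripped-down stand-in, not a complete Presentation API document):

```python
# Sketch: reordering canvases in a simplified manifest structure.
# Only the fields needed for the example are included; the URLs are made up.
manifest = {
    "@id": "https://example.org/manifest.json",
    "sequences": [{
        "canvases": [
            {"@id": "https://example.org/canvas/f2r"},  # out of order
            {"@id": "https://example.org/canvas/f1r"},
        ]
    }],
}

def reorder_canvases(manifest, order):
    """Rearrange the canvases of the first sequence by index list."""
    canvases = manifest["sequences"][0]["canvases"]
    manifest["sequences"][0]["canvases"] = [canvases[i] for i in order]
    return manifest

reorder_canvases(manifest, [1, 0])  # swap the two leaves back into order
print([c["@id"] for c in manifest["sequences"][0]["canvases"]])
```

Because the canvases only reference images by URL, reordering the list never touches the images themselves; the manifest is the single place where page order lives.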
So this informed how we mapped the IIIF data structure to Omeka. Our overriding goal here was to maintain that IIIF functionality, and not to mess with Omeka while we were at it. So now I'm gonna show you the obligatory hard-to-read schema diagram of Omeka, which, if only you could see it, would demonstrate that we've in fact inserted IIIF into Omeka, mapping it to the native Omeka data schema. So let's just romp along a little bit now; I wanna show you a few quick screenshots. Here we are ingesting a manifest into Omeka, ingesting a collection via a manifest. It's really super simple: just plug in the URL for the manifest. And here we're seeing the manifest getting ingested, up to 100 images of this Chaucer manuscript so far; it's in progress. Now we're gonna start working with the manuscripts. We're viewing annotations inside Omeka, here in the Mirador viewer, on the first screen. And we're actually creating and editing annotations, and tagging them, in the second shot there. We've got some options for browsing lists of manuscripts. We're using the native Omeka browse-collection functionality to browse manuscripts at the top there. And at the bottom we're searching items within Omeka, searching by annotation tag, which would be super useful for pulling together all of the annotations that track a particular topic, like, say, a particular scribe's hand. And here I've got a super small prototype exhibit, which embeds an annotated manuscript in an Omeka exhibit using the Mirador viewer. And to quote Alex Gillespie, who couldn't be here today: you could imagine using Mirador in an exhibit to peel back the layers on a manuscript page. Unlike PowerPoint slideshows, these Mirador and Omeka exhibits make the scholarly evidence available to readers, so they can examine and evaluate it for themselves. So we're also planning some additional development work outside of Omeka to provide more support for manuscript studies.
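Searching by annotation tag is conceptually just filtering the annotation set. A minimal sketch of the idea (the annotation records and tag names below are invented for illustration, not the Omeka data model):

```python
# Sketch: pulling together all annotations that share a tag, e.g. to
# collect every note tracking one scribe's hand. Records are invented.
annotations = [
    {"text": "Hand of Scribe B, upper margin", "tags": ["scribe-b", "hand"]},
    {"text": "Marginal gloss in Latin",        "tags": ["gloss"]},
    {"text": "Correction in Scribe B's hand",  "tags": ["scribe-b"]},
]

def by_tag(annotations, tag):
    """Return the annotations carrying the given tag."""
    return [a for a in annotations if tag in a["tags"]]

print([a["text"] for a in by_tag(annotations, "scribe-b")])
```

Scaled up across a whole collection, the same filter lets a researcher assemble, say, every occurrence of one scribe's hand into a single working set.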
Here's a diagram that illustrates the components of our work. We've already talked about the Omeka pieces, up in the top left there, and how they connect to the scholar's desktop on the right. Now we're going on to develop an API layer, using the IIIF API specifications, with the goal of pulling and pushing information between Omeka, other content repositories, and the scholar's desktop. And in the new year, we're gonna be moving into 3D work. So Dot Porter at the University of Pennsylvania's Schoenberg Institute for Manuscript Studies developed a tool called VisColl. Have people heard about VisColl? Is that something that people are familiar with? If you want to visualize the 3D structure of a manuscript, VisColl is your tool. It's an XSLT tool that provides a 3D visualization of a manuscript from a collation formula, or other information about the manuscript's structure. And so, working closely with Dot, we're gonna take that XSLT tool and make it into a web application. I wanted to give you an example of why that's kind of interesting and important. Here we're looking at the structure of a particularly interesting manuscript of Chaucer's Canterbury Tales: MS 152, held by Christ Church, Oxford. And we see that the first five quires, four on the left and the top one on the right, have the same structure. But then quire number six, in the middle on the right there, is an odd little thing: it's only got two leaves. And in fact, this manuscript of the Canterbury Tales is the only one that contains the Plowman's Tale. It's an apocryphal tale, which, sure enough, starts on that inserted quire six. So now I wanna jump to a standard manuscript view, which shows the last page in quire five and the first page in quire six, where the Plowman's Tale starts. Alex tells me that if you look at the manuscript page images, you might notice that there's a difference in ink color between the left and the right pages.
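A collation formula compactly records how many leaves each quire contains, which is exactly the structural information that makes quire six stand out. VisColl's real input is an XML collation model processed by XSLT; purely as a toy illustration of the underlying idea, here is a parser for a simple formula convention like "1-5(8) 6(2)" (quires 1 through 5 of eight leaves, quire 6 of two):

```python
import re

def parse_collation(formula):
    """Parse a toy collation formula like '1-5(8) 6(2)' into a dict
    mapping quire number -> number of leaves.
    Illustrative only: VisColl's actual input is an XML collation model."""
    quires = {}
    for token in formula.split():
        m = re.fullmatch(r"(\d+)(?:-(\d+))?\((\d+)\)", token)
        if not m:
            raise ValueError(f"unrecognized token: {token}")
        start, end, leaves = m.group(1), m.group(2) or m.group(1), m.group(3)
        for q in range(int(start), int(end) + 1):
            quires[q] = int(leaves)
    return quires

structure = parse_collation("1-5(8) 6(2)")
print(structure)  # quire 6, with only two leaves, is the anomaly
```

From a structure like this, a visualization can draw each quire's nested bifolia, making an inserted two-leaf quire visually obvious at a glance.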
And if you examine the manuscript itself, you may feel what the digital copy can't yet convey, which is that there's an actual difference in the weight of the parchment between those two pages. But if you take a look at that 3D structure that I showed you, it helps make it clear that the Plowman's Tale was actually inserted later into that manuscript. So VisColl acts as an essential partner to Mirador and Omeka in this work. With VisColl, the scholar can assemble evidence about a manuscript's 3D structure, and with Omeka they can exhibit that evidence and explain the theories that they've built upon it. So we plan to develop VisColl first as a standalone IIIF-compliant web application, and next, if we're able, we're going to bring that too into Omeka. And a third goal for our new year will be to develop what we're dubbing "IIIF To Go," building on all of the wonderful documentation that's already been developed by the IIIF community and learning from all of the work to promote IIIF that's going on in that community. We see IIIF To Go not so much as software in a box, although it might contain some; it's more of a toolkit and a roadmap, a wayfinder for scholars and institutions who want to implement IIIF in their own workplace, so that they can bring together and share the resources that are meaningful to them. And with that thought, I'd like to give Alexandra the final word. Thank you so much. I want to return to our motto. The passage that opens our presentation appears in the final canto of Dante's Divine Comedy. Dante the Pilgrim has just been through hell, through purgatory, and through the heavens. And at the very end, Dante the Pilgrim is granted a vision of God, a vision of the heart and the intellect. He has this breathtaking vision of the whole of the universe's knowledge. At the end of Dante's book, God binds the whole universe into a book.
And we keep thinking of this magnificent, breathtaking vision as we go about our much humbler endeavor: to collect the scattered remnants of a past whole, to bind together John Stow's scattered library, a whole that is itself a collection of scattered books that John Stow bound together from an even more distant past. It's a really small collection. It used to be 64; now, thanks to this project, it's 65 manuscripts. But there's another perspective on this magnificent vision of the world bound together in a single volume, bound by love. Dante's universal vision comes to us through the eyes of one specific person. And he's not a generic figure. He's very much Dante Alighieri, the Pilgrim, with his own loves and his own enmities, hating a whole bunch of people in Florence and beyond. He's very much enmeshed in his own historical circumstances. Just like John Stow. And just like our own individual scholars, who come to IIIF, to the universal framework, enmeshed in their own particulars. And as we develop platforms for knowledge creation and dissemination, this is the perspective that our project seeks to inhabit. We want to meet scholars where they are, whether they're interested in research or teaching, working with DIY images or Czech microfilm or great IIIF-served archival images at the Bodleian, technically inept or technically adept. And where they are, we want to build an initial space, so that these scholars can bring together, in one framework, the scattered leaves of the micro-universes of their own domains of knowledge. Thank you very much.