Okay, so it looks like we're right at two o'clock, so let's go ahead and get started. I'm going to bring up my presentation. Can everyone still see my presentation with my intro slide? Awesome, thank you.

Okay, great. So hi, and welcome to this introduction to IIIF. I'm Meg O'Hearn, the community and events coordinator for the IIIF Consortium. I recently started working with the consortium in February, and I work alongside Josh, Pedro, and Glen Robson to help arrange various IIIF events like the one you're attending today and to coordinate and communicate with the community. I'm going to give a 30-minute intro to IIIF, but before I get started, just a little bit of housekeeping.

We're using Zoom in webinar mode to avoid any disruptions and to make it a little easier to deal with the large number of attendees we're thrilled to have here for IIIF Week. This means Zoom might work a little differently than what you're accustomed to: all attendees are muted and we've disabled the chat, and instead we're going to use the Q&A functionality. You should see a little icon that says Q&A at the very bottom of your Zoom window, and that's where you can ask questions after the presentation. I'm being assisted by Frederick Zartt, one of our IIIF ambassadors, who has kindly offered to help manage your questions. You might hear from him in the Q&A if you have questions about technical stuff, or if you can't hear or can't see a URL, that sort of thing. So thanks again to Frederick, much appreciated. So the questions go into Q&A.

The third note is something I mentioned as many of you were joining this session: we have a Slack channel set up for the event. If you go to the Bitly URL on this slide, that will take you to join the Slack group, and once you're in there you can search for the channel, which is called IIIF Week, and join the conversation. We'll be posting a number of things there. Great, I see that Frederick shared the link; that's really helpful, thank you. Just a reminder that we are recording the session today, and we're going to send the recordings out after IIIF Week. So if there's anything you miss or want to share with colleagues or friends, those will be available.

I'm just curious, before we get started and really get into the meat of the presentation today: could you use the raise-your-hand feature and let me know how many of you are totally new to IIIF events? So if this is your first event that you've attended... wow, okay, this is great. About half of you are new, that's wonderful. So welcome, we're so glad to have you here joining us, and I'm glad to be able to give you an introduction to IIIF.

So I think it's appropriate, then, that we're starting at a pretty high level, just answering the question: what is IIIF? At its most basic, it stands for the International Image Interoperability Framework. That's a bit of a mouthful, which is why we refer to it as IIIF. More broadly, it's a global, open, standardized model for delivering many types of image-based resources on the web, in many different formats, so that audiences can interact with them. And it's something that provides a lot of benefits to the institutions that use it.
And when it's implemented across many different institutions, it provides a lot of additional benefits across institutional boundaries, on top of the benefit to the institutions that are directly implementing it. It's also more than a standard: IIIF is really community-based. We're a global community of software developers, librarians, researchers, museum collections managers, creative agencies, a pretty diverse group that all work together to develop open APIs, implement them in software, and expose images, AV files, et cetera. So it's really a grassroots effort between many different institutions of many different types to solve their shared problems surrounding their digitized images.

This is a map that we put together showing a number of different IIIF implementations around the world. There are hundreds of implementations used in various ways, and I should note that these are just the ones that we know about; given that the APIs are open, really anyone can implement IIIF and they don't have to tell us. If you'd like to take a closer look at this map and see specifically which institutions are using IIIF, you can view it at the bit.ly link at the bottom of this slide. And actually, if you know of an implementation that's missing here, feel free to edit the map and add it. It's openly available for editing, and we really appreciate the community's involvement in updating documents such as this one.

This slide shows just a few leaders in IIIF adoption; it is not an exhaustive list in the least. We can't fit hundreds of institutions on a single slide, but you can see that it includes a number of different museums and galleries, universities, state and national libraries, and content aggregators: folks like the British Museum, Ghent University, the British Library, Europeana, the Internet Archive, et cetera. Again, not an exhaustive list, but it gives you an idea of the types of institutions that are implementing IIIF.

Seeing that list and that map might make you wonder why so many different institutions are coming together to join this joint effort. Stepping back quite a bit and looking at the big picture, it's really because images, digital images especially, are fundamental carriers of information across a number of different fields: in cultural heritage, in STEM disciplines, et cetera. They're important because they document the past and the present and preserve it for the future in digital form. They help us understand complex processes through visualization, they grab our attention, and they help us quickly understand very abstract concepts. They're also just ubiquitous: we interact with them in incredibly large volumes on the web, in both scholarly and other settings.

I should stop here and clarify that when I say images, I don't necessarily just mean things like photographs or paintings or the stuff that might immediately come to mind. I'm referring to a broader set of image-based resources: things like what you see on the slide here, scanned newspapers, books, AV materials, maps, manuscripts, sheet music, and of course photographs and paintings as well. And of course, we all know that institutions have done a lot of work to digitize and share image-based resources and put them on the web.
Many institutions have really robust collections available, but a lot of these effectively exist in silos. Meaning, as illustrated on this slide, users have to go from site to site: conduct a search, take a deep dive into that site to find all the stuff they need, come back up and go to the next silo, search that, grab what they need, move on to the next, et cetera. That can be challenging for users, not having an easy and consistent way to find the materials they need. And on the institutional level, it's also challenging just to develop and maintain this infrastructure, which requires considerable staff effort in addition to the money needed to keep it running.

Here's one example, an illustration of how this works, very similar to the last slide showing all these different silos. Let's look at this illustration and think of a particular example. One thing we often see today is that the different repositories and applications providing access to digitized medieval manuscripts each offer access through their own one-off applications, which include the tools for end users to work with the images. And there are five or six very similar repositories that exist. Although they exist in these very separate silos, they really don't have to: if you think about it from a technological standpoint, each of these sites is working in a very similar or even identical way. But despite this, they are in silos, and each institution is investing its own time and money into its silo. And this is really a microcosm of the wider world of access to image-based resources.

So that's the problem, but let's take a quick look at how IIIF solves this problem that's shared across institutions. I think one of the best ways to understand this is to look at how IIIF helps through different demonstrations. I'm going to go through a few slides and show you some recorded GIFs of live sites that show IIIF's different capabilities in the wild. So again, these are all live sites that we're going to be taking a look at. I'm just going to double-check the chat here. Okay, great. Just want to make sure you all can hear me.

Okay, so here is our first example. This is showing something that's actually a very common need, and that is the ability to make very large image files available for users to view as a whole image and then in zoomed detail. This is one of the really wonderful things that IIIF allows: zooming into the very, very fine detail, as you're seeing on the slide here, while delivering just enough of the image rather than loading it in its entirety. Think of the way Google Maps works: you look at a view of a city, and then you're able to zoom into a particular neighborhood, or a particular intersection, or the restaurant you're going to. That's how this is functioning.

And the example on this slide is actually a really interesting one. This is a Japanese tax map held in the collections of Stanford University. It's pretty old, from 1837, and it's a map that's meant to be read in the round by someone standing in the middle of it, which of course we would not do to very old archival materials. It's huge, because it's meant to be stood on: about 11 by 17 feet.
And in this recording, you can see that there's a person standing next to it. That's Wayne, he works in the library at Stanford. He's actually 6'4", just to give you an idea of how large this map is. The digital image of the map is a composite of around 150 individual images; it measures about 34,000 by 23,000 pixels, and the file size is 1.27 gigabytes. So it's just a massive image. Without IIIF, an end user might have to download a really large file to view it, which is challenging and takes a long time to load. But thanks to IIIF, there's this really smooth and easy viewing experience. In a technical sense, IIIF solves the problem of how to make this really huge file available to an end user. But from an end user's perspective, it also lets them almost have that experience of standing in the map, where you can stand back and view the broader image and then digitally lean over to view different areas. It's a really nice way to interact with this particular image.

I should add that the idea of enabling scholarship has really been baked into IIIF since its creation. One really common method of working with images is comparison, which IIIF enables and which you can see an example of on this slide. We know that comparison is a core analysis practice in art history, history, and a few other fields, so this is a very handy thing to have digitally. This particular example is a letter written by Alexander Hamilton, from the collection of the Library of Congress. The comparison is between a regular scan of the letter and a multispectral scan, which highlights the underlying text that had been scratched out or overwritten in the process of writing the letter. Comparing these two images really allows you to see how the text was changed as it was written. I think many of us can see how functionality like this could cross over into other disciplines too, like comparing different medical specimens, et cetera.

One of the benefits of IIIF is that comparison like what we just saw doesn't have to be limited to objects within a single collection. This is where we really start to be able to leave the silo, and you can work with these images across different sites. One example is on this slide: we have an image of a manuscript which at some point in the past had its illuminations cut out and sold separately from the body of the page, all around the world. In this particular case, the text and the images are owned by separate institutions, and the two examples we just saw are both owned by separate institutions in Paris, so they're just across town from each other. Geographically they're not all that far apart, but in terms of digital collections, they exist in different universes. Typically, you'd have to start at the top of one silo and dive down, find what you're looking for, come back up, go to the next one and dive down; it would be difficult to work with them together. But because the institutions that hold the text and the image from this particular manuscript are both IIIF compliant, and these images function in all the same ways, it's actually pretty easy to reunite the two images, as you're seeing in this particular demonstration from Biblissima.
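For readers curious about the mechanics behind that smooth viewing experience, here is a rough Python sketch of the kind of requests an IIIF deep-zoom viewer makes behind the scenes. The base URL and identifier are placeholders, not Stanford's real endpoint; the URL pattern follows the IIIF Image API.

```python
import requests

# Placeholder Image API endpoint standing in for the Stanford map (illustrative only).
BASE = "https://example.org/iiif/japanese-tax-map"

# 1. A viewer first asks for info.json to learn the image's dimensions and tiling options.
info = requests.get(f"{BASE}/info.json").json()
print(info["width"], info["height"])  # something on the order of 34000 x 23000 pixels

# 2. It then requests only what it needs: a small overview of the whole image...
overview_url = f"{BASE}/full/!1024,1024/0/default.jpg"

# 3. ...and, as the user zooms in, small tiles cropped from the full-resolution image.
#    The region parameter is x,y,width,height in pixels of the original.
tile_url = f"{BASE}/8192,4096,1024,1024/512,512/0/default.jpg"

for url in (overview_url, tile_url):
    data = requests.get(url).content
    print(url, len(data), "bytes")  # each response is a small JPEG, never the full 1.27 GB
```

This is why the experience feels like Google Maps: the client stitches together many small tiles on demand rather than downloading one enormous file.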
Reuniting them like this creates an environment where someone who's researching one of these manuscripts can pull the images into the text and view the manuscript the way it was made to be seen, despite the images and the text being held in different collections.

Next, searching within text in IIIF. IIIF has a number of capabilities that allow you to work with a translation or transcription, like OCR, of different texts and then search within that text. This particular example is from the Wellcome Library; it's a book on germ theory that I found. You can search the text of the book for Pasteur's name and find all the different instances where he is mentioned throughout the book, navigate to them, and it actually highlights them on the page. It's a really nice functionality.

IIIF also allows annotation, another core component of IIIF, and it can be used in many different ways. I really like this particular example; it's from an edX course at Harvard University. Here we have a really high-resolution image of a cell that students can zoom in on to see all the tiny little details and bits of a cell. I think that's the technical term for them, bits. Each of these different parts is annotated with the name of the bit you're looking at and some details about it. So it's a nice experience for students to be able to see every part in its original context and to scale as they move around the image and start to learn all the different cell components.

IIIF annotations actually unlock all sorts of other capabilities as well, beyond annotating a single image like we saw there and zooming in on it. This particular example is from the National Library of Wales. They did a really wonderful crowdsourcing project where they asked the local community to identify the people in photographs from their collections, add their names, associate them with different geographies using geotags, et cetera. And in another example from the National Library of Wales, IIIF annotations are being used to connect to Wikidata to show standardized information about what is depicted in an image. The case I have up on the slide is a location called Constitution Hill, in Aberystwyth in Wales. You can click the annotation that's happening in the GIF right now, go through to the Wikidata item for this particular location, see it on a map, see the coordinates, zoom out, see where it is within Wales, et cetera.

And the usefulness of IIIF annotations doesn't end there. On this slide, I have an image of a tool from North Carolina State University that is designed to guide you through the different IIIF annotations on an image. We've pulled in that image from the National Library of Wales, and because everything is standardized, you can view all of its annotations in this tool from NCSU, which is meant to be a guided annotation viewer: it leads you through all the annotations on an image and shows you what they are. And similarly, here's another example of guiding through annotations, from a company called CogApp. They do a lot of work with different museums in the IIIF community, and this particular tool is called Storiiies, stories with three i's, of course.
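As a concrete (and purely illustrative) picture of what those annotations can look like under the hood, here is a minimal sketch in the W3C Web Annotation style that IIIF uses, with a human-readable label plus a link out to structured data. The canvas URL, region, and Wikidata ID are all hypothetical placeholders, not the National Library of Wales' actual data.

```python
import json

# A minimal, illustrative IIIF-style annotation (W3C Web Annotation model).
# Every identifier below is a placeholder.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "https://example.org/annotations/constitution-hill-1",
    "type": "Annotation",
    "motivation": "tagging",
    "body": [
        # A human-readable label...
        {"type": "TextualBody", "value": "Constitution Hill, Aberystwyth",
         "format": "text/plain"},
        # ...and a link to a structured-data record (hypothetical Wikidata ID).
        {"id": "https://www.wikidata.org/entity/Q12345", "type": "Dataset"},
    ],
    # The target pins the annotation to a rectangular region of a specific canvas.
    "target": "https://example.org/iiif/photo/canvas/1#xywh=1400,300,900,700",
}

print(json.dumps(annotation, indent=2))
```

Because the annotation is just standardized JSON pointing at a region of a canvas, any compliant tool, from the Harvard cell-biology course to the NCSU guided viewer, can display it, which is what makes the cross-tool reuse in these demos possible.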
Storiiies provides a similar experience to the North Carolina State University annotation viewer that we just took a look at. It guides a viewer through the different annotations on an image, but it also gives them the freedom to zoom and pan around from there, so they can really see the annotated area in relation to others. It gives them that freedom to explore and then get back on the guided annotation track, which is a really nice feature. And in another guided annotation option, this is a similar tool from a company called Digirati, called Canvas Panel. This particular image is of an ocean liner from the V&A's collection, and the guided annotation tour lets you go through all the different sections of the ship and see details about them.

It's interesting, because these tools have often been used by museums to create supplementary online resources in support of their exhibitions. But IIIF can also be used in galleries to enrich an exhibition and provide a level of interactivity for the audience that might not otherwise be there. Here is an example of a couple of manuscripts, physical manuscripts, not digital ones, on display in a case. They're open to a single page, so the viewer is forced to look at that. But they've done this really nice thing, which is to put an iPad on the wall behind these manuscripts, so those who are curious can go through and look at all the different pages of the manuscript. They've included scholarly annotations added by the exhibition's curator, so visitors can really take a deeper look rather than just engaging with the single page the manuscript is open to for the exhibition.

And because all of these different examples are open and interoperable, this sort of thing can be done across different readers and tools. This particular example is from the Indigenous Digital Archive; you can check out their site at the URL in the bottom left there. This is a project of the Museum of Indian Arts and Culture, the New Mexico State Library Tribal Libraries Program, and the Indian Pueblo Cultural Center. They allow researchers to open the resources they've made available in two different viewers. If you look up under this viewer here, you can see "Try a Different View". So they can use a viewer like Mirador, which allows comparison and opening documents in a kind of book view, or they can use the Universal Viewer, which provides deep zoom. For some of the images in the collection, that's really nice for taking a look at the different details.

I should also add that we've been looking at a lot of examples that work with regular, plain old images, but IIIF goes beyond images and also supports AV materials. This example is a prototype of what can be done with version 3 of the Presentation API, which will be launched soon. Just to explain what's happening before I play this video: the viewer is pulling in a YouTube video and using the video's timestamps to align the sheet music, highlighting sections of the sheet music as they're being played. It's a really wonderful use case that goes beyond images. I hope you all can hear this; I'm going to play the video, and just let me know if you can. All right, we don't have to listen to the whole thing.
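To make that AV idea a little more concrete before moving on: here is a hand-rolled sketch, not the actual prototype, of how IIIF Presentation API 3.0 can model time the same way it models space. A canvas gets a duration, and an annotation can target a time range on it with a #t= fragment, much as image annotations use #xywh=. All URLs and timings below are made up.

```python
# Illustrative data structures only; identifiers and timings are placeholders.

# In Presentation 3.0, a Canvas can carry a duration, so it can represent
# a stretch of video or audio rather than (or in addition to) a still image.
canvas = {
    "id": "https://example.org/iiif/performance/canvas/1",
    "type": "Canvas",
    "duration": 212.0,  # length of the recording on this canvas, in seconds
}

# One annotation per measure of the score: the body is a region of the sheet-music
# image, and the target is the stretch of time in the video where it is played.
measure_12 = {
    "id": "https://example.org/iiif/performance/anno/measure-12",
    "type": "Annotation",
    "motivation": "highlighting",
    "body": {
        "id": "https://example.org/iiif/score-page-3/100,400,600,150/max/0/default.jpg",
        "type": "Image",
    },
    "target": f"{canvas['id']}#t=34.5,38.2",
}

# A viewer can watch the playback position and, whenever it falls inside an
# annotation's t= range, highlight the matching region of the score.
print(measure_12["target"])
```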
But yeah, that AV example is a really wonderful use case, and it really shows you the power of the different IIIF APIs to make these resources available and usable in different ways.

Speaking of the APIs: now that we know what IIIF can do, I thought it would be nice to take a high-level look at how it works through those APIs, which stands for application programming interfaces. Many people on the call are likely familiar with APIs, but for anyone who isn't, they function as a kind of agreement between two parts of a system, or between different systems. The agreement is just that data will consistently go in one way and come out another way. When that's agreed upon, it allows you to easily switch out different pieces of the system without too much effort: you can swap out the front end, the back end, or both. Doing that enables really easy image delivery and reduces cost in many different ways at the institutional level. That said, the full power of IIIF really comes into effect across institutions that have adopted it. This is where the normalization brought by the IIIF APIs allows materials to be worked with across repositories in many ways: searching across repositories, seeing other institutions' annotations, opening another institution's images in a separate app, that sort of thing.

There are two core APIs that make IIIF work. One of them is the Image API, and that delivers the pixels. The other is the Presentation API, and that delivers the presentation of the object. Let's take a look at exactly what that means. This is an illustration of the Image API, and what this slide is showing is that it's just a URL that delivers the whole image, or parts of the image, in different resolutions. You're seeing here how the Image API takes this large, full original image and produces a detail in different sizes, flipped, rotated, in grayscale, in different file formats, all from that original image. And this slide just shows you a breakdown of what's happening in that URL for the image.

The second API is the Presentation API that I mentioned. The Presentation API takes this information and combines it with structure and some very basic metadata to drive the viewing experience of the image. I should add that it works with all metadata standards. I think it's helpful to look at an illustration like the one we have up here to see how the two APIs come together. This is a view of an IIIF object, a sketchbook. You're looking at a single page, but you can see the other pages of the sketchbook in order underneath it. On the right, highlighted in blue, we have the content delivered by the Image API: that's the deep-zoomable image. On the left, in red, we have the Presentation API: that's displaying that small amount of metadata, and also the order of the sketchbook pages down at the bottom of the page. The thumbnail images themselves, though, are actually delivered by the Image API. If you think back to the previous slide showing how that image was broken down into smaller versions, that's what's going on here; it's a joint effort. And the use of these APIs can really go beyond views of someone's own image collections.
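Since the URL-anatomy slide goes by quickly, here is a small Python sketch of the same idea, assuming a placeholder server and identifiers: every Image API request is a URL built from a few ordered parameters, and a Presentation API manifest is just JSON you can fetch and walk.

```python
import requests

# IIIF Image API pattern: {server}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}
# The server and identifiers here are placeholders, not a real endpoint.
def image_url(identifier, region="full", size="max", rotation="0",
              quality="default", fmt="jpg", server="https://example.org/iiif"):
    """Build an Image API request URL from its ordered parameters."""
    return f"{server}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# All of these derive from the same source image on the server:
whole_page = image_url("sketchbook-p1")                                # full image
thumbnail  = image_url("sketchbook-p1", size="!200,200")               # fits inside 200x200
detail     = image_url("sketchbook-p1", region="1000,2000,800,600")    # cropped region
mirrored   = image_url("sketchbook-p1", rotation="!0", quality="gray") # flipped, grayscale

# Presentation API side: the manifest carries the label, basic metadata, and the
# ordered pages (canvases), each of which points back at Image API resources.
manifest = requests.get("https://example.org/iiif/sketchbook/manifest.json").json()
print(manifest["label"])
for canvas in manifest.get("items", []):   # Presentation 3.0 uses "items"; 2.x uses "sequences"
    print(canvas["id"], canvas.get("label"))
```

The thumbnails at the bottom of the sketchbook view are exactly the kind of thing the size parameter produces, which is why the two APIs end up working as a joint effort.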
One example of going beyond your own image views comes from the Art Institute of Chicago, who are reusing the Image API in their mobile app, wherever you see thumbnails or pieces of an image. In the first image on the left, you see they have different images representing the different tour options you can take, and the views of works on the map view of the tour, which is the image on the far right; those are driven by IIIF. They also have a wonderfully simple tool that uses the Image API, which their staff use to crop images for a variety of uses across their websites. So that's a handy added bonus.

I should also add that there are two other IIIF APIs, one for search and one for authentication. The Search API allows for searching within the text of an object like a book. If you think back to that book example we saw from the Wellcome Library, the book on germ theory where we searched for Pasteur's name, the Search API drives that experience. And the Authentication API allows materials to be restricted by audience. So, for example, if you had sensitive images in your collections that were only supposed to be viewed by certain people, or materials under copyright that only certain people should access, the Authentication API would make that possible.

So that's a lot, and it might leave you asking, well, what doesn't IIIF do? Because it sounds like it does everything. Right now, one thing that we're really working on is the discovery of materials. This is something that we've been giving considerable attention lately. We've started a discovery technical group who are looking into ways to make discoverability of IIIF materials possible. We've also started another group with more of a UX focus, called Discovery for Humans, whose work really complements the technical work that the discovery technical group is doing: they're exploring, very broadly, how researchers discover and work with IIIF materials across collections. I should also add that this is a group that's just starting to take off, so if you're interested in getting involved, feel free to reach out to us.

That said, there are a number of other ways you can get involved with IIIF, and you've actually completed the first step, so congratulations, you're doing an awesome job so far. Aside from attending this conference and other virtual events, you can take part in the IIIF community. There are a number of different community groups that you can join; if you go to our website and look at the list of community groups, there are things like IIIF for museums, for archives, for newspapers, et cetera. They have regular calls that you can join to share your work or learn from other folks' work as well. We also have a weekly or bi-weekly community call you can join to get periodic updates, see what other folks are working on, and get inspiration from their different IIIF implementations. You might also want to join IIIF-Discuss; this is a Google group that we have. You can get on the Google group and receive regular announcements about things we're up to, see the different conversations community members are having, and stay in touch that way. And if you're a Slack user, you can join the Slack workspace, which I mentioned at the beginning of this talk, and join the different channels for topics you're interested in.
And those actually go beyond the different community groups; we have channels like IIIF for education or IIIF for beginners. So there are a lot of different ways to dive in there. If you want to go a little deeper, some next steps might include implementing IIIF at your institution. That's something where the community is available to help by sharing experiences or answering questions via the channels I mentioned before, and we have some training and getting-started resources available on our site that you might find helpful. You can also choose to use IIIF-compatible software, and I'll show you a list on the next slide. And if the software you're using to manage your collections isn't IIIF compatible, you can reach out to your software provider and request that they work toward becoming IIIF compliant. Finally, the last item for consideration is joining the IIIF Consortium. You can join as either a full or associate member and help fund our work and help lead the community's conversations around the different developments in IIIF.

So this is a list, totally incomplete, of some IIIF-compatible software providers. I'll give you a second if you want to look at it, and I can also share it with you later if you can't jot everything down in the few seconds we'll have it up. But just to give you an idea of who's working with IIIF; it's nice to see.

As I'm closing, and before we move into the Q&A session: I came to IIIF recently after working for an organization that had been implementing IIIF, and in the time we were working on that, it was amazing to see how quickly IIIF grew. We're now at a point where there are hundreds of adopters and actually over a billion IIIF-compliant images available online. The future is looking really bright, and we're really hoping that you'll join the community and take the step forward into the next phase of IIIF. We really look forward to seeing where it's going.

So I'll pause there. I think we have a few minutes now for questions. I can pull up the Q&A to see what's going on here, and let me take a look at the chat. Okay, great. So are there any questions from anyone? It looks like they are starting to come in.

Okay, so the first question that we have is: for the AV demo, did the sheet music automatically align with the video, or did someone have to manually set those times? I'm actually not sure; Josh, are you familiar with how that works? Yeah, I guess I can take that, at the risk of being wrong. My understanding is that it's connecting MusicXML, which does indeed have the parsed timestamps, I think for the beginning of each measure, and that's incorporated into the IIIF manifest. Then, using the Presentation 3.0 spec, it aligns the start of the measures to the timestamps on the video. I don't know precisely who did that; if I had to venture a guess, I would say it was probably a graduate student at McGill, but I don't want to say that for sure.

So the next question that I have is: regarding the Search API, how can the image text be recognized as human-readable text? Is this automatically done by IIIF? It's not something that's automatically done by IIIF; I think it's a separate effort with OCR, and the result is then searchable via IIIF. Are there any other questions? Okay, great.
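Picking up on that last answer: once OCR (or other transcription) text exists and is exposed through the IIIF Content Search API, a query is just the search service URL plus a q parameter. The endpoint below is a placeholder rather than the Wellcome Library's real service, and the response handling assumes the Search API 1.0 response shape.

```python
import requests

# Placeholder search service of the kind a manifest advertises via the Content Search API.
SEARCH_SERVICE = "https://example.org/iiif/germ-theory-book/search"

# A content search is just the service URL plus a q parameter.
results = requests.get(SEARCH_SERVICE, params={"q": "pasteur"}).json()

# In Search API 1.0 the response is an annotation list: each hit targets the canvas
# (page) and the xywh region where the term appears, which is how viewers highlight it.
for hit in results.get("resources", []):
    print(hit["on"])   # e.g. https://example.org/iiif/book/canvas/34#xywh=911,600,120,35
```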
Okay, so do we have any other questions coming in from folks? I have another question here: what if your institution has different kinds of images? The chat's moving. Let's say not all RAW; can you still make a repository? And how exactly is the metadata added to the images, can it be automatically imported from Excel, for example? So I'm not entirely sure; I could use some clarification on that question. Maybe I'll answer what I think it may be, if that's helpful. I guess what we've heard other people ask about is: can IIIF work with lots of different image formats, separate from RAW? The answer is yes. IIIF is kind of a layer on top of all these different formats, and that's usually handled by, say, repository software or a content management system or something else like that. That's really where the IIIF Image API comes in: it just handles the pixels, and it can work with any number of formats. So it's sort of a separate thing from the formats of the material you're storing. And as for the question about metadata, that's another thing that IIIF is very purposefully agnostic about. The specifications are meant to mesh with basically any metadata standard that's out there; IIIF doesn't specify any proprietary or specific metadata of its own. In that sense, it does need to work with some kind of online system, so Excel on its own is probably not going to work, but some other, fairly basic content management systems could probably supply the things needed by the Presentation API. I hope that answered the question. It looks like it did.

Okay, so the next question that we have is: can you clarify the distinction between IIIF as a standard or API definition and the software that implements it? So Josh, correct me if I'm wrong on this one, but I think the API uses the standard to... I guess I'm confused by this question as well, I'm sorry. I mean, I guess I'll take a crack at it. I would say we sometimes mix the vocabulary here, but if we were being ultra, ultra clear, IIIF is really a specification for a set of APIs. There are rules that are required, and that's the distinction: IIIF is the abstract structure and requirements, and the software that implements it can interpret that in any variety of ways that satisfies that same set of requirements.

Okay, and then I have another request to review the positioning and regional info in the URL, so let me go back through my slides here and get to that. Is this the slide that you wanted to see again, Kari? Okay, great. I can also post these slides in the IIIF Week Slack channel if you'd like to take a look at them after the session.

So another question is: my research is with audiovisual images; from what I understand, IIIF can help me with tracking and, in fact, watching those images from the beginning of a given century. I think I'm not sure that's something IIIF could directly help with. It could deliver those particular images from the time period that you're looking for; however, it's not yet something that works to search across different collections.
So while you can pull those images into different resources once you've found them, and work with all the different images from the time period you're working in, IIIF itself is not something that can help you find all of those particular images.

Okay, so I believe we just have a minute left before this session is scheduled to end. If anyone has maybe one more question, we have some time for it. If not, let me move to the slide that we have for the end of the presentation. If any of you would like to stay in touch, the URLs that I have up on the slide are a great way to see a list of the different community groups if you'd like to get involved, say with the museum group or newspapers. The address for IIIF-Discuss is iiif-discuss@googlegroups.com; that's where you can see the community conversations that are going on and take part in them. And of course, the newsletter is just one way to stay in touch with new implementations and announcements from us.

So if there aren't any more questions, it looks like we are at the end of the session today. Thank you all so much for attending, and I hope we see you on the Slack channel and at sessions later in the week. Great, so we're going to go ahead and end the session for the day. Thank you all so much for attending; it was great to see such a good turnout.