Hi there, my name is Josh Hadro and I'm the managing director of the IIIF Consortium, and I'm really grateful for this opportunity to give you a short project briefing. I'll be talking about the use of IIIF beyond the standard deep-zoomable image presentation that most folks might be familiar with. In particular, I'll be covering two related aspects of community work that's happening. One focuses on the 3.0 version of the IIIF specifications, just published in June, with an emphasis on the use of IIIF for time-based media, so audio and moving-image materials. Then I'll shift gears a little bit to talk about discovery work: the idea of how people are able to work with these materials in an aggregated context or with an aggregator institution, and some of the work happening in a group called Discovery for Humans, which is looking at the user experience of how people encounter and interact with IIIF materials.

But I think it's worth starting at the beginning. If you're here, you're probably familiar, at least in concept, with what IIIF is, but just to be explicit: it's an acronym, and it stands for the International Image Interoperability Framework. It's a set of open standards for delivering high-quality, interoperable, and attributed digital objects, mostly images and audiovisual files, and doing that online in a way that's a good experience for end users. And of course there are other benefits that accrue from using open standards like these, especially now with a really growing body of institutions and cultural heritage centers not just using the IIIF standards to publish huge amounts of material, but also increasingly working to add new software components into the ecosystem. So it's a continually growing and vibrant community that surrounds the IIIF standards.

This is a quick view into some of the places that we know of that have implemented IIIF, though by no means all of the places that are making use of it. The red dots are just the implementations that we know about, and the blue dots are members of the IIIF Consortium, which helps fund the work of making this a sustainable project: building buy-in, putting on conferences, and doing all the education work to make those open standards really thrive in the global web community.

So the first piece we'll talk about here is the Presentation API, the 3.0 version, which was just published in June of 2020. You can see listed here some of the major components that came with the update to 3.0 of the Presentation API. What I want to focus on, and really the feather in the cap of this new release, is the idea that it adds a duration dimension to what we call the Canvas. For a long time, since the beginning, we've had this idea of a Canvas with X and Y dimensions, into which you can put images and other materials to present them to end users. Now, with the addition of a time-based dimension, we can juxtapose images as well as time-based media on the same Canvas. And in doing that with the same standards, you accrue the same benefits of being able to do things like annotation and other supplemental material, all using the same set of tools and the same basic principles that the IIIF community has been working on for a number of years. I think maybe the best way to demonstrate this is just to go through a couple of examples of how people are already making use of it.
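To make the Canvas idea concrete before the examples, here is a minimal sketch, with invented placeholder URLs rather than anything from the talk itself, of a Presentation 3.0 Canvas that carries the new duration property and has both a video and a still image painted onto it:

```python
import json

# A minimal sketch of a IIIF Presentation 3.0 Canvas with the new "duration"
# property. All identifiers and media URLs below are placeholders.
canvas = {
    "id": "https://example.org/iiif/canvas/1",
    "type": "Canvas",
    "width": 1920,
    "height": 1080,
    "duration": 120.0,  # seconds -- the time-based dimension added in 3.0
    "items": [{
        "id": "https://example.org/iiif/canvas/1/page/1",
        "type": "AnnotationPage",
        "items": [
            {   # moving-image content, painted onto the whole Canvas
                "id": "https://example.org/iiif/anno/video",
                "type": "Annotation",
                "motivation": "painting",
                "body": {
                    "id": "https://example.org/media/clip.mp4",
                    "type": "Video",
                    "format": "video/mp4",
                    "width": 1920,
                    "height": 1080,
                    "duration": 120.0
                },
                "target": "https://example.org/iiif/canvas/1"
            },
            {   # a still image juxtaposed on a spatial region of the same Canvas
                "id": "https://example.org/iiif/anno/image",
                "type": "Annotation",
                "motivation": "painting",
                "body": {
                    "id": "https://example.org/media/score.jpg",
                    "type": "Image",
                    "format": "image/jpeg"
                },
                "target": "https://example.org/iiif/canvas/1#xywh=0,0,640,480"
            }
        ]
    }]
}

print(json.dumps(canvas, indent=2))
```

Because the video and the image are both just painting annotations on the same Canvas, tooling that already understands Canvases and annotations applies to both.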
This first example comes from Europeana, which is doing a really stellar job of aggregating materials from all of its member states and institutions, in this case using a custom-developed IIIF media player to present the elements they have aggregated on that platform. This example shows basic playback, but they've built some really interesting annotation tools into the Europeana media player, which then unlock things like captioning and other scholarly annotations that might be added as an interaction layer on top of the media. Just to give you a sense of the basic interaction, here's the most simple case from Europeana.

This next example takes that idea one step further and shows a little bit of what's possible. It comes from McGill University and demonstrates the ability to juxtapose streaming video assets, in this case video coming from YouTube, with annotated musical notation over there on the right. One other interesting thing is that it's using MusicXML, and you can actually navigate to different timestamps in the video by clicking on the musical notation, or by using the navigation mechanism up at the top. So I'll play this, and you'll be able to hear and see how IIIF 3.0 is able to bring all these components together.

This next example goes even one step further. This is a tool called the Timeliner, developed by Indiana University in conjunction with a digital agency called Digirati. It's really geared toward a classroom context, where an instructor might take a piece of audio and use this tool to, for example, describe things like motifs and other recurring elements, breaking the piece of music up into sections and tying each one to a visual indicator. The interface is using IIIF media components, and that visual indicator helps guide students and other learners in education contexts to see how those different pieces recur, and they can manipulate the material and add annotations of their own if they like. I'll play this example so you can get a sense of that.

There's other great work happening out there in terms of putting even more audio material into the world and building additional tools to work with those materials. Lots of institutions, like the British Library and others, are working on publishing many, many thousands of audio and moving-image materials in IIIF formats. And this example I wanted to cite is just one of many current efforts: a big Mellon grant of $450,000 going to Tanya Clement and her crew at UT Austin, working with folks like Aviary and AVP and the Library of Congress and others on workflow tools and other components that allow for scholarly and critical annotation of audio materials, in particular things like oral histories, where you can tie commentary components to particular points and elements within oral history recordings.
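The common thread in these tools is annotation anchored to a point or span of time on the Canvas. As a hedged illustration, with the identifier and the text invented for the example, a commentary annotation pinned to seconds 30 through 45 of the Canvas sketched earlier might look like this:

```python
import json

# Illustrative sketch: a scholarly/commentary annotation targeting a time
# range of a Canvas via a W3C media fragment (#t=start,end).
# The identifier and body text are invented placeholders.
annotation = {
    "id": "https://example.org/iiif/anno/motif-1",
    "type": "Annotation",
    "motivation": "commenting",
    "body": {
        "type": "TextualBody",
        "value": "The opening motif returns here in the strings.",
        "format": "text/plain"
    },
    # the same #t= mechanism underlies captions, section markers, and
    # timestamp navigation like the McGill and Timeliner examples
    "target": "https://example.org/iiif/canvas/1#t=30,45"
}

print(json.dumps(annotation, indent=2))
```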
And the last thing I'll mention on this front is the notion of the IIIF Cookbook. This is really designed to be a set of best practices, so that people don't have to reinvent the wheel. We've been building it up incrementally over the past several months, and the idea is that everything from the simplest use case of presenting a single image, all the way to more complex modeling like opera sequences in audio and moving-image presentations or other complicated mechanisms, can be modeled. That way people don't have to reinvent or struggle with the basics; they can take what we've given them in these cookbook recipes, build on that, and spend their time doing more innovative and more useful things with new and interesting presentations of IIIF components.

So, switching gears just a bit for the last few minutes here, I want to describe the work happening on the discovery front. This is kind of the next frontier for IIIF. There's now a huge ecosystem, as I mentioned earlier, of institutions and a body of content that is IIIF-enabled, but how do we then help users find and make use of those IIIF materials if they aren't already aware of them? That's really the animating question for a lot of this work. We have something called a Technical Specification Group, a TSG, and there's a Discovery TSG working on exactly this question, and on specifications that can make solving these problems a little bit easier. They're looking at things like crawling and harvesting, import to viewers, and change notifications coming down the road.

Crawling and harvesting is actually pretty straightforward; it's a relatively simple specification, just a standardized way of making changes, updates, and new publications of manifests available to something like an aggregator that might be presenting materials around a particular domain, a particular region, or a set of institutions. This is in a pretty stable place at version 0.9 and is just seeking some last comments and implementations before the final push to 1.0. We've already seen implementations from the Bodleian and from OCLC, which is doing some interesting work aggregating all the CONTENTdm platforms, for example. I will also mention that the IIIF Consortium itself will be hosting a centralized registry of the different activity streams that we're aware of.

The other work happening in the TSG is the idea of being able to port the viewing state, which we call the Content State API. This is currently at an early version, 0.2, and it's working on two fronts. On one hand is the ability to consistently index not just the digital object itself but to dig deeply and deep-link into the content: a particular annotation, region, or subset within a complex digital object. Related to that is the ability to port that view and that interaction across different viewers and across different workspaces. To illustrate this, there are a couple of good links here, and thanks to Richard Higgins from Durham University for these examples. The common interaction, and the most basic thing we can do, is link to something like an aggregate digital object as a whole.
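As a rough sketch of the idea, assuming roughly the shape of the draft specification (details may well change on the way to 1.0, and every identifier here is a placeholder): a content state is essentially an annotation whose target pins down a region of a Canvas within a Manifest, serialized and base64url-encoded so it can travel inside a URL that a compatible viewer opens directly at that spot.

```python
import base64
import json

# Hedged sketch of a Content State payload: an annotation whose target is a
# spatial region of one Canvas, situated within its parent Manifest.
# All identifiers are placeholders, not real Durham URLs.
content_state = {
    "id": "https://example.org/states/1",
    "type": "Annotation",
    "motivation": ["contentState"],
    "target": {
        "id": "https://example.org/iiif/canvas/4#xywh=1200,800,400,600",
        "type": "Canvas",
        "partOf": [{
            "id": "https://example.org/iiif/manifest/expedition",
            "type": "Manifest"
        }]
    }
}

# Encode for transport in a URL parameter (base64url, padding stripped),
# e.g. ?iiif-content=<encoded>
encoded = base64.urlsafe_b64encode(
    json.dumps(content_state).encode("utf-8")
).rstrip(b"=").decode("ascii")

print(f"https://example.org/viewer?iiif-content={encoded}")
```

A viewer receiving that parameter would decode it and open the manifest already focused on the named region, which is exactly the fix for the "find the mountain climber" problem described next.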
So this one, from Durham University, is just images of a mountain-climbing expedition, but Richard calls this exercise "find the mountain climber." When given that link, it's a relatively difficult task, and you'd have to use some other descriptive material to do it. But using the content state specification that we have in draft, we can actually embed in the link itself a much more precise and specific way of describing what we're getting at. So this links directly to that component, describing this gentleman here on the Everest glacier in 1924. And as I zoom out a bit, that reveals the same element of the overall complex digital object, but you can see the utility of being able to link directly to that more specific component becoming really valuable in something like a set of search results, for example.

The last piece I'll mention, in conjunction with that discovery Technical Specification Group: we also have a community group that we call Discovery for Humans. This group is really focused on the user experience of how people are finding materials and how they're making use of them, looking at that across different domains, genres, and institutions. The current work they're doing, over there on the right, is compiling some of the most common interaction patterns and looking at how different institutions make those things available. All of that is moving toward a larger examination of common interaction patterns and the ways institutions are making these materials discoverable in a large-scale sense, with an eye toward publishing best practices and guidelines for how those things might be done more consistently across the industry and across different institutions.

So I'm going to leave it there. Of course, with this recorded format it's a little bit difficult for me to take questions, but I am very happy to answer questions, very happy to put you in touch with people doing this work, or happy to provide examples that may be useful to your particular institution or your particular use case. So please, by all means, get in touch; my email address is there, and we have other ways of getting in touch with the IIIF community. I appreciate your time. Thank you so much.