Okay, me again. Hello. Rob Sanderson from Los Alamos National Labs. This is a project which is using Open Annotation in a slightly different way, but with related content to what Bernard was talking about. We are looking at digital facsimiles of medieval manuscripts. Medieval manuscripts these days are being increasingly digitized all around the world, for many reasons, including preservation. There was a horror story recently where some African manuscripts were completely destroyed in an attack, in a fire, and thankfully they had already been digitized, so we still know what those manuscripts said. Also for scholarly access, for digital exhibitions, for the general population, and just for the goodwill that's generated by digitizing these beautiful objects and putting them out on the web.

The problem is that while there have been many projects working in this area, they have trouble sharing the image resources, they have trouble sharing code, and they have trouble sharing expertise and best practices for what to do. You know the XKCD comic: you have 14 standards, so you merge all of those standards and thereby create a 15th standard. So we have an interoperability project run at Stanford to try and build an overarching framework using standards, such that we don't have to continually reinvent the page-turning wheel just to display a medieval manuscript on the web. We have several partners, including some of the largest holders of medieval manuscripts: Stanford, Cambridge, Oxford, Harvard, Yale, the British Library, the Bibliothèque Nationale in France, the UK National Archives, the Walters Art Museum, e-codices (a Swiss consortium of manuscripts), Los Alamos, and the Mertens. The asterisks there mark the partners who are actively building tools, and I'm going to show you the technical proof of concept towards the end of the presentation.

When we started the project, we sat down and tried to rethink what it means to do a digital facsimile of a medieval manuscript in the web sort of world. The key points were that it should be distributed in this global space, because there are images, texts, audio and so forth distributed around the web that you would want to bring together to create a rich experience for the user. It should be interactive, this consumer-as-producer notion again; crowd-sourced transcription would be one possible mechanism. There are not enough scholars in the world to transcribe every single copy of every single manuscript, but as we've seen with things like Zooniverse and Galaxy Zoo, there are a lot of very interested and talented lay people who are very willing to help out with that sort of thing. It should be interoperable: seamless, or at least intelligently seamed. And open source, open content, shared development, all of that good stuff. So yes, 100% buzzword compliant.

Just displaying images on the web isn't particularly interesting or challenging. What we want is a rich environment where all of the resources that we know about for these manuscripts can be brought together. The first point, of course, is transcribing the image, so that you can read it even if you don't understand the language or can't make out the writing as clearly as you would like. Being one of the co-chairs of the Open Annotation work, and with a lot of past history in annotation, it was obvious that we had a hammer, and manuscripts looked like a pretty interesting nail. So what happens if we just annotate the image?
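To make that concrete, here is a minimal sketch of what "just annotating the image" might look like, written as a TypeScript object loosely following the Open Annotation JSON-LD serialization. All of the URIs, coordinates, and text are hypothetical placeholders; the thing to notice is the assumption hidden in the target.

```typescript
// A naive transcription annotation: the body is the transcribed text,
// the target is a region of the image file itself, selected with an
// xywh media fragment. All URIs here are hypothetical examples.
const naiveAnno = {
  "@context": "http://www.w3.org/ns/oa-context-20130208.json",
  "@id": "http://example.org/anno/1",
  "@type": "oa:Annotation",
  "motivatedBy": "oa:commenting",
  "hasBody": {
    "@type": "cnt:ContentAsText",
    "chars": "In principio erat verbum",
  },
  // The hidden assumption: the image *is* the page.
  "hasTarget": "http://example.org/images/page7.jpg#xywh=100,120,640,80",
};
```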
Well, which image are we talking about? This is a manuscript where there's a fold-out that has some text running in a semicircle, but if you fold it back down the other way, that same piece of parchment appears in a completely separate image, associated with a completely separate page. So we needed to get over some of these naive assumptions, like image equals page. Some other examples: sometimes only parts of pages, like fragments, could be digitized. The image may not exist at all: if the page hasn't been digitized, perhaps it wasn't considered interesting enough. And there is the particularly interesting case of the multi-spectral analysis of the Archimedes Palimpsest: the top text is 13th century, the bottom text is 10th century, and you can only see the latter under certain lighting conditions.

So we have a canvas paradigm, like in Photoshop or the HTML canvas, that represents the page, and that's pretty easy to implement, as it turns out. What we need in order to populate that canvas is some way of associating resources with it, which sounds like an annotation. Maybe you want to associate multiple resources, if there are multiple digitizations, and this too can be implemented in a relatively straightforward way.

This is where it becomes more interesting. If there's more than just an image, if you have text, then you want to associate that text with the part of the canvas where it should be displayed. So you need to be able to select the part of the text, if it's in a long file, and the part of the canvas, and then, together with the annotation for the image, they can be brought together. If you have more information, such as line-by-line bounding boxes, you can have lots and lots of annotations, each of which transcribes an individual line. That can then be used to bring everything together in an overlay environment such as this one, where you can see the layout, the mise-en-page, of the original with the text on top of it, or in a more traditional side-by-side view, where you can see all of the image and read all of the text at the same time.

This is the web, and we don't need to be limited to what we can reproduce from paper. A lot of medieval manuscripts have musical information in them, such as this one, which is a flyleaf in the Cambridge Parker Library. This was a 400-page manuscript, and this is the only page that survives, because it was reused as a flyleaf to protect another volume. People have transcribed this music, people have performed the music, people have transcribed the text. Wouldn't it be great if we could bring all of that information together to provide a rich environment where scholars and interested layfolk can see and hear how that text would have been performed? We can do exactly that with HTML5: you simply overlay segments of the audio file on top of the canvas, which then gets displayed, possibly like this, where you click on the play button and it plays the appropriate part of the music (there is a sketch of that below).

And then there's all the commentary, of course. If you want to say something about the guy in the picture there, you have to say: here it is, in the picture, on the canvas. This also comes back to the motivation notion from earlier: we have some annotations that are painting the canvas and some that are describing it. Barring a particularly terrible implementation of an annotation viewer, you can do something like that, perhaps.
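Pulling those threads together: here is a minimal sketch, again as TypeScript objects with hypothetical URIs and coordinates, loosely based on the SharedCanvas and Open Annotation vocabularies, of a canvas with an image painted onto it, one transcribed line, and one piece of commentary. The point is the motivations: sc:painting for things that are part of the page, oa:commenting for things that are about it.

```typescript
// The canvas is an abstract rectangle with its own coordinate space;
// it is not any particular image. All URIs below are hypothetical.
const canvas = {
  "@id": "http://example.org/ms/canvas/7",
  "@type": "sc:Canvas",
  "height": 1800, // abstract canvas units, not pixels of any one image
  "width": 1200,
};

// The digitized image is associated with the canvas by an annotation
// whose motivation says "paint this body onto that target".
const imageAnno = {
  "@type": "oa:Annotation",
  "motivatedBy": "sc:painting",
  "hasBody": "http://example.org/images/page7.jpg",
  "hasTarget": canvas["@id"],
};

// One line of transcription: another painting annotation, targeting
// only the region of the canvas where that line sits.
const lineAnno = {
  "@type": "oa:Annotation",
  "motivatedBy": "sc:painting",
  "hasBody": { "@type": "cnt:ContentAsText", "chars": "In principio erat verbum" },
  "hasTarget": `${canvas["@id"]}#xywh=150,210,900,60`,
};

// Commentary is *about* a region rather than part of the page, so it
// carries a different motivation, and a viewer renders it differently.
const commentAnno = {
  "@type": "oa:Annotation",
  "motivatedBy": "oa:commenting",
  "hasBody": { "@type": "cnt:ContentAsText", "chars": "Note the figure here." },
  "hasTarget": `${canvas["@id"]}#xywh=60,400,220,300`,
};
```

A viewer can then dispatch on the motivation: painting annotations are composited into the canvas's coordinate space, while commenting annotations go into a margin or a popup.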
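And for the music, a sketch of the HTML5 side: assume the annotation attaches a time segment of a recording, expressed as a #t media fragment on the audio URI, to the region of the canvas with the staves. The play button then only needs to seek and stop. The URL and timings here are made up for the example.

```typescript
// Play just the stretch of the recording that an annotation overlays
// on a canvas region. Browsers honour the start of a #t media
// fragment, but support for stopping at its end varies, so we also
// stop manually when playback passes the segment boundary.
function playSegment(audioUrl: string, start: number, end: number): void {
  const audio = new Audio(`${audioUrl}#t=${start},${end}`);
  void audio.play();
  audio.addEventListener("timeupdate", () => {
    if (audio.currentTime >= end) {
      audio.pause(); // reached the end of the annotated segment
    }
  });
}

// Hypothetical wiring for a play button drawn over the staves:
// playSegment("http://example.org/audio/flyleaf-chant.mp3", 12.5, 31.0);
```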
So, I have not yet received the red flag. Yeah, there we go. So thank you all very much. If you go to Shared Canvas, you can play with the technical demos; in the project we are currently working on a much nicer interface for rendering and commenting. We'll take questions at the end. Thank you.