So, Simeon and I are two of five editors, so I'll just mention Mike Appleby, who is here somewhere. He can raise his hand if he's in the room. Maybe he's not. Oh, there he is, way back there. And then Tom Crane from Digirati and Rob Sanderson, who both unfortunately could not be here. This is a little bit different from what we've done in past years, where we've done a walk-through of the APIs in detail; instead, we're going to provide an update and a bit of a delta relative to the last time we talked to you about the APIs and our progress. Briefly, I'm going to talk through the current specifications, what they are, the changes, and the beta specifications that Josh mentioned this morning. We'll talk about the Auth and Search APIs briefly and the changes to come with those. Then I'll hand it over to Simeon, who will talk about the different discovery initiatives and our relatively new Technical Review Committee, and then we should have plenty of time for a few questions. As for our current API specifications, you've heard a lot already about the Image and Presentation APIs in particular. As you can see, everything here is at least two years old in terms of its last minor release, and Search goes back over three years. They're, I think, pretty mature in terms of what our goals were at those points in time, and as you'll see, we've done a lot of building upon them. So the biggest news, to repeat what Josh was saying this morning, is that with generous funding from Mellon and support from the British Library, we've moved the Presentation API and the Image API from the 2.1.1 releases of two and a half years ago to this 3.0 beta. That's exciting, but of course there's a lot to talk about there, and implementation work that needs to be done.
This is by no means an exhaustive list of the changes we've made, but I think it's a good high-level overview of our motivations for moving from 2.1 to 3. The single biggest motivator, you won't be surprised, was the addition of audio-visual material support. Up to this point, most of what we've seen today has been around image-based resources, but the use cases we heard, not just working with AV on its own, but integrating AV material with images and with annotations, made this a clear target for this next major release. Internationalization was something that was always there in our specifications, but not particularly consistent and not everywhere. If we've done it correctly, any place you can provide a string or a label, you can now provide it in multiple different languages. And I would say internationalization is just one example of what we've colloquially called developer happiness. Our specifications, these two in particular, Image and Presentation, have seen a lot of implementation, and we've heard a lot of feedback: certainly feedback around functionality and things people would like to be able to do, but also around what the specifications are like to work with. So we looked for patterns that were not just more consistent but idiomatic, if you will, to the way developers, JavaScript developers in particular, like to work. And I would say that's a space that has been developing a lot over the time we've been developing these APIs as well. So we took a pretty close look at what it would be like to be a developer consuming these documents that are being published in the IIIF namespace. Of course, our standards don't stand on their own; we rely on three big upstream standards.
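To make the internationalization point concrete, here is a minimal sketch, in Python for illustration, of the Presentation 3 style language map: any label can carry values keyed by language, with "none" for strings that have no natural language. The document values and the fallback helper are invented examples, not taken from the spec.

```python
# A label as a language map: each key is a language tag, each value a
# list of strings. "none" marks strings without a natural language.
label = {
    "en": ["Whistler's Mother"],
    "fr": ["La Mère de Whistler"],
    "none": ["1871"],  # e.g. a date, which has no language
}

def pick_label(label_map, preferred):
    """Return a display string for the preferred language, falling back
    to whichever language appears first -- a common client-side pattern."""
    if preferred in label_map:
        return label_map[preferred][0]
    first_language_values = next(iter(label_map.values()))
    return first_language_values[0]
```

A viewer asked for French would show "La Mère de Whistler"; one asked for a language the publisher didn't supply simply falls back rather than failing.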
The first is JSON-LD, of course, and there's a 1.1 community draft of that that we take some patterns from. All of our paging and various idioms come from the Activity Streams standard. I should say all of these are published by the W3C. Activity Streams hit 2.0 since our last major release. And then, of course, annotation moved from the Open Annotation data model to the Web Annotation data model. Finally, when we were a good way into all this writing, we realized: wow, we have a lot more examples, and a lot of things that aren't exactly normative but that we want to explain, implicit ways of doing things, and we can't cram all of these into one really quite gigantic specification document. So we came up with the notion of a cookbook, moved that material out of the specifications, and then did the same with an extensions registry. These two parallel registries, if you will, are really the place for developers to go when you want to see how to do something or how something is done. That also has the added benefit of pushing some of that work back out to the community, because we know there are use cases and patterns we may have talked about but haven't fully seen through, and anybody in the community can create a pull request and have their recipe, following the metaphor, vetted. So there are two other APIs that we haven't heard mentioned much today: our relatively newer Search and Authentication APIs. They're fine, but once these 3.0 releases get out of beta, they will be inconsistent with some of that pattern and idiomatic work, the developer-happiness material I was talking about.
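Since the move from Open Annotation to Web Annotation is mentioned above, here is a minimal sketch of a W3C Web Annotation of the kind IIIF builds on, written as a Python dict standing in for the JSON. The URIs and body text are invented for illustration.

```python
# A Web Annotation: a textual comment targeting a rectangular region of
# a canvas, expressed with the W3C Web Annotation Data Model vocabulary.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "https://example.org/anno/1",       # made-up identifier
    "type": "Annotation",
    "motivation": "commenting",
    "body": {
        "type": "TextualBody",
        "value": "A tower on the hillside",
        "format": "text/plain",
    },
    # Media fragment syntax selects the region: x,y,width,height
    "target": "https://example.org/iiif/canvas/1#xywh=100,100,300,200",
}
```

The same shape, with a IIIF Canvas as the target, is what carries comments, transcriptions, and the AV use cases discussed earlier.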
It won't be the greatest experience trying to do Search and Auth 1.0 alongside Presentation and Image 3.0. Not impossible, but not terribly tidy. So assuming we get over the finish line from beta to release this fall, which is our goal, then in 2020 we'll take on the work of bringing Search and Auth up to the same place. Again, there are some upstream standards we want to deal with, and a few minor bugs and additional use cases that have emerged, but these will really be more like maintenance releases, in that we're not planning to add a lot of features. Because there will be breaking changes, though, properties that were renamed, things that were singletons that are now lists, by our own convention they'll be called Search and Auth 2.0. There's sometimes some scariness that comes with a major release, but rest assured that won't be the case here; it should be a fairly straight path to migration. With that, I'm going to turn it over to Simeon to talk about change and everything else. Thank you, John. Which button do I press? I don't want to break the slides. So, a couple more bits of API work that I want to talk about, both of which are not 1.0 releases; they're 0.x development releases according to our nomenclature. The first goes by the name of Discovery, which in this case means discovery for machines: how do I, wielding one system, say a search engine or an aggregator, discover all of the IIIF resources on some number of other sites? This is a key output of the Discovery Technical Specification Group, and it's been through three stages of revision so far, from 0.1 starting in the middle of 2018 through to 0.3 now, and I think it's feeling quite mature.
I would guess a 1.0 release will probably happen early in 2020, based on the fact that people will be a little busy with the 3.0 Image and Presentation releases in the immediate future. This specification follows on from the ideas of ResourceSync, but instead of using that format, it leverages W3C Activity Streams, which gives us a convenient JSON representation that matches our other specifications while still supporting different levels of synchronization: the first level being "here's a list of all my resources", the second being "here's an update of changes to my resources", and the third being "here's a complete log of all changes that have happened to all of my resources over time". It's important to understand that the documents you transfer as part of this API are not descriptive metadata documents themselves; they're metadata about the IIIF resources, in the sense of dates of change and what their URIs are, and within that you also see links out to structured metadata that might be indexed, say, for search. So, I hear you ask: that's discovery for machines, but what about discovery for real human people? There was a productive workshop last summer held at Stanford, and there is an intention to start a new community group to explore the use cases, do experiments, and look at what it means to have shared metadata and transformations of metadata to provide a discovery experience for people. That will rely on the discovery-for-machines part to do the aggregation from perhaps many IIIF sites, but it provides another layer of interoperability necessary to support, for example, a human search experience. Then there's another API that's been under development, which used to be called "import to viewers" but is now called the Content State API.
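The change-log idea above can be sketched concretely. Below is a Python stand-in for the kind of Activity Streams page the Discovery work describes: an ordered collection of Create/Update activities about IIIF resources, carrying only what changed, when, and at which URI. The URIs, dates, and the `changed_since` helper are invented for illustration; the exact property set is defined by the draft specification, not by this sketch.

```python
# An Activity Streams page of change activities. Note these describe
# *changes to* resources (type, URI, timestamp), not the resources
# themselves -- a consumer follows object ids to fetch the manifests.
activity_page = {
    "@context": "http://iiif.io/api/discovery/0/context.json",
    "id": "https://example.org/activity/page-0",   # made-up URI
    "type": "OrderedCollectionPage",
    "orderedItems": [
        {
            "type": "Update",
            "object": {"id": "https://example.org/iiif/manifest/1",
                       "type": "Manifest"},
            "endTime": "2019-06-01T12:00:00Z",
        },
        {
            "type": "Create",
            "object": {"id": "https://example.org/iiif/manifest/2",
                       "type": "Manifest"},
            "endTime": "2019-06-02T09:30:00Z",
        },
    ],
}

def changed_since(page, cutoff):
    """Return URIs of resources touched after the cutoff timestamp --
    the 'here's an update of changes' level of synchronization.
    ISO-8601 strings in the same Z-suffixed form compare correctly."""
    return [activity["object"]["id"]
            for activity in page["orderedItems"]
            if activity["endTime"] > cutoff]
```

An aggregator that last harvested on 1 June would re-fetch only manifest 2, rather than re-crawling the whole site.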
This hasn't actually been formally published in either of its two versions on the IIIF website, but you can see it; there's a link at the bottom (and I guess we'll be sharing all these slides) to a preview that will hopefully be merged into the public site sometime soon. This is another output of the Discovery Technical Specification Group. The hope is that we can replace the old drag-and-drop pattern for that sort of functionality with a more accessible and general solution, and in that generality also provide the ability to say: okay, I've got my current browser in this state, zoomed in on this part of this resource, how do I save that in an interoperable way? We've long been able to do that within particular viewer environments, but not in a way that lets you take that state from one viewer and put it into another. And if you think about it, that information is the same as the information you need for the key use case of "I've found my resource", perhaps the face that was identified by AI in this image, and I want to click through to it from my search result. So the spec will cover both of these use cases. Anyway, I want to spend a few minutes now talking about the Technical Review Committee, which I'm kind of excited about, I'm afraid. The editors, ever since the creation of IIIF, have had a responsibility to try to incorporate an understanding of the community's will, community feedback, and a sense of community consensus into the specifications we've created. We've now formalized this process with the creation of the Technical Review Committee, and I think one of the very valuable parts of this is that it will increase engagement in the creation of the technical specifications. One of the things we've sometimes seen in the past is that we haven't had engagement at the right level before a spec has been released; then you get a release, there's a flurry of looking, and obviously things are found that could have been spotted earlier.
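The "save my current state and hand it to another viewer" idea can be sketched as follows: the state is just an annotation targeting a region of a canvas, serialized and base64url-encoded so it can travel in a link or URL parameter. The URIs here are invented, and the encoding helpers are an illustrative sketch of the pattern in the draft, not its normative definition.

```python
import base64
import json

# A content state: an annotation whose target records which canvas,
# which manifest, and which region the viewer was looking at.
state = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "id": "https://example.org/states/1",        # made-up URI
    "type": "Annotation",
    "motivation": ["contentState"],
    "target": {
        "id": "https://example.org/iiif/canvas/4#xywh=1000,800,400,300",
        "type": "Canvas",
        "partOf": [{"id": "https://example.org/iiif/manifest/1",
                    "type": "Manifest"}],
    },
}

def encode_state(obj):
    """Serialize to JSON and base64url-encode, without '=' padding,
    so the token is safe to put in a URL."""
    raw = json.dumps(obj).encode("utf-8")
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

def decode_state(token):
    """Restore padding and decode back to the annotation."""
    padded = token + "=" * (-len(token) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

A receiving viewer decodes the token, opens manifest 1 at canvas 4, and zooms to the `xywh` region, exactly the "click through from a search result" use case above.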
So the Technical Review Committee is charged with reviewing all new specifications and updates to existing specifications. These updates come either from Technical Specification Groups or from the editorial board. The committee is also charged with reviewing cookbook entries, registry entries, and implementation notes. As John mentioned, we are hoping for greater community engagement in the creation of those, so this provides a nice review and refinement process there. And finally, the TRC is charged with reviewing proposals for the formation of new TSGs. As for membership, there are three classes. First, there's a group of people who are members by virtue of some other role in IIIF: the managing director, the technical coordinator, the editors, and, where it doesn't overlap with the others, a chair from each active Technical Specification Group; there are currently three active groups: AV, Text Granularity, and Discovery. Then we have the largest group of members, representatives from IIIF-C full Consortium members. Each of those may offer one person; it's not a requirement but a benefit of membership. From the associate members, of which we have I think only two at the moment, there are up to half as many seats on the TRC as there are associate members, and those people serve 18-month terms, elected from that group. And finally, we have five slots for community members, perhaps unaffiliated with any institution or affiliated with a non-member institution, and they're elected from self-nominations. The TRC works through monthly calls. We've done some work to be inclusive of people from Europe, the US, and Japan by changing our call times, but it's also possible to participate fully in the TRC even if you can't join the calls; everything happens via asynchronous means as well.
We do have a requirement that TRC members must participate, however; they don't need to join the calls, but they must express an opinion on the issues up for consideration to maintain good standing, and if they fall out of good standing they have to participate for a certain amount of time before regaining it. So I'd just like to go through a quick summary of some recent TRC actions and how the committee works, to give you a flavour. We do everything on GitHub. Here is a particular issue that was pretty straightforward and non-controversial; the TRC voted on it recently and it got 34 thumbs up, which means everyone agreed. Members vote on an issue with a simple plus one; an "I don't know", which is a kind of quizzical-looking face; or a minus one, "I disagree with it". We've had other issues which have led to more extended discussion, and I think that's pretty useful. This particular one concerned how, in the Presentation API, we define whether behaviours are inherited by enclosed objects as you go down the tree; this was not specified properly in version 2, and we wanted to come up with some rules. So here there was some discussion, and we ended up with 21 plus-one votes, four negative votes, and two "I'm not sure". That is still a supermajority of two thirds in favour, so the issue went through and was approved. We've so far had only one issue that has been rejected. There was a simple majority of people expressing a plus or a minus, but it was close to a tie of plus ones to other opinions, and it was not a supermajority by any way of counting. The rule of the TRC is that such an issue goes to the ex officio members to make a decision, and in this case we debated it and thought that, although we could go with the simple majority, there was a lot of discussion and debate which really needed more airing, so we rejected the issue, and it may come around again for later discussion. To me this felt like a great example of the TRC working.
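The tallying just described can be summarised in a few lines. This is my reading of the process as presented in the talk, not official TRC procedure: an issue passes outright on a two-thirds supermajority of the plus/minus votes cast ("not sure" votes don't enter the ratio), and anything short of that is referred to the ex officio members.

```python
def tally(plus, minus, unsure=0):
    """Sketch of the TRC decision rule as described in the talk:
    two thirds of decided (+1/-1) votes passes; otherwise the issue
    is referred to the ex officio members."""
    decided = plus + minus
    if decided and plus / decided >= 2 / 3:
        return "approved"
    return "referred to ex officio"
```

The inheritance-of-behaviours example above (21 for, 4 against, 2 unsure) clears the bar at 21/25; the rejected issue, with plus ones roughly tied against other opinions, does not, which is why it went to the ex officio members.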
The process is transparent; it all happens on GitHub, like everything else within the IIIF specification work. The only difference here is that only TRC members get their votes counted. And I think with that we have some time for questions.