My name is Dominic, and I'm the data and partnership strategist for DPLA. Just some quick background first: what is DPLA? The Digital Public Library of America is a nonprofit member network that aggregates digital collections from over 4,000 contributing cultural institutions in the United States. It's essentially a portal for searching across the libraries, museums, and archives of the United States. Since the start of 2020, we have run what we call our digital asset pipeline to Wikimedia. A quick summary of what we've accomplished so far: since 2020, DPLA has become the single biggest contributor to Wikimedia Commons. We have uploaded over 3.7 million files to Wikimedia Commons, generating 250 million page views, with over 300 contributing institutions across the United States. And it's not just an upload project: we've actually developed technology for continually synchronizing, over time, the metadata for the files that we provide, and that's what I want to talk about more. This synchronization project makes use of structured data on Commons. In addition to those 3.7 million uploads, I took a look today, and our bot account has actually made over 15 million other edits, because we're constantly adding new metadata and updating the existing structured data statements. That represents on the order of 52 million structured data statements; it's hard to tell exactly, because we're maxing out what the query service can actually handle. The technology we developed uses Wikimedia database queries, so I use Quarry for that, and the code for the bot is written in Python using Pywikibot. On-wiki, it relies on Lua-based templates to display the metadata, which I will show you right now. I have some tabs queued up, so I'm going to go through and quickly walk you through how the structured data part of this project works. This is an individual file upload.
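To make the mechanism concrete, here is a minimal, hypothetical sketch of the kind of request a Pywikibot-style bot would issue to add a statement to a Commons file's structured data: the Wikibase `wbcreateclaim` API call, targeting the file's MediaInfo entity (an "M" id). The function names, example ids, and token handling are illustrative assumptions, not DPLA's actual code.

```python
import json

def build_depicts_claim_params(media_id: str, qid: str, token: str) -> dict:
    """Build `wbcreateclaim` parameters adding a depicts (P180) statement.

    media_id: the MediaInfo entity id of the Commons file, e.g. "M12345".
    qid: the Wikidata item the file depicts, e.g. "Q42".
    token: a CSRF edit token obtained from the API (illustrative here).
    """
    numeric_id = int(qid.lstrip("Q"))
    return {
        "action": "wbcreateclaim",
        "entity": media_id,
        "property": "P180",  # "depicts"
        "snaktype": "value",
        # Wikibase expects the value as a JSON-encoded entity reference.
        "value": json.dumps({"entity-type": "item", "numeric-id": numeric_id}),
        "token": token,
        "format": "json",
    }
```

In practice a framework like Pywikibot wraps this request (including login and token handling), but the underlying API shape is the same.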
It comes from one of our partner institutions, the Natural Resources Conservation Service, uploaded through our hub Northwest Digital Heritage, which is the regional hub for the states of Oregon and Washington. You see here, as you'd expect on Wikimedia Commons pages, all of the data that we upload along with the image, and it comes from the catalogers at the source institution. When I look at the structured data tab, you'll see all of this data is actually represented as structured data statements. For each statement we use a qualifier saying that this data was determined by a GLAM institution at its website, and we provide a reference for every single DPLA-originating statement that uses the DPLA catalog record as the reference URL. So whenever we make changes, we will only change things that are supposed to exactly match the current state of DPLA's catalog. Using the reference statement like this allows the Wikimedia community to make changes to any of the structured data for a given DPLA upload, and we're not going to mess with it or override it in any way. So this is what one file looks like, with all the descriptive metadata represented entirely as structured data statements. One of the goals of this project, and what it has allowed us to do, which I'll show you, is to actually reflect changes back into the structured data. It allows us to easily detect changes: when we have something out of sync with the catalog, we just compare values across the two sources from the DPLA API, we make those changes, and they're immediately reflected in the actual wikitext of the page.
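The comparison step described above can be sketched as a simple diff: take the current DPLA catalog values and the statements the bot previously wrote (the ones carrying the DPLA reference URL), and report only the properties that have drifted. The field names and flat data shape here are hypothetical simplifications.

```python
def find_out_of_sync(dpla_record: dict, sdc_statements: dict) -> dict:
    """Return {property: (commons_value, dpla_value)} for every mismatch.

    dpla_record: property -> value as currently served by the DPLA API.
    sdc_statements: property -> value as currently stored on Commons,
    limited to statements the bot itself sourced (so community-added
    statements without the DPLA reference are never touched).
    """
    changes = {}
    for prop, dpla_value in dpla_record.items():
        commons_value = sdc_statements.get(prop)
        if commons_value != dpla_value:
            changes[prop] = (commons_value, dpla_value)
    return changes
```

Restricting the comparison to bot-sourced statements is what lets community edits coexist with the sync: anything without the DPLA reference simply never enters `sdc_statements`.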
Once we've finished migrating all the templates, anyway; this is showing you the ideal case here. If I hit edit at the top of the page, you'll see what I mean: all of the text, all the data that you're seeing there, was actually generated live, on the fly, from the structured data on Commons. Okay, so this is View it! It's another tool that I made together with my teammates, including Jamie; this was our team. The goal of View it! is to provide browsers, readers, and editors of Wikimedia projects easy access to all of the images on Wikimedia Commons representing the topics they're actually looking at. It's not limited to just what the editors of an article have curated for that article. The idea really comes out of the fact that structured data, and depicts statements in particular, allow us to draw these relationships, where you're looking at an article and, through the technology, can know all of the images on Commons that are tagged as depicting that subject. To start off, the tool documentation lives on Meta-Wiki: you can go to meta.wikimedia.org and search for the View it! tool. It is a user script, which means at this point you need to be logged into an account, and you add it using the very simple instructions there, just copying and pasting the code to a page in your account. What you will get is a set of tools and links on the page when you're viewing Wikimedia projects that will let you see more images than you normally would. So this is a quick screenshot; I'm going to quickly walk you through what that looks like in practice. Here we have the View it! tool page on Meta. I've gone through and installed the script using the instructions here, and I'm going to show you what that looks like on some pages.
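As a rough analogue of what the on-wiki Lua template does, generating the visible description from structured data rather than from hand-written wikitext, here is a hypothetical Python sketch. The statement shape, label lookup, and template-row format are all simplified assumptions for illustration.

```python
def render_rows(statements: dict, labels: dict) -> str:
    """Render structured data statements as information-template rows.

    statements: property id -> display value, e.g. {"P571": "1900"}.
    labels: property id -> human-readable label, e.g. {"P571": "Date"}.
    Unknown properties fall back to showing the raw property id.
    """
    lines = []
    for prop, value in statements.items():
        label = labels.get(prop, prop)
        lines.append(f"| {label} = {value}")
    return "\n".join(lines)
```

Because the display is derived from the statements at render time, an edit to a statement is reflected in the page immediately, which is exactly the "generated live on the fly" behavior described above.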
You can read through the article to see a lot of images that the editors have selected, or, if you want to see the images right away, or the reason you're looking at the topic is to see pictures of that thing, View it!, showing the images at the top, provides really easy access. You can expand it, like I already had before, for a little more real estate to view the images. What these images represent are images on Commons that either depict mangroves, meaning they have a depicts statement in their structured data in which the value is the Wikidata item that's linked to the Wikipedia article for mangrove, or are in the Commons category that is also linked as the category for mangrove from its Wikidata item. That data is being pulled live from the API. View it! also adds a new view tab to the top of the page. If you click that, it will give you a full-page gallery, which will infinitely scroll through images of this subject. So I've pulled up here the James Whitcomb Riley Museum Home, which is a museum about a mile from my home, and also one of the participating institutions in the DPLA project I was just talking about; its collections have been uploaded. You can see it's a bit of a shorter article, with one main image and a few in a gallery. As I read it, I noticed that View it! is showing me all of these images at the top that are historical photos, including some of the interior; none of the images in the article, as I came to it, were photos of the interior, even though there is relevant text on the page. If you can't read that, it says the interior woodwork is all hand-carved solid hardwood. So there's text on the page that relates to the interior of this building. What I want to show you here is also how View it! is useful for editors.
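The "depicts or in the linked category" retrieval described above can be expressed as a single Commons search. A minimal sketch, assuming the tool uses (or could use) MediaWiki's search API with the CirrusSearch `haswbstatement:` and `incategory:` keywords; the exact query View it! issues is an assumption here, and the QID and category name are placeholders.

```python
def build_viewit_search(qid: str, category: str) -> dict:
    """Build search-API parameters for files that depict `qid`
    (P180 statement) OR sit in the Commons category `category`."""
    query = f'haswbstatement:P180={qid} OR incategory:"{category}"'
    return {
        "action": "query",
        "list": "search",
        "srsearch": query,
        "srnamespace": "6",  # restrict results to the File: namespace
        "format": "json",
    }
```

Sending these parameters to the Commons API endpoint returns the combined result set live, which is why the gallery can show images no editor has placed in the article.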
So when I hit edit, you'll see all the images at the top now have little copy-to-clipboard icons, which allows me to quickly grab the one I want. I'm going to choose one that shows this interior woodwork, click that copy button, go to where I want to put it, and just paste it in there. It went down to the bottom because of the infobox, so I selected the left-alignment option there and hit publish. I've done this completely live, entirely using View it!, and made the save.