I've just got a brief intro here about Sam. Sam has been developing ways in which Wikidata and other small wiki projects can better talk to each other, like the Freopedia project. Freopedia began as a project to install QRpedia codes around Fremantle, Western Australia, that link people to articles on Wikipedia. It has since evolved into a wiki town project aiming to build comprehensive coverage of Fremantle on Wikipedia. I'm going to put a few links in the chat about Freopedia on Wikipedia and the project page, but now over to you, Sam.

Cool, thank you. So I'm going to talk about a couple of different things, all around using Wikidata on non-Wikimedia wikis. There's a bit of confusion around Freopedia and Freo Wiki; don't worry about that. This is more of a "how does it all work" talk than a revisiting of the problems or concerns around Freopedia.

So basically, on most Wikimedia wikis you can link a page to a Wikidata item. We all know how to do that, I think. I've got too many tabs open here. For instance, here's a Wikisource page, and it's linked to a Wikidata item. So we have a Wikidata item that defines a bunch of stuff, and we have a bunch of sitelinks to particular wikis. That's cool.

The first thing I wanted to talk about was this template, the Wikidata link template. This is on Wikisource, and similar things exist on lots of wikis. You give this template an ID number, a Wikidata ID, and it will look up the label for that item and display it as a link. So far, so normal. But the clever thing it does is look for a link on the local wiki. So it looks for a Wikisource link first, because we're on Wikisource, and if it finds one, it links you there. Then it looks for a Wikipedia link, and if it finds that, it links there. And then it looks for a Commons link.
And then finally, if it can't find anything else, it links to the item on Wikidata itself. If, after the link has been created, someone goes and creates one of those other pages, the link will change and point to the Wikipedia article or the Wikisource page or whatever it might be. So that was the beginning of this work. It was specifically about marking up transcribed documents, so that you can mark up people's names, places, things like that, with an identifier and not have to worry at that point about where the link is going. You might be linking to a place whose article doesn't yet exist, so you can name it by ID, and later on the system will automatically update and link to the right Wikipedia article.

From that, the idea was to use it on Wikispore. I talked a tiny bit about Wikispore in Sydney. It's a newish project that aims to provide a place to experiment with content. A little bit like Incubator is for languages, Wikispore is for different types, different genres, of content. It has different areas called spores. The Bio spore covers biography and aims to have biographies of people who are not notable enough for Wikipedia, but that's beside the point. The links in the infobox here are all using this Wikidata link system: in this case, we link to pages on the same site, Wikispore, and pages that don't exist on Wikispore link to, in this case, Wikidata, the nearest thing there is.

The crucial point is that Wikispore isn't linked to Wikidata. If we look at this person's page on Wikidata, there is no sitelink to Wikispore, and yet Wikispore is able to look up and reuse all of the information from Wikidata. That's being done with an extension called UnlinkedWikibase.
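The fallback order the template walks through (local wiki first, then Wikipedia, then Commons, then the Wikidata item itself) can be sketched roughly as follows. This is a hypothetical Python model, not the template's actual Lua code, and the function and site-table names are my own:

```python
# Hypothetical sketch of the Wikidata link template's fallback logic.
# Given an item's sitelinks, prefer the local wiki, then Wikipedia,
# then Commons, and finally fall back to the Wikidata item page itself.

def resolve_link(item_id, sitelinks, local_wiki="enwikisource"):
    """Return the best available URL for a Wikidata item.

    sitelinks maps a site key (e.g. "enwiki") to a page title.
    """
    preference = [local_wiki, "enwiki", "commonswiki"]
    sites = {
        "enwikisource": "https://en.wikisource.org/wiki/",
        "enwiki": "https://en.wikipedia.org/wiki/",
        "commonswiki": "https://commons.wikimedia.org/wiki/",
    }
    for site in preference:
        if site in sitelinks:
            return sites[site] + sitelinks[site].replace(" ", "_")
    # Nothing local found: link to the item on Wikidata itself.
    return "https://www.wikidata.org/wiki/" + item_id

# If a Wikipedia article exists but no Wikisource page, we link to Wikipedia:
print(resolve_link("Q42", {"enwiki": "Douglas Adams"}))
# → https://en.wikipedia.org/wiki/Douglas_Adams
```

The point of the ordering is the same as in the talk: the reader stays on the most local, most relevant wiki that actually has a page, and the Wikidata item is only the last resort.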
So Wikibase is the software that runs Wikidata; "unlinked" Wikibase means there are no sitelinks. The extension provides ways of bypassing the sitelinks: you can get entities, run queries within wikitext, and do various things. That's the crux of this technical exercise, and it enables a bunch of things. For instance, the infobox here: if I just edit this page, you'll see there is no information in the infobox. It is just one single Wikidata item ID, and all of the other information is coming from Wikidata. That's not completely unknown in the Wikipedia world. English Wikipedia is slowly getting there; other language Wikipedias are already pretty advanced, and some of them build their infoboxes completely from Wikidata like this. But there aren't any other wikis that are not linked to Wikidata doing it, so that's the new thing here. You can see it's also got the authority control. There are a bunch of other ideas about how to enrich this; for example, someone's doing a lot of work on graves and cemeteries, using Wikidata as the basis for a sort of Find a Grave type of project, and various other things like that.

Another example of this is another wiki that I work on, ArchivesWiki. Similar idea. So if I edit this... sorry, I'm on the wrong page, aren't I? I've got too many pages open. Maybe it was here. I should have prepared a bit better, sorry; live demos never go according to plan. So this is an example of an infobox where, if I go to the edit view, you can see the infobox does contain a bunch of stuff, but it also contains a Wikidata ID. Sometimes you want your infoboxes to be built from Wikidata, but sometimes you're working on content that doesn't belong on Wikidata, and you can mix and match: some of this stuff is coming from Wikidata and some is coming from the local wiki. That's a useful middle ground, I think. Yeah, where are we?
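The mix-and-match behaviour described above, where locally supplied infobox values take precedence and anything missing is filled in from the Wikidata item, might be modelled like this. On the wiki this is done by the UnlinkedWikibase parser functions in wikitext; the Python below is only an illustrative sketch, and the lookup is stubbed with fake data:

```python
# Sketch of mixing local infobox parameters with Wikidata-derived ones.
# Locally supplied values win; anything missing is filled from the item.

def fetch_from_wikidata(item_id):
    """Stand-in for an UnlinkedWikibase entity lookup (stubbed here)."""
    fake_store = {
        "Q42": {"name": "Douglas Adams", "born": "1952", "occupation": "writer"},
    }
    return fake_store.get(item_id, {})

def build_infobox(local_params):
    item_id = local_params.get("wikidata")
    values = fetch_from_wikidata(item_id) if item_id else {}
    # Local wiki values take precedence over Wikidata ones.
    values.update({k: v for k, v in local_params.items() if k != "wikidata"})
    return values

# A local override plus a Wikidata ID: the local "born" wins,
# the rest comes from the item.
box = build_infobox({"wikidata": "Q42", "born": "11 March 1952"})
print(box)
```

The design point is the "useful middle ground" from the talk: content that belongs on Wikidata comes from Wikidata, and content that doesn't can stay purely local, in the same infobox.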
So the other cool part of this is that you can run SPARQL queries: you can actually put a SPARQL query within the wikitext and then do interesting things with the results. Here, for instance, is an example from last year's GovHack competition that a few of us took part in. The map at the top here, all of these dots: this is a SPARQL query being run on Wikidata. We give it the bounding box of this part of the map, and it returns all of the Wikidata items within that, and we display them all here. That's not something you can do within a Wikimedia wiki, and some of this stuff definitely doesn't scale: if we were to allow that on Wikimedia wikis, all sorts of problems would happen, because the Wikidata Query Service is quite slow in a lot of ways. But for small wikis it works really well. And I think that using, on a smaller wiki, the same systems, the same syntax, the same conceptual models of pages and how they link, is really helpful as a way of building skills that are still useful across the Wikimedia world. All of these other boxes down here are similar sorts of queries; this one is against the State Heritage Office, and at the bottom we've also got another one that is pulling images from Wikidata and displaying them in a gallery.
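A bounding-box query of the kind described runs against the Wikidata Query Service's geo box service. The exact query from the demo isn't shown in the talk, so the one below is an illustrative reconstruction (with made-up coordinates), wrapped in a small Python helper so the box corners can be parameterised:

```python
# Builds a SPARQL query asking the Wikidata Query Service for every item
# with coordinates (P625) inside a bounding box, as in the map demo.
# The coordinates used in the example call are illustrative only.

def bbox_query(west, south, east, north):
    return f"""
SELECT ?item ?itemLabel ?coords WHERE {{
  SERVICE wikibase:box {{
    ?item wdt:P625 ?coords .
    bd:serviceParam wikibase:cornerSouthWest "Point({west} {south})"^^geo:wktLiteral .
    bd:serviceParam wikibase:cornerNorthEast "Point({east} {north})"^^geo:wktLiteral .
  }}
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en" . }}
}}
"""

# Roughly a small patch of Western Australian coastline:
print(bbox_query(115.73, -32.07, 115.77, -32.03))
```

On the wiki itself, the query text would sit inside the wikitext and the extension would send it to the query service and render the results; the helper here just shows the shape of the query.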
I had hoped to prepare an example of querying depicts statements from Commons, but at the moment the Commons query service requires you to authenticate as a user, which puts a third-party wiki in a slightly tricky position when making those requests. I think it is going to be possible, but I also think that service really isn't ready for reuse yet. I'm really excited about the possibilities of doing structured data on Commons and then querying it and reusing it outside of Commons. It's going to be amazing when we get there, which I think is not that far away. I don't actually know where that development is up to, but it's definitely happening.

Yeah, where am I going? Ah yes. So we can link small wikis to Wikidata. The other little development was this thing called the Redirect Manager, a slightly boring title for an extension. Everyone is familiar with redirects on wiki pages. A common way that I've been structuring things is by using what are called shortcuts on Wikimedia wikis, or just short redirect names. Let me start at the beginning: if you're on a given page about something, you define a redirect, or a shortcut; in this case it's "djw" for this page. Then, anywhere else on the wiki, you refer to that page by the shortcut. It's effectively giving the page an identifier that's unique and somewhat human-readable. And it means, through the magic of tools that are not yet available on Wikimedia wikis, that you can actually display the full page title from the target page. Then, as the page title changes, if you rename or move pages, you don't have to go around and update everything; it will automatically propagate the new title throughout. That is super useful for disambiguation, clarification, and things like that.
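The shortcut idea, a stable short redirect whose displayed title always tracks the current target, can be modelled with a toy in-memory version like this. On a real wiki the redirect table plays this role, and the page titles below are invented for the example:

```python
# Toy model of shortcut redirects: a stable short name points at a page,
# and displayed titles always reflect the redirect's current target, so
# renaming a page never breaks the links that use the shortcut.

class Wiki:
    def __init__(self):
        self.redirects = {}  # shortcut -> current page title

    def add_shortcut(self, shortcut, target):
        # Shortcuts act as identifiers, so they must be unique.
        if shortcut in self.redirects:
            raise ValueError(f"{shortcut!r} is already assigned")
        self.redirects[shortcut] = target

    def rename_page(self, old_title, new_title):
        # Retarget every shortcut that pointed at the old title.
        for s, t in self.redirects.items():
            if t == old_title:
                self.redirects[s] = new_title

    def display_title(self, shortcut):
        return self.redirects[shortcut]

w = Wiki()
w.add_shortcut("djw", "Diary of J. Wilson")   # hypothetical page title
w.rename_page("Diary of J. Wilson", "Diary of John Wilson")
print(w.display_title("djw"))
# → Diary of John Wilson
```

The uniqueness check mirrors what the Redirect Manager's autocomplete does: it tells you whether a shortcut is already assigned before you create it.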
Obviously you can just go and create a redirect by hand to do that, but there is this toolbar item, the Redirect Manager. You open it and it lists all of the redirects coming to the current page, and it gives you a little form where you can add new ones. Because of the way the autocomplete works, it will tell you whether the one you're about to add already exists; it's "dj" in this case, so we know that's unique and not already assigned, and you can select it, and it lists them all here. I've found it a really useful extension for creating unique identifiers for pages.

So that's that little tool, and that's about it, really. As far as playing with any of this goes, I would encourage anyone who wants to have a go at these tools: Wikispore is a great place; the other place I was showing, ArchivesWiki, feel free to come and register an account there and do whatever you want; and Freo Wiki, I'm slowly working away at it, trying to import all of the heritage list data, and that's a work in progress. So feel free to have a look at that. I think that's about all I wanted to go over.