Hello, everyone. Welcome. Let's get settled in. This is the Wikimania 2019 Hackathon showcase. This is Siebrand, and I'm Rachel, and we'll be your guides today through the session. A couple of notes for the audience, then Siebrand will give a couple of notes to the presenters, and then we'll get into our lightning talks. For context, all of the projects presented here today were created, or worked on heavily, in the last five days during the pre-conference and Wikimania. The speakers have been instructed to make this session accessible for a non-technical audience, meaning that they will give you context about how their project fits into the Wikimedia movement generally and which audiences it is serving. If you'd like to follow along and find more information, you can follow our Etherpad, which is linked from the session page on the wiki. As another note, this session is being live-streamed, so you can send it to your friends or watch it later; anyone on stage will be part of the live stream. Passing it off to Siebrand. Thank you.

Thank you, Rachel. For everyone who's seen one of these showcases before, the format hasn't changed. The speakers have two minutes. After two minutes, there will be an alarm, and after two minutes and 30 seconds, we'll be yelling you off the stage. To the presenters: please check whether the notes we take in the Etherpad about your presentation are correct, and add anything you want, because we're going to archive the written text about your presentation on Phabricator. Please also remember that about half of the audience here at Wikimania is non-technical, so use plain words and explain things well. Finishing your sentences is also very important. When you're next to present, make sure that you line up where Rachel is; she's raising her hand over there. We have a hands-free microphone and a computer here that you can use.
So without further ado, the first presentation, by Harmonia and Lucas.

Hi, I am Harmonia, and this is Lucas. There is this really new thing on Commons, which is wonderful, called structured data. For example, I am working on figure skating, and I want to use structured data: I just say, this is a picture of Caroline Zhang doing the body position called the Biellmann spin. I want to do that for all my figure skaters doing Biellmann spins on Commons. But right now we have a difficulty, which is that most of the figure skaters don't have just one category; they have subcategories. So it's not easy for me to see which pictures I actually have of a skater doing a Biellmann spin.

So we can use an existing tool called PetScan to find all the images in a skater's category and its subcategories. We run it on Commons in the File namespace, and there we have all of the images. From these images, I want to select just the ones I need, so we created a wonderful new tool called PagePile Visual Filter, which does exactly that. In PetScan, you can export the result as a page pile, which is another tool that just keeps track of lists of pages. Here we have 108 pages as a page pile, and we can feed that, by its ID, into the new tool, PagePile Visual Filter, and say: I would like to filter page pile ID 25,000-and-something. And there it is: you see the pictures, you can just click on them to select what you want, and then say "filter", and you create a new page pile with only the files you actually want, which you can then use in other tools, like AC/DC, to add the statements. So you just put the list of files into AC/DC and say: these depict this skater doing the body position Biellmann spin, and you can put that on Commons. Or you can still use the category system, with the tool QuickCategories: you put in the page pile ID and say, I want to add the category Biellmann spin to all these pictures, and that works too.
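For readers curious what the category walk looks like under the hood, here is a minimal sketch (not PetScan's actual code; the category tree and file names are invented) of collecting files from a category and all of its subcategories:

```python
# Sketch of the category walk a tool like PetScan performs: starting
# from a skater's top category, collect files from the category itself
# and from every subcategory. A plain dict stands in for the results
# of the MediaWiki API's list=categorymembers queries.

CATEGORY_TREE = {
    "Category:Caroline Zhang": {
        "subcats": ["Category:Caroline Zhang in 2009"],
        "files": ["File:Caroline Zhang spiral.jpg"],
    },
    "Category:Caroline Zhang in 2009": {
        "subcats": [],
        "files": ["File:Caroline Zhang Biellmann spin.jpg"],
    },
}

def collect_files(category, tree):
    """Depth-first walk over a category and its subcategories."""
    seen, stack, files = set(), [category], []
    while stack:
        cat = stack.pop()
        if cat in seen:  # guard against category cycles
            continue
        seen.add(cat)
        node = tree.get(cat, {"subcats": [], "files": []})
        files.extend(node["files"])
        stack.extend(node["subcats"])
    return files

all_files = collect_files("Category:Caroline Zhang", CATEGORY_TREE)
```

The resulting file list is what then gets handed on, as a page pile, to the visual filter.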
So that's a new tool. Thank you. And I think Harmonia has a second thing to present.

Yes, I am staying on stage. Hold on, because it's a PDF this time, which I will download and show. Okay, so I'm still Harmonia, and this is Ash_Crow. He actually did all the technical work, but I will be the only one speaking, because it's a non-technical presentation; he is the one who actually made this work. I'm still working on figure skating, but this time on Wikidata. On Wikidata, I want to create a figure skating competition and add the scores, but for one figure skating competition you have several items: I need items for men's singles, for pair skating, for ice dance, and for each of these I need other items, like for the short program or the free program. So I have a dozen items with duplicate statements, because it's always the same organizer, the same rink, and things like that. I am lazy: I don't want to create all that by hand and copy statements. So we created this wonderful tool. Can you show the next slide? It's a gadget that you can put in your common.js, and then it looks like that. The next slide, please. I just put the statements I want on the competition item, then I put in the variations that I need, like the dates of the programs, and I say what the reference is, and then it creates all the items I need, with all the statements I need, linked together. And, next slide, it also works for the more complicated cases where you have three events instead of two. So it can be adapted to different needs: if you need to create items linked together, this tool is for you. That's it: it creates the items, it saves a lot of time because you only do the work once, and all of the edits are in the history of the items. Thank you.

The next speakers are Joni and Tuukka, about Knowledge Crystals' Wikibase for structured discussion.

Hello everyone, my name is Joni, this is Tuukka, and we will present a discussion tool.
Often discussions get heated and complicated, and it's really hard to figure out what is actually going on; there is too much text, and we try to offer a tool for that. It could be used for resolving Wikipedia edit wars, or for societal and political issues, things like that. Our tool is based on Wikibase: everything that you can see here comes from Wikibase, and you can store data there. What we are doing is organizing a discussion into a hierarchical thread, so that for the original statement, which you can see there in blue, you can attack that argument if you don't agree, or you can defend it if you agree with it and want to support it. You then create a tree-like structure, which Tuukka is just writing down. The arrows, red arrows attacking and green ones defending, let you visually, easily see what's going on. The structure is based on argumentation theory, which we are now implementing on Wikibase. One of the ideas here is that once you get more structure, you can start hiding irrelevant or less important details, so that the user is shown the most important arguments. We try to get users to see the important things and contribute to those, rather than to some detail that someone happened to say but that is irrelevant to the main argument being discussed. The demo tool works, and we are interested in collaborating with people who might have a use for this kind of structured discussion. Thank you.

Thank you very much. Our next speakers are going to say something about Mortar, interactive documentation for GLAM uploads.

Hi, I'm Mauton, and I work with Ash_Crow. We worked on a project that we call Mortar. Ash_Crow will present the idea of the project, and I will present the second part.
Hi. So the general idea was that we have many tools to do parts of the job of importing items, a collection, and photographs of those items to Wikidata and Wikimedia Commons, but you have to repeat several parts of the process to do, one, the import of the data to Wikidata, and two, the import of the files to Wikimedia Commons. We wanted to see how we could add some mortar to the whole process, to link all the different tools together. We worked with two basic use cases: one is an institution that wants to upload its own collection to Wikidata and Wikimedia Commons, and two is a Wikimedia photographer who goes to a museum, takes pictures of the collections, and then wants to import them to Wikidata and Commons as well. I'll hand over for the details of the implementation.

Exactly. So we developed Mortar, a tool written in Node.js and currently deployed on Toolforge. For example, if you are an institution, you just come here, click, and select the type of data you have. Maybe it is a picture; if so, you go there, and you have the different steps you need to go through to make all the uploads. So that is the tool we are developing. There is a second part, using OpenRefine, that we still need to add, but it's there, and it's really the first tool I have developed, so I'm really excited about it. Thank you.

Thank you. Up next is Ryan Kaldari. You cannot, you cannot, okay. Cool, cool, cool. Rachel instructed you. So you have your links here. An iNaturalist import tool for Wikimedia Commons.

Hello. Okay, so probably a good number of you are not familiar with iNaturalist, so I'll try to explain it super quick. iNaturalist is a project that's kind of similar to the Wikimedia projects in that it's an open-source, crowd-built database of observations of plants and animals.
Basically there's an app and a website: you take pictures of plants and animals in your area, you upload them to the centralized database, and then people identify them, follow each other, comment on each other's observations, and all that. If you've never used it before, I highly recommend it; it's got a really nice interface that my friend Ken-ichi built. So this is the website here. Anyway, the really nice thing about iNaturalist is that they encourage the use of free licenses for the photos that you upload. I don't remember the exact number at this point, but they just hit some millions of observations, so they have literally millions of photographs of plants and animals, probably far more than we have on Commons. So what I wanted to do is write a little tool that lets you import those images into Commons.

So let me just, oops, log in to my account real quick. What I wrote was a user script. Username already in use. Oh wait, I'm on the wrong page. Log in. I don't know my password off the top of my head, so I have to look it up, sorry. I'm running out of time, okay. Okay, pray this works. It works, okay. Sweet. So basically, once you install this user script, you can go to any category or gallery page for a taxon, and a new button will appear at the top that says "iNaturalist import". You click on it, and it loads all the thumbnails of the images on iNaturalist that are under a free license. If you see one that you like, you click on it, and it loads a larger preview; then at the bottom you just click "upload image", and it loads that image into the upload interface. All you have to do is click upload, and there you go, it's on Commons, yay!

Thank you, Ryan. That way, please. That way, please, yes, I'll log out. Now it's Lucie, Hady and Joe, about Scribe. Yeah.
Okay, so we work on Scribe, which is a tool to support editors of under-resourced languages, and especially new editors, in writing their first article. So, as you've seen, you click on the article button and choose a topic you want to write about; here, that's a river in the Czech Republic, on the Czech Wikipedia. You get a structure learned from that language's Wikipedia: you can select which section headings you want to keep from the structure, selecting and deselecting them. The idea is that we work mobile-first, so this, I mean, that's in the browser, but it should work on mobile as well. Beautiful interface, I know; we're working on that. Then you can select references: we pull references on the topic for each section. You can start writing and read through the references; the references are summarized, so you know roughly what each reference is about. You start writing, I write very slowly, then you scroll through the references and select them. As I said, since it's mobile, we try to make it very concise, very much like building blocks, so step by step you're guided through the different sections. You can click on the arrows at the top to go to the next section and select the references at the bottom, and when you're done writing, you get a preview. Before you publish, you can see the whole article and edit the last things. So the idea is: working through the article step by step, making a smoother editing experience, and focusing on mobile, so people have access to editing and writing new articles in under-resourced languages. Yeah, that's it. This work is part of the Scribe project that we work on together; if you haven't heard about it yet, come talk to us. Thank you. And as always, a big shout-out to Joe, the volunteer who implemented most of the JavaScript. Yeah.
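As a rough illustration of how section structure could be "learned" from articles that already exist on a Wikipedia (a toy sketch, not Scribe's implementation; the article data here is invented):

```python
from collections import Counter

# Count section headings across existing articles about a topic class
# (rivers, say) and suggest the most common ones to a new editor.

ARTICLES = [
    ["Course", "History", "Tributaries", "References"],
    ["Course", "Tributaries", "Fauna", "References"],
    ["History", "Course", "References"],
]

def suggest_sections(articles, top_n=3):
    """Return the top_n most frequent headings across the articles."""
    counts = Counter(h for article in articles for h in article)
    return [heading for heading, _ in counts.most_common(top_n)]

suggestions = suggest_sections(ARTICLES)
```

A new article on a river would then be pre-seeded with these suggested headings, which the editor can keep or discard.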
The next presentation is by David and Ranjit, on the parliament diagram tool.

So, a few years ago I was very active on the Graphics Lab, and we frequently got requests for arch-shaped parliament diagrams, which were generated with another tool and which were PNGs, so I suggested making a tool that produces SVGs. That was a few years ago, and a request which has come up a few times since is to make it easier to make the diagrams. So, with the help of Ranjit and Barry, we implemented new functionality. The first and most interesting one is Wikidata support: you can select from a live list of the countries of the world. You select a country, for example Sweden, and then you can select the parties; when you click "Add a party", the name of the party is already there, and you add the number of seats it has. So, for example, let's have another party there, give them some seats, great. Then you click "Make my diagram". Now, the colors are not correct there, and we've got a warning about that for the users, but users normally know what colors they want their parties to be. You get a diagram which you can either download, if you want to use it locally, or directly upload if you have a Wikimedia Commons username. The other cool functionality, which we built together, is that if you know the name of a diagram which already has the right list of parties for you, you can just get it from Wikimedia Commons: you click "Get party list", and you have the parties which were in the previous diagram you made; this is a different one now. Then you can again just add the number of delegates for each party, and you have your new diagram. So if you are making 20 diagrams for the last 20 years for a country where there were no diagrams, you can do it really quickly this way.

I'm sorry, we had some logistics. The next presentation is about Wikisource, by, I think, Jay and Suyash. Hello. I'm a little bit nervous.
So, by the way, my name is Suyash Dwivedi, and he's Jay Prakash, and we are from the Indic tech community. Nowadays in India there is a lot of work going on around Wikisource, so we are getting many requests to develop different types of technical things related to it. We have developed a library named PyWikisource, which helps to fetch different parameters, like the number of book pages, current pages, quality status, proofreading and validation status, in Python. So this is a very unique library which can be used by developers, and that's it. Anything you need to say? Yeah. Thank you.

Okay, thank you. Maps, maps, maps. Can I start? Yeah, you can start.

All right, there are lots and lots of maps in Wikimedia Commons; they're just a bit hard to find. For example, of course you can find them using the categories, and there are many of them. Some of them have been rectified: they're referenced in a project called Wikimaps Warper. Georeferencing is the process of taking a scanned map and placing it on a map of the world, such as OpenStreetMap or Google Maps, by stretching it, rotating it and transforming it. Lots of maps have been georeferenced; it's just still pretty difficult to find them. You can go to Wikimaps Warper, click on a location, and find some of the maps. This is a large collection from Stanford; they just use points to place a map, to make sure you can find it. But it's also pretty strange to find a map by a single point. Some websites use bounding boxes, which is also a bit awkward. This is a project I did when I was at the New York Public Library, where you can click on a particular street corner in the city and find the maps of that very street corner. I think this is what we should have in Wikimedia Commons too. So how do we do this? First of all, we need Wikidata properties for this geospatial data.
That means a mapping between pixels and latitude and longitude, and also the mask, the pixel mask of the part of the image you want to keep. This is the process: you place control points, you remove the parts you don't need, and you end up with a polygon of the exact area depicted by the map. We need Wikidata properties for this, so during the hackathon we wrote a property proposal for Wikidata, so we can actually start storing this data in Wikimedia Commons and Wikidata; it's just a bunch of coordinates in tabular format. When we do this, we actually have a geospatial polygon which we can index, making it easy to find the maps we need. This is a map of Lancashire, somewhere in England. So what did we do? We used five large repositories of maps: first of all Wikimedia Commons, of course, then the Library of Congress in Washington, D.C., the British Library, which has lots of maps, and some more collections from the United States, Stanford and the New York Public Library. Many of those maps are also on Commons, so we tried to identify overlapping maps, got the geospatial data we needed, and converted it to this new Wikidata property. So we created the property proposal, and we now have lots of data which we can start indexing and putting into Wikidata, so we can actually build easier tools and visualizations for searching those maps. This is the property proposal; I hope you'll vote for it and that you'll like it. So what's next? When we have this data in Wikidata, we can start making microservices that do the transformation for us, and we can get more geospatial data from all those map repositories and start putting it into Wikidata and Wikimedia Commons. We created a GitHub repository with all the code and all the data, so you can have a look if you like, and hopefully soon there will be an easier way to find all those beautiful maps that are in Wikimedia Commons but pretty difficult to find.
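The pixel-to-coordinate mapping described above can be sketched as a small affine fit from ground-control points. This is a simplified illustration with invented control points, not Wikimaps Warper's actual code; three non-collinear points determine the transform exactly:

```python
# Fit an affine transform pixel (x, y) -> (lon, lat) from three
# ground-control points, the simplest form of georeferencing.

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(3):
            if r != col:
                factor = m[r][col] / m[col][col]
                m[r] = [a - factor * p for a, p in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_affine(control_points):
    """control_points: list of ((px, py), (lon, lat)) pairs."""
    A = [[px, py, 1.0] for (px, py), _ in control_points]
    lon = solve3(A, [geo[0] for _, geo in control_points])
    lat = solve3(A, [geo[1] for _, geo in control_points])
    def transform(px, py):
        return (lon[0] * px + lon[1] * py + lon[2],
                lat[0] * px + lat[1] * py + lat[2])
    return transform

# Invented control points: a 1000x1000 scan covering a 1-degree square.
ctrl = [((0, 0), (-2.0, 54.0)),
        ((1000, 0), (-1.0, 54.0)),
        ((0, 1000), (-2.0, 53.0))]
to_geo = fit_affine(ctrl)
```

Real warping tools go further (more control points with least squares, plus non-linear transforms), but this is the core idea: once every pixel maps to a coordinate, the map's mask becomes the geospatial polygon you can index and search.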
Hopefully soon you can find those maps and look at them, and everything will be better and easier. And that's it. Thank you.

Thank you so much. Next is Internet Archive Bot.

Yes. Yeah, so I'm Dan, or Skaldman, and this is Cyberpower, or Max. We've been working on Internet Archive Bot. The Internet Archive is a service that saves information; it saves webpages and stores old versions. And when there's a dead link on Wikipedia, it's a good idea to add a reference to the Internet Archive so that visitors can actually see the content that was available at what is now a dead link, right? That is what Internet Archive Bot does. What we've been working on during the hackathon is, basically, updates to the administrative interface, and hopefully we'll be able to demo them. One of the biggest things people have complained to me about is that Internet Archive Bot demands a lot of permissions just to use the administrative interface. This is primarily because one of its most used features is that it can make edits on your behalf to fix dead links, essentially attributing these operations to you, but that's really not necessary in most cases. So we've moved to a dual method. The first method is identification only: it only asks you for identification permissions, it doesn't ask to edit on your behalf. This is for those who have told me that they're not comfortable granting a tool that many permissions, especially for editing protected pages and such. So basically, if you have no interest in actually running the bot yourself, you can still queue up the bot, it will still attribute the request to your name, and you can still edit the URL data, for example.
One of the enhancements we've also made is to the domain data view. Loading it is a very expensive process, and when you want to tell the bot to do something else, you're basically sitting there, and I just typed that in wrong, hold on a sec, you tend to sit there forever for very, very large data sets, and sometimes the tool could even time out on you. So, well, it still loaded something. We'll select something here, and instead of sitting here forever, it now loads on the front end, so you can actually keep doing your job while it's still working. You're not waiting forever to do something. So that's another nice new feature. And if you ever decide that you do want to run the tool, you're automatically upgraded with a higher permissions request and you can keep going; that part is actually a bug I'm working to fix. So it's very nice that you have the option to choose what level of grants you want to give the tool. And yeah, pretty much.

Thank you, make sure you log off. Next is an experiment analysis, I guess.

Yes, you've all been waiting for experiment analysis, haven't you? So now's the time. Let me see. So, anybody here familiar with the Teahouse? The Teahouse is a place on Wikipedia where newcomers can get mentoring, and it's an old project. The main way that people get to the Teahouse, if they don't find it themselves, is that they get invited by a bot called HostBot. And that's quite fine. But one of the problems is that there are only so many hosts at the Teahouse, and they don't want too many people coming; they only want about 300 invitations of newcomers a day to the Teahouse. And that was a problem, and it was solved by using heuristics, like edit count, to find the right new user registrations to invite to the Teahouse.
However, a while ago I decided to do something different: we now have ORES to determine the quality of people's edits. So we ran an A/B test over the last 100 days to compare the retention rates of people staying on Wikipedia who were invited by a HostBot using heuristics versus one using AI powered by the ORES framework. The experiment finished just in time for the hackathon, so I decided to conduct the analysis during the hackathon, with thanks to the researchers. These are different retention measures for how long people stick around, and on the Y axis is what percentage of the 13,000 newcomers went on to make edits. The blue bars are the percentages of the newcomers who stuck around after being invited by the heuristics HostBot, and the orange bars are the ones who stuck around with AI. The conclusion is that the AI-enabled HostBot actually gives us quite a boost, like two to five percent extra. So we can identify newcomers using the ORES tool, continue to boost our retention rates across Wikipedia, and find mentors for the promising people coming in. Thanks very much.

Cool, thank you. Next is C. Scott, on multilingual JavaScript.

Okay, I'm gonna use this microphone. Hi, I'm C. Scott Ananian. I work for the Foundation on the Parsoid team, but this is not any sort of official anything; this was my personal hacking project during the hacking time. I was thinking about what we would like to have for programming on the projects, with the idea that everything should be translatable. Although it's not likely that everything will always be translated, we shouldn't require knowledge of English as a prerequisite for participation. And a lot of the stuff with templates, especially with Lua modules, does require English as a bar to entry right now. I screwed up all the animations here.
So, oh boy, and now it's even more screwed up. Let's see. There you go, okay, ah! Change this. Okay, here we go. Yeah, okay. So the idea is: imagine using JavaScript instead of Lua for template modules, with the idea that every part of that JavaScript should be localizable. On the left-hand side you've got variable names, you've got strings, and you've got some method calls, some API stuff. You rewrite it with a little compiler to just $1, $2, $3, and then translate all those things. I just did it in Pig Latin, because I was running short on time and couldn't find people to come up with really cool languages. Then, when you're editing it, you display it in English, or you display it in the other language, and those are just orthogonal: I write it in my language, and it gets translated back through the numbers into the lookup table, and so we can all work on the same code. It doesn't guarantee that every comment and every variable is going to be translated, but at least it gives you the option to work on code in a language-independent way. And I think that would be great. Thanks.

And then we have a copyvio bot for Commons. There you go, good luck.

Hello. So, I have a new bot to help patrollers and editors on Commons find copyright issues with new images. It doesn't have a UI yet, so it's just a wiki page. Basically, the bot goes over new files by new editors that lack EXIF data, and it creates a report. The reports give a score, based on a random forest model that looks at all the metadata of the file, and suggest which files need more attention for copyright issues. For example, I will check this new image. The image has already been tagged for speedy deletion; as you can see below, it has FBMD in the EXIF data, indicating it comes from Facebook. I didn't write this rule myself; the model just learned that such images require more attention for copyright issues. It also lets you easily search for the image in Google.
Although sometimes it's impossible to find the image in Google; if it comes from Facebook, Google doesn't always scan Facebook. So I hope it will help the Commons community find copyright issues. If you have any suggestions on how to improve it, you're welcome to add your comments on the talk page. So thank you.

Thank you. Various i18n tasks. Very good. Good luck.

Hi, my name is Tonina, and I'm with Wikimedia Germany. This hackathon I worked on various small tasks, but for the showcase I chose to talk about a bug we had in the Advanced Search interface, related to internationalization. The little tag that you see there circled in red is a message that says "sort by relevance"; this is the Russian Wikipedia. The problem with this message was that, in the code, it was actually two messages which were concatenated at runtime. In the end you see one entire message, but underneath it is two parts. The problem was that in other languages we didn't give translators the opportunity to swap those two parts where that would be more grammatically correct in their language; and in Russian this is a problem, and it's not grammatically correct. Now this is fixed, and, I think on Friday, depending on the deployment train, you will see the message fixed. I also wanted to say that I took part in the Wikidata documentation translation sprint, and it was really awesome. I translated the FAQ page into Bulgarian, and I recommend to anyone who doesn't know Wikidata: if you want to get to know it better, get into the docs and translate them into your native language, because that way you're forced to read the text and actually understand it, and it's really awesome. Thank you.

Thank you. Mentorship tools. Hello.
I worked on a tool called the newcomer homepage, which, among other things, automatically assigns a mentor to all newbies who register on Wikipedia. But the problem is that the mentor is assigned automatically, and if you are on Wikipedia because you learned how to edit through a wiki course, you want your instructor to be listed as your mentor, because that's the person you actually know. So I worked on a feature which allows mentors to claim a mentee, so you can have that particular person, someone you already know on Wikipedia, assigned to you as your mentor, instead of somebody different. Thank you.

Thank you. Nice. Then we have a lightweight tool to visualize Wikipedia article contest flow and results. It should be there. You should go.

Hello. Well, basically, with article contests you don't really understand what's going on, because you don't see it: if you want to look through all the articles written for the contest, you have to go to all the pages, and that's tedious. So if you're not participating yourself, it's useful to somehow visualize it. I already had a project with some dirty code that did this, so during the hackathon I tried to rewrite it as proper code using the MediaWiki API instead of direct JSON calls. An article contest is basically some number of users, some articles, a starting date, and a date when it ends. The visualization code goes through all the users and the articles written during the time of the contest, and then you can visualize it. This is an example visualization that I made; it's just an example, to get the process of fetching the data and visualizing it in place, so that other visualizations can be created later. This one shows articles by length: these are the article names here, and if the article is longer, the bubble is bigger, so you can immediately see which were the most important articles written during the contest.
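The data-gathering step just described, restricting revisions to the contest window and totalling up article sizes, can be sketched like this (a toy illustration: all article names, users and numbers are invented, and the real tool fetches revisions via the MediaWiki API):

```python
from datetime import datetime

# Filter article revisions down to the contest window and total up
# how much was added to each article, the raw data behind a bubble
# chart of "articles by length".

CONTEST = (datetime(2019, 8, 1), datetime(2019, 8, 31))

REVISIONS = [
    # (article, user, bytes added, timestamp) -- invented sample data
    ("Stockholm", "Alice", 1200, datetime(2019, 8, 3)),
    ("Gamla stan", "Bob", 300, datetime(2019, 8, 10)),
    ("Stockholm", "Carol", 900, datetime(2019, 8, 12)),
    ("Stockholm", "Alice", 150, datetime(2019, 9, 2)),  # after the contest
]

def article_sizes(revisions, window):
    """Sum bytes added per article, counting only in-window revisions."""
    start, end = window
    sizes = {}
    for article, _user, added, ts in revisions:
        if start <= ts <= end:
            sizes[article] = sizes.get(article, 0) + added
    return sizes

sizes = article_sizes(REVISIONS, CONTEST)
```

Keeping the user field alongside each revision is what later lets the tool split contributions between registered contest participants and other editors.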
You don't have to go through the pages or some text dump. That was about the results of the contest, but the second visualization is a bit more tricky: it's users and the number of articles written by them. Since Wikipedia is an open platform, it happens that there are participants who are actually registered for the contest, but there are also regular Wikipedia users who edit the same articles. So this visualization shows who actually contributed more to the articles of the contest: was it the people who joined the contest, or just regular Wikipedia editors? This is data from an actual contest, and it shows that it's not really evident who contributed more. I think this could be useful for lots of contest organizers, and it's useful after the contest: if you send press releases about what happened, you don't just mention that you had 50 participants; you can also show what really happened, and people can see what the contest was about. That's it, thanks.

Cool, thank you. Category overview, good luck.

Hello everybody. I am Niharika Kohli. I work as the product manager on the anti-harassment tools team; that is, however, not relevant at all to what we're gonna see here. This project was pitched to me by a friend who wanted a way to quickly get an overview of a topic on Wikipedia. My first thought was category pages, but are our category pages really any good when you want an overview of a topic? There's a wealth of information there, but it's not presented in a way that makes it easy to get a broad picture of a topic. So I built this lightweight tool. It works client-side in JavaScript and fetches information about the pages in a Wikipedia category, then presents it in a readable way. So let's see, what do we want to do here? Let's look at, oh, sorry, I use a different keyboard layout, sorry. So yeah, it fetches the primary image of each article and the page extract from the top.
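The fetching step just described maps naturally onto a single MediaWiki Action API request, using the category members as a generator and asking for each page's lead image and intro extract in one call. A sketch in Python that only builds the query (the tool itself is client-side JavaScript, and no request is sent here):

```python
from urllib.parse import urlencode

# Build a MediaWiki API query that, for every page in a category,
# returns a thumbnail (PageImages) and a plain-text intro (TextExtracts).

def category_overview_query(category, limit=50):
    params = {
        "action": "query",
        "format": "json",
        "generator": "categorymembers",
        "gcmtitle": category,
        "gcmlimit": limit,
        "prop": "pageimages|extracts",
        "piprop": "thumbnail",
        "exintro": 1,   # only the text before the first section heading
        "explaintext": 1,  # plain text instead of HTML
        "origin": "*",  # needed for anonymous client-side CORS requests
    }
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)

url = category_overview_query("Category:Goats")
```

Note that the TextExtracts module caps how many extracts come back per request, so a real tool pages through large categories with continuation parameters.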
It can present up to a hundred pages or more; that's configurable. And of course we can't be done without a goat. I learned there's a company called Rent-a-Goat that lets you rent goats. So hopefully this sparks some more ideas about how category pages can be made more useful, and maybe at some point we'll convert this into a user script that actually makes category pages better and easier to read. Thank you everybody.

Okay, two more presentations to go. The second-to-last one is a bookmarklet for the URL shortener, which also covers Etherpad and Phabricator. Hello, my name's Ed Sanders. I usually work on the editing team, building new things for the visual editor, and I usually come up here to talk to you about real-time collaboration or barcode scanning or something cool, but I'm going to talk about something completely different. Who here has heard of or used the URL shortener? Yes, lots of fans. Has anybody not heard of the URL shortener? No? Well, I'll explain it anyway. You take any URL from any Wikimedia site, including (can I click on that?) Etherpad, you copy the URL, you go to w.wiki, you paste the URL, you press shorten, and you get a short URL. That's a relatively quick process, but it could be easier. So all I did during the hackathon was make a bookmarklet. Here you're looking at a random MediaWiki page. I've actually hidden the bookmarklet from the screenshot, but you know what bookmarklets are, how to generate them, and where they go. You hit the bookmarklet, it does an API request, and it brings the short URL straight up for you. If you're on Etherpad or Phabricator, it will just redirect you to the URL shortener page with the URL prefilled, because it doesn't have access to the credentials there. This could easily be converted into a gadget, or rather not a gadget, a browser extension. It's only about 12 lines of JavaScript.
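For reference, the API request behind the shortener is a single `shortenurl` action on Meta-Wiki, which returns the w.wiki short URL in its JSON response. Below is a sketch of just the request payload (in Python rather than the talk's 12 lines of JavaScript); actually POSTing it, and any session credentials the bookmarklet relies on, are left out.

```python
from urllib.parse import urlencode

# The UrlShortener API lives on Meta-Wiki; w.wiki serves the redirects.
SHORTENER_API = "https://meta.wikimedia.org/w/api.php"

def shorten_payload(long_url):
    """Form-encoded POST body for the UrlShortener API.
    The JSON response carries the short URL under
    shortenurl -> shorturl."""
    return urlencode({
        "action": "shortenurl",
        "url": long_url,
        "format": "json",
    })
```

A bookmarklet would build the same payload with `URLSearchParams` and send it with `fetch`, then display the returned short URL.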
So you can follow this URL, w.wiki/7EE, to read about it, and I'm going to take a selfie as I walk off stage. Thank you. Awesome, Ed.

The last presentation is an interesting one; there's some extra tooling for it. It's offline, voice-based Wikipedia, and I think the presenter is a bit camera shy. No, not really, but there was a very difficult technical setup that the presenter will explain, so you might not be seeing him. Hi everybody, my name is Adam Baso and I'm kind of interested in audio stuff. I thought it would be kind of cool if you could have Wikipedia offline, and use it with just your voice. So I've been playing around with the Mozilla DeepSpeech open source project, which is used in Common Voice and is powered by TensorFlow, and I've been doing a little bit of hacking. So let's try it out. I'm not up on the screen, so you can't see that I'm truly offline, but just trust me; I'm happy to show you the files. And I'm sorry if we get any feedback when I do this, because I'm dealing with multiple microphones. Hello, sweetheart, would you please go offline? Wait and wait and wait and wait. Okay, what do we want? Chlorin flakes. Here is what I heard you say: chlorin flakes. Let me ask Wikipedia if it can help you, hang on. I'm not sure. Okay, the demo gods are not with me, but let me try one more thing here. Solar system. Here is what I heard you say: solar system. Let me ask Wikipedia if it can help you, hang on. Solar system... sorry, I have a small file. Maybe we should talk to Kiwix about this later. It does work; happy to show you later. Thanks. Thank you.

So we saw an example of the demo gods not being very willing, unfortunately. Well, that was 19 presentations. Wow, can we get a round of applause for all of the presenters? Thank you.
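The "let me ask Wikipedia" step of such a demo can be sketched as offline title lookup. This is a toy illustration, not Adam's code: the normalization follows MediaWiki's title conventions (first letter capitalized, spaces and underscores equivalent), and the one-entry dictionary stands in for a real offline store such as a Kiwix/ZIM archive.

```python
def title_key(spoken):
    """Normalize a recognized phrase ("solar system") into canonical
    MediaWiki page-title form ("Solar_system") for keying a local store."""
    words = spoken.strip().split()
    if not words:
        return ""
    title = " ".join(words)  # collapse any extra whitespace
    # MediaWiki capitalizes only the first character of a title;
    # underscores are the canonical form of spaces.
    return (title[0].upper() + title[1:]).replace(" ", "_")

# Toy stand-in for an offline article store.
ARTICLES = {
    "Solar_system": "The Solar System is the gravitationally bound system...",
}

def answer(spoken):
    """Return the stored article text, or the demo's fallback phrase."""
    return ARTICLES.get(title_key(spoken), "I'm not sure.")
```

The speech-to-text front end (DeepSpeech feeding `spoken`) and the audio output are omitted here; only the offline lookup is shown.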
Every time, I'm amazed by how much is accomplished, and this was only a small part of it. We had 240 people at the hackathon, and I think we saw about 35 of them on stage. You can also get involved in the Wikimedia technical spaces (oops, that wasn't supposed to happen) by going to this URL for welcoming and orientation information. Everyone can join, there are easy tasks to start with, and it's easy to get involved, so we hope to see you there. Then, for the closing part of this session, I would like to ask Rachel Farrand to the stage, who has some interesting surprises for us.

All right, thanks everyone. Let's give a big round of applause to Sebrind for running this session. And again, for all of the presenters here: it's kind of scary to go on stage, so let's applaud one more time. This is a picture of most of you, this year's hackathon group photo. It was a big group this year, and it was really nice to work with all of you, so thank you so much for joining us. Where are we going to be next? Where can you meet us next in person? I have some announcements about that. First of all, I'm really excited to announce Tirana, Albania, which is hosting the Wikimedia Hackathon 2020. We're organizing it in partnership with Open Labs Albania and the Wikimedia Community User Group of Albania, and the dates have just been decided as of this weekend: May 9th through 11th. We'll send out a public announcement on the technical mailing lists, but you can add it to your calendar now if you plan on joining us. A little bit about the local organizing team: I've been meeting with them regularly for about two months now, and they're a really great group of people. They're the organizers of the annual open source and free software conference OSCAL, as well as LibOCon, CryptoParties, WikiWeekends, and a lot more. They've done a lot of event organization, and hopefully we'll do great together there in Albania.
We haven't defined focus areas yet, but these are some things they're super interested in on a local scale: GLAM, Wikidata, multilingual efforts, Wikimedia Commons, the mobile apps, and geoinformation. Quite a lot of stuff. So those are some of the things we might put at the forefront, and of course you're welcome to work on anything you like related to Wikimedia technology there. Next announcement: the ESEAP region will announce a specific location in the next session after this one, so please join the closing of this event. We will have another Wikimedia hackathon in the summer of 2020, organized in partnership with the ESEAP team and the Wikimania production team, so please also join us there. And then finally, I have one more announcement, which is that we are discussing options for the Wikimedia Hackathon 2021. We're trying to do a better job of planning ahead and embedding this in the organization. Hello, great. Sorry about that. So the Wikimedia Hackathon 2021 is not yet confirmed, but we would be organizing it in partnership with PACKED, a centre of expertise in digital heritage. They've already been partnering quite closely with some of our GLAM groups in the Wikimedia movement. The local interest areas are GLAM, analytics, data enrichment, and multilingual efforts, and this would be in the spring of 2021. I just want to stress that this is not yet confirmed; we are still discussing the details, but I wanted to be as transparent as possible and let you know what we're doing, so that if you're interested in working with us to organize an event, we can think even further into the future. Finally, I just want to say one more time: thank you all for participating in this session, in the hackathon, and at Wikimania. Make sure to join us for the closing session after this and the break. Thanks again, and one more round of applause for everyone who presented.