out a form so that you can nominate participants for this conference. So you can suggest people that should be there. We really want a diverse crowd with a lot of skills. From those nominations, we will hand-pick certain relevant stakeholders, like people who can actually decide on budget, for instance. The timeline: there will soon be an email announcement, within this next week or even tomorrow; then we will send out a nomination form where you can make these nominations, by May 25th at the latest, and the deadline for nominations will be June 8th. So if you want to be there, that's the date. Get your nominations in; you only have two weeks to do it. I know some of you will be on vacation, but sorry. Thank you.

Next is the documentation sprint group with Leah, Nick, and Jan. This microphone works, and you can click the links, show them on the screen. All right. So yeah, we've been running the documentation corner, and a lot of people worked on several things. I'm going to present a few of them. We've been working on the documentation about how to install and set up Wikibase; I have nothing to show you about this, as it's been mostly discussions about how to prepare the revamp of the existing documentation. Jens has been writing a very cool blog post aimed at the Wikipedias, so they can better understand Wikidata and how to monitor from Wikipedia what's happening on Wikidata. The blog post is not published yet, but will be very soon, both in German and English. I've been working a bit on the user documentation for lexicographical data, which as you may know will be deployed next week: how to edit and modify this new entity type that we're going to have in Wikidata. And then someone also made some help document updates on Wikitech. I'm not going to open all the links; you can have a look yourself. Yeah, about Toolforge and this kind of thing. That's it.
Thank you to everyone who documented their projects. Thank you.

Then we have the mobile translation prototype by Pau Giner. Hello, everyone. I have been working with the language team of the Wikimedia Foundation on Content Translation for a long while, allowing people to translate Wikipedia articles, but that was designed for desktop, so I was exploring in a very basic prototype how that could work in a mobile environment. This is the prototype. The basic idea is that instead of working on a paragraph-by-paragraph basis, you would work on a sentence-by-sentence basis: you will have a proposed translation at the bottom that, if you're not happy with it, you can edit. If you are happy with the proposed translations, you can keep applying them, if they work for you, so that the paragraph keeps being translated. And also, if there is a long text that you are translating from, we want to still keep space for the source text so that you can swipe through it as you are translating, and we obviously have some space here for the keyboard. Once you complete everything, you are encouraged to proofread the paragraph and mark it as proofread. There are still more details to be iterated on. We want to use this to test with users and get feedback to see which interaction patterns work or not on a mobile device to translate articles. That's it. Thanks. Thank you.

That's Monty. Yep. Hi, I'm Monty. Let's see. So, let me bring up the screenshot. When developing for the Wikipedia app, it can be a pain to switch device languages so you can see and proof native interface elements using actual localized strings. So I worked a bit on an automation script which is presently capturing roughly a couple dozen app interfaces in a dozen different languages on five different device types. It ends up being about 1,500 screenshots in total. It takes about an hour or two to run, but the goal is just to improve the quality of non-English versions of the Wikipedia app.
And so we hope to maybe run this once a night and expand the number of interfaces that it's capturing. I'd like to actually get every single one captured. And yeah, just make it easier to fix these issues before things actually ship. So what you see on screen here is just one device, the iPhone 5S, and how those interfaces look, but it also captures tablets and things like that. And the tool gives us a nice little web page that we can just go and look at. That's about it. That's great. Thank you.

Next we have the map infographic tool by Ryan Kaldari. You can open both of those links. Hello, everybody. Okay. So I basically just created this tool because I like to create infographics, and they're really tedious to make. You basically have to open up an SVG and change all the colors and all this. It's not so hard that it actually takes a ton of time, but it's hard enough that I don't want to do it very often. And I was like, wow, this could just be automated with a little tool and made really easy and fun. So I built a little tool on Tool Labs, and it's basically just an interactive map coloring book. You just choose a color and you start coloring states. And you can be like, okay, these are the states where some law was enacted, or something. Or if you want to do something based on percentages, you can switch to a gradient color palette and be like, okay, the dark green ones are the ones that have the highest pay rate for women, or something like that, and then color them in however you want. It also has a nice little feature here where you can show all the territories, if you want to assert that Puerto Rico is in fact part of the United States. And then once you're done and you have all your states colored in exactly how you want, you just click "download SVG" and you have your SVG to upload to Commons. So that's it. Thank you.
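The recoloring step behind a map tool like this can be sketched in a few lines. This is a hypothetical sketch, assuming each state is an SVG `<path>` whose `id` is the state code; the real tool's markup and naming may differ.

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def color_states(svg_text, colors):
    """Set the fill of each state <path>, looked up by its id,
    and return the modified SVG as a string."""
    ET.register_namespace("", SVG_NS)  # keep output free of ns prefixes
    root = ET.fromstring(svg_text)
    for path in root.iter(f"{{{SVG_NS}}}path"):
        state = path.get("id")
        if state in colors:
            path.set("fill", colors[state])
    return ET.tostring(root, encoding="unicode")

# Tiny demo document with two "states":
demo = (
    f'<svg xmlns="{SVG_NS}">'
    '<path id="CA" fill="#ccc" d="M0 0"/>'
    '<path id="TX" fill="#ccc" d="M1 1"/>'
    '</svg>'
)
print(color_states(demo, {"CA": "#006400"}))
```

Only the states present in the color map are touched, so a gradient palette is just a different dictionary of hex values.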
Then we go to expanding stub articles using Content Translation, by Santosh. Okay, Content Translation is for creating new articles using translation, but one of the often-requested features is to expand an existing article, especially when it is a stub article. A stub article is basically a small article, maybe with one line or one paragraph. The question is how to make that article better without losing that content. So I created a screencast just in case. Okay, here you have an article. It says it already exists, but it's a stub article. You start a translation, and it gets loaded in the editor. But on the top, the stub content is presented; you see it on the right side. That's the stub article, the existing article. And this is a visual editor, so you can change it if you want, but you can also copy or cut that and paste it here. So you are reusing the content, and if you are done, then you're done. You can expand the article like that. Thank you.

And I think now we get episode 4 or 5 in Amir's series about translation using Telegram. Hi. I actually have a very short time, so I'm not going to speak about that; I'm going to speak about something else. No, very short time, and a lot of you already saw that thing. Now, what I'm going to speak about was done by these nice people, who unfortunately already had to leave for their flight. But how many of you know what translatewiki.net is? Quite a lot. How many of you actually translate stuff using translatewiki.net? Okay, quite a few. So I have to log in, just a second. Okay, so for many years we have had a watchlist in MediaWiki, where you can watch pages and get some kind of notification when a page that interests you changes. On translatewiki.net, we didn't have anything like that, but now we do. So we have this little star here next to the project that you are translating.
So if, for example, you care about a certain extension and you translate it often and you want it to be complete, then when there are new strings to translate in this extension, you will be able to get notifications. So there's this little star, which looks just like the watchlist star, and it's right here, and you can watch and unwatch. And that's about it. And it was done by these nice people that they showed you. And that's it. Thank you.

Okay, more surprises. A Jupyter notebook with a list of Wikimedia gadgets, by James Hare. Yes, we will do that. There we go. Hello everyone. So if you are not aware, one of the things we offer to our volunteer developers is something called PAWS. And if you're not familiar with what a Jupyter notebook is: PAWS is our Jupyter-notebook-as-a-service. Write code in the browser, with your choice of Python and R, and share stuff with your friends. It's great, but that's not what I'm going to be showing off today. What I'm going to be showing off is one of the notebooks I've created. Gadgets are pieces of JavaScript that you can run on the Wikimedia projects that do little things to help augment the interface a little bit. I'm not showing those either. I'm showing a notebook I created that creates a list of every gadget on every Wikimedia project. So this is my code that I wrote; there's a lot of verbose output here. Here's the list. Basically what I did is, if a gadget has the same name across the different wikis, I collated them together. And so there is a gadget out there called CSS tab. I don't really know what it does, but apparently it's deployed on... can anyone tell me what language "ps" is? Pashto. Yes: Pashto and Yiddish have this thing called CSS tab, and we know this because I've created this list of every gadget. Thank you.

Okay, next is Aaron Halfaker, about gadgets to inspire the use of ORES. Hey, folks. So I've miraculously recovered from my injuries. Actually, not really.
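The cross-wiki collation that the gadget notebook does could be sketched like this. The `list=gadgets` query comes from the Gadgets extension's Action API module; the fetch helper is an assumption about how such a notebook gathers its input, not a copy of it, and only the pure collation step is demonstrated offline.

```python
import json
from collections import defaultdict
from urllib.request import urlopen

def fetch_gadgets(wiki_domain):
    """Ask one wiki's Action API for its gadget ids.
    (Requires the Gadgets extension on that wiki.)"""
    url = (f"https://{wiki_domain}/w/api.php"
           "?action=query&list=gadgets&gaprop=id&format=json")
    with urlopen(url) as resp:
        data = json.load(resp)
    return [g["id"] for g in data["query"]["gadgets"]]

def collate(gadgets_by_wiki):
    """Group wikis under each gadget name, as the notebook does."""
    index = defaultdict(list)
    for wiki, gadget_ids in gadgets_by_wiki.items():
        for gid in gadget_ids:
            index[gid].append(wiki)
    return dict(index)

# Offline demo of the collation step on sample data:
sample = {"ps.wikipedia.org": ["CSSTab"],
          "yi.wikipedia.org": ["CSSTab", "HotCat"]}
print(collate(sample))
```

Running `fetch_gadgets` over every domain in the sitematrix and feeding the results to `collate` reproduces the "which wikis share this gadget" view from the talk.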
But I'm feeling much better. Thank you. That's wonderful help. So today I'm going to show you how easy it is to program something with ORES. So let's see how fast I can type when I haven't been using my left hand very much for a few days. And while I'm logging in here: who's an admin or a bureaucrat and has not enabled two-factor authentication? I'm not even going to look up. You know who you are. All right. Nobody look. Okay. All right. So I have a few gadgets that I've developed that I want to show you. They're very tiny, and they use ORES to do some cool things. So actually, let me first start with... The middle click doesn't work. Does the control click work? No. Oh, yeah. Just click. There we go. Okay. So first, I want to show you this little thing that appears at the top of the page here. What this is doing is telling you that ORES thinks that this page is about entertainment and maybe about visual arts. Given that it's about a comic, I think it's hitting the nail on the head. Let's try another random article. So let's see. Here we have Mojavea, which is a genus of moths. So geosciences, definitely biology, maybe Oceania, something about countries. This is our new topic model. We're deploying this right now, and it's going to be really useful for routing new page creations to people who have some subject interest in reviewing them. And I just want to show you: the code to do this is less than 30 lines long. Yeah. It's very, very tiny, so it's really, really easy to build tools on top of ORES. Like, seriously, go check this out. One more tool. This one's about 35 lines long, and this one runs on top of Wikidata. So I'm going to do a random item here. And if we wait for just a moment... here we go. What this is doing is loading up our item quality prediction on top of an item in Wikidata. And so this predicts that this item is about D-class, with this little progress bar that's kind of hideous.
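A gadget like these needs essentially one call to the ORES scores endpoint. Here is a hedged sketch in Python rather than the gadget's actual JavaScript, using the v3 API's response shape; the threshold filter and helper names are illustrative, not the gadget's code.

```python
import json
from urllib.request import urlopen

ORES = "https://ores.wikimedia.org/v3/scores"

def score(wiki, model, rev_id):
    """Ask ORES for one model's prediction on one revision,
    e.g. score('enwiki', 'drafttopic', 12345)."""
    url = f"{ORES}/{wiki}/?models={model}&revids={rev_id}"
    with urlopen(url) as resp:
        data = json.load(resp)
    return data[wiki]["scores"][str(rev_id)][model]["score"]

def top_topics(prediction, threshold=0.5):
    """Keep only topic probabilities above a cutoff, highest first."""
    probs = prediction["probability"]
    return sorted((p, t) for t, p in probs.items() if p >= threshold)[::-1]

# Offline demo of the filtering step on a fake score payload:
fake = {"probability": {"Culture.Visual arts": 0.7, "STEM.Biology": 0.2}}
print(top_topics(fake))
```

The banner in the demo is then just the surviving topic names rendered at the top of the page; the Wikidata item-quality gadget is the same call with the `itemquality` model on `wikidatawiki`.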
I'm sorry, OOUI just doesn't render well by default sometimes. Let's see. Let's try one more. This one: not very much content here at all. That's a D-class. Come on, let's get a good quality one. Let's try Q42. We'll get something good out of Q42. There we go. A-class. All right. Enable your two-factor authentication. Thank you, Aaron.

We continue with David and Roan on importing articles into Special:CollabPad, and Google Docs-style collaborative editing. This one also works. You want to talk for 30 seconds while I'm putting the tabs up? Sure. I didn't think I was going to be presenting, but okay. So we're presenting Ed's work. Ed had to leave early, so he's not here to present his own work; we will do our best to present what he made. That's the localhost, David; it's not the live site. Oh, you need it? Yeah. So we have been working, for a while, on an Etherpad-like real-time editing feature in VisualEditor. That was previously just a standalone thing; we demoed it at last year's hackathon. What Ed made at this hackathon is a way to import a wiki page into that editor, so that you can work together on an article with your friends. And then once we build export functionality, you'll even be able to save what you wrote. So David is going to import, probably, the main page into this thing. I just type in here, and it'll search through my pages. Okay, here's a page I made earlier, and then I click import. So what it's done here is created a random document in the collaboration space and pulled the article into there. I can now edit in here. And this is a collaborative session: I can share the URL with somebody else who lives on the other side of the screen, and hopefully they'll be in the same universe. Otherwise, yeah, here we are. Can you see that cursor? Yeah, and then when you finish, you can copy and paste to export. We haven't got an automatic button to publish it, because that would be naughty.
Pasting into another VisualEditor works. It's kind of clunky, but it works. Awesome. That's exciting. Thank you.

I think we have Nicole on the movement strategy and the technology working group. You need the computer? Yes. David messed everything up. It doesn't matter; I'll do it like this. So, I'm here to talk about the movement strategy process and the technology working group. We did a session, a very nice session, on Friday afternoon about this, and created a lot of sticky notes and to-dos. We wanted to answer the questions: what are the high-level questions this working group will be tackling, who should actually be in the group, and how should the group work and operate? And this was a collection of all the inputs that I gathered throughout the session. Then I put up a strategy wall, a magic strategy wall, and grabbed people to work with me on the different tasks that we came up with in the session. We provided some candies and socks and stickers and did some magic. And this is what came out of all of the conversations: we clustered the words and the topics that we heard most about in these sessions and also got some answers to the questions. So, on who should be in the group: the group should be connecting and doing outreach to the constituents from all over the movement and also beyond, from each corner of the movement. They should collect advice and needs and aspirations and visions of the movement via different channels, not only on-wiki. And they should focus on the developer ecosystem, decision making, platform evolution and the platform architecture principles. These can serve as a starting point for questions that this working group can provide advice on. Thank you. Thank you, Nicole.

Next is someone whose name I cannot make out from his username, but it's about WikiProvenance. It's about WikiProvenance? You are not Jason Wright. I'm just the person that's just after, sorry. Okay. Careful.
I could just maybe pass it right now. Yep, so you're number 13 then. Yes. Module Databox. Okay, we'll do that first. Perfect. So with Tobias, we set ourselves a challenge: how to build a Lua module that generates infoboxes based on Wikidata with as little code as possible, with basically no required configuration, and that also works with basically all types. We wrote something like 200 lines of code, just this, and it gave us this kind of infobox, with the image, the types, a lot of data, nicely sorted and mapped. So I think it's very nice to show that it's easy, even for a small Wikipedia, to build infoboxes based on Wikidata with a lot of information. I also worked on making a GF hotpot for Xeems in order to be able to do SPARQL queries with them. And with Tobias, we also worked on making Rust work well with Wikibase by creating a Wikibase Rust client. Thank you. Thank you. Please go off there, because we don't have steps here. Here you go.

Okay, so this is a tool for Wikidata contributors. What exactly is it doing? If you're a new contributor to Wikidata, the first question you would be asking is: how can I contribute to Wikidata? This is a regular question, and if you watch the community on the project chat page, these questions are often asked. So what I am trying to propose here is: if you're a new contributor, you could start with translating into your languages. So let's take our simple translation statistics. I hope it gets me the data. Thank you. Okay, so here you are. Here you have the current statistics of translation in Wikidata. You have English, followed by Arabic, followed by Ukrainian. English has 4,640 translations, followed by Arabic and then Ukrainian. But then you can also see there are languages like ARJ, whose code I do not know, where there are not many translations. So if you want to translate, you could start at this point.
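Label statistics like these can also be pulled straight from the Wikidata Query Service. This is an illustrative sketch, not necessarily what the tool runs; the query counts property labels per language, and only the result-parsing step is demonstrated offline.

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

WDQS = "https://query.wikidata.org/sparql"

# Count how many properties have a label in each language.
QUERY = """
SELECT ?lang (COUNT(?p) AS ?labels) WHERE {
  ?p a wikibase:Property ; rdfs:label ?label .
  BIND(LANG(?label) AS ?lang)
}
GROUP BY ?lang
ORDER BY DESC(?labels)
"""

def parse_counts(result):
    """Turn WDQS JSON bindings into (language, count) pairs."""
    return [(b["lang"]["value"], int(b["labels"]["value"]))
            for b in result["results"]["bindings"]]

def run_query(query=QUERY):
    req = Request(WDQS + "?" + urlencode({"query": query, "format": "json"}),
                  headers={"User-Agent": "label-stats-demo/0.1"})
    with urlopen(req) as resp:
        return parse_counts(json.load(resp))

# Offline demo of the parsing step:
sample = {"results": {"bindings": [
    {"lang": {"value": "en"}, "labels": {"value": "4640"}}]}}
print(parse_counts(sample))
```

The same pattern, with the class swapped into the query, answers the later "which properties describe a heritage building" kind of question.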
Then the next question that you would ask: how many properties are there right now? I will say, okay, there are 4,641. And then if you want to see, okay, I would like to translate some particular properties, so for PT15 you could get the complete translation statistics: there are a total of 263 languages that do not have any labels, okay? Next, something else you could do: you do not know how many data types there are, so can I navigate by data type? You have 13 data types, and you could navigate to each of them. Then the next thing: how can I describe a particular entity? For example, I wanted to describe a heritage building, or a software, or a programming language. So I could do... okay, let's see, okay, data is coming. Yeah, okay, there are 75 properties. So you could use any of these properties and say, okay, I know how to do this, I don't know how to do that, but let's try the details of this particular property. Finally, one more thing: the translated templates. If you want a discussion, these are only translated into 52 languages, but you could improve them in the other 250 languages. I think that's it from my side. Thank you. Thank you.

Okay, now I have kind of lost track of who we're having next. But it's you, that's fine. What are you presenting about? Context Cards. Context Cards. Okay, go ahead. Hey, hi. So recently, the Reading Web team at the Foundation released a feature called Page Previews on Wikipedia. And one of the things that we have been exploring is how page previews can be used outside Wikipedia. I want to hand it over to Joaquin to talk about the technical parts of it. So what we did is take the code from the Popups extension, the front-end code, and did some work to make it independent of MediaWiki and jQuery, to try to make the bundle smaller. Right now it's just ready as a prototype; there are some things to change, like adding licensing information, but it's mostly good.
And basically, you just add a URL from a CDN; this is one CDN. It's published on npm as context-cards. Then you mark your HTML with data-wiki-lang and data-wiki-title attributes, and it will just show the same previews that you have on Wikipedia. So Nizar will go through the case studies that we made. Yeah, so one of the things that we've been exploring is what kinds of audiences might use this. And we've been getting a lot of interest from other publishers and news sources that could use page previews for their reading. So we put a few examples right here. Let's say we are reading about otters and there is a term which is a kind of technical biological term. The editor of this page can use page previews to give context for it. So if you hover over it, you get the exact preview from Wikipedia, which is updated live: if you actually edit the Wikipedia article, it will show up on the BBC right away. We created a couple more to give an idea of how this can be used. Yeah, who's Man Ray? I don't know; a visual artist. There you go. Cool. Check it out: it's on context cards. There's a URL. Thank you. Yeah.

Okay. Let me just open all my things here. I didn't want to make slides and I couldn't use my own laptop, so instead I just made a bunch of images. Here we go; these are all screenshots. So on Recent Changes on a number of wikis now, we have these ORES-powered filters for contribution quality and good faith. These filters are called things like "very likely good" and "may have problems" and such things. But what does that mean? If it "may have problems", what percentage of the things that it finds actually have problems? What percentage of the things that have problems does it find? These are pretty difficult things to find out.
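The two percentages being asked about here are standard precision and recall. A small sketch of computing them for a threshold filter over scored edits; the data shape is hypothetical, not the actual API's.

```python
def filter_stats(scored_edits, threshold):
    """Precision and recall of the filter 'score >= threshold',
    where scored_edits is a list of (score, is_problem) pairs."""
    flagged = [(s, bad) for s, bad in scored_edits if s >= threshold]
    true_pos = sum(1 for _, bad in flagged if bad)
    all_bad = sum(1 for _, bad in scored_edits if bad)
    # Precision: of what the filter flags, how much is really a problem.
    precision = true_pos / len(flagged) if flagged else 0.0
    # Recall: of all real problems, how much the filter catches.
    recall = true_pos / all_bad if all_bad else 0.0
    return precision, recall

edits = [(0.9, True), (0.8, False), (0.7, True), (0.2, True), (0.1, False)]
print(filter_stats(edits, 0.6))  # flags 3 edits, 2 of them truly bad
```

Each named filter level is just such a threshold, so the special page's table is this computation repeated per level.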
The way that I previously found this out: you have these different filters that span different probability ranges, with different amounts of edits that are covered by them or not, and different amounts of edits that they find. But how do you find out what these percentages are? There's an API for that, which, as you can see, is super usable and super intuitive and an end user would totally use it. I previously had this script that I hacked up in JS Bin, and if you look at the URL, this is actually version 39 of my script. It generates this TSV output over here, which I then paste into this giant spreadsheet, and I use it to handpick which levels I'm going to use to assign these filters. And then I forget about it and the spreadsheet gets out of date. That's all terrible. So I made a special page that looks like this, that just gives you a table with, for each filter: what is the precision, meaning what percentage of the edits that it finds are actually bad, or good, or whatever; and what is the recall, meaning, of all the bad or good edits, what percentage does it find? So this tells me that on this Wikipedia, if I select "likely have problems", then 45.2% of the edits that it finds will in fact have problems, and it will find 43.7% of all the problematic edits. And this is information that was previously very hard to get reliably, even for us, let alone for users. Now it's here in the special page. We should probably work on integrating it a little bit more in the interface instead of just outputting this table, but this is a start. Thank you very much.

All right, then we get Niharika. That doesn't count. It does. Everybody close your laptops. Okay, so I made this clever little user script that talks to the Google API for doing some basic image recognition and label suggestion for Commons images. So let me show you. Okay, so, yeah. For example, this one's pretty simple.
It knows it's a cat, with "mammal", "small to medium-sized"; it suggests these as categories. This one was easy. This is a little tricky; you can see the suit. So "parachuting", "stream support"; it also knows it's a paratrooper, which is more clever. This one is a little more tricky again because of the headgear, but it knows it's a gas mask. How am I doing on time? All right, so, yeah. It can do more interesting ones, but it also gets them pretty wrong. Like, this one is the flying skeleton from Hell, but it thinks it's the sky and a sunrise. Yeah, you can check out the others yourself. Thank you. Thank you.

Next is a review of recently uploaded images with the Wikimedia Commons Android app, by Elias, Ness, and Wim. Okay, as we talked about, we added peer review, thanks to Yusuke and Eliott. We made it, successfully. And are you able to see? Okay. It's part of the gamification of our app. For example, this is a photo from the beta server. If you think it's out of scope, you can nominate it for deletion, and you can write something. Yeah, I made several typos. Yes, when you nominate it for deletion, you can see from the recent changes that it actually works. See, I have nominated something for deletion. And then, if it is not out of scope, you can just get another image randomly. It can also be a copyright violation, and you can report this one too. And it will give you another picture to play with. Yes, see here. And lastly, we are planning to offer some categories for non-categorized pictures, but that's not applied yet. And you can send thanks to the contributor. And you will see: actually, we thanked the contributor. Yeah, here. And that's it. Thank you. Thank you.

Next we have Quim reporting on the progress on Wikimedia developer support. Where am I? Do you just want to speak? Should I open some URLs? Yeah, we have to go lower. It opens in a new window automatically. Okay.
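The label-suggestion user script shown earlier can be approximated with the Google Cloud Vision REST API's `LABEL_DETECTION` feature. This is a hedged Python sketch, not the actual user script; the suggestion step is demonstrated offline on a fake response, and `api_key` is of course a placeholder.

```python
import json
from urllib.request import Request, urlopen

def annotate(image_url, api_key):
    """Ask the Cloud Vision REST API for labels on one image URL."""
    body = json.dumps({"requests": [{
        "image": {"source": {"imageUri": image_url}},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 10}],
    }]}).encode()
    req = Request(
        f"https://vision.googleapis.com/v1/images:annotate?key={api_key}",
        data=body, headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

def suggest_categories(response, min_score=0.6):
    """Turn confident label annotations into category suggestions."""
    labels = response["responses"][0].get("labelAnnotations", [])
    return [l["description"].capitalize()
            for l in labels if l["score"] >= min_score]

# Offline demo of the suggestion step:
sample = {"responses": [{"labelAnnotations": [
    {"description": "cat", "score": 0.98},
    {"description": "mammal", "score": 0.95},
    {"description": "whiskers", "score": 0.4}]}]}
print(suggest_categories(sample))
```

The score cutoff is what separates the good "cat, mammal" suggestions from the "sky and sunrise" misses the talk jokes about, though as the demo images show, a confident label can still be wrong.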
So this one is a bit of cheating, because most of the work was done before the hackathon, but anyway. It's good that I cannot log in, because this is actually about not being logged in, not having an account. This is Wikimedia developer support. It's a pilot; you are all invited to join if you have questions or if you can help other developers. We started the pilot with local account creation, which is not nice. After some discussions and some work, now we have, well, GitHub: you can log in through your GitHub account if you have one, which already comes with Discourse. But you can also have an account through your Wikimedia Phabricator account. This work was done by a student, a volunteer. Let me just get this right: Yana Agun. We have been using this, we have tested it, and, well, this weekend we decided to just remove local registrations. So now Wikimedia Phabricator it is. We have also enabled email replies, so you can basically reply to a topic from your email. Testing email is complicated: many clients, mobile, et cetera. So please participate, help us test this. And this morning we had a BoF session, and, well, we had an interesting discussion. We agreed that next steps should be focused on more adoption. This is why I'm here, pitching this space to all of you. More visibility: so, for instance, having templates on relevant mediawiki.org pages. We are going to try this plugin to connect with Matrix IRC chat rooms, and talk with support desk people to see if they want to completely move over here. And, well, by the way, define success criteria for the pilot, because when we started the pilot we forgot to define success criteria. You are all invited to participate. And thank you. Thank you, Quim.

Next is Lingua Libre. You can sit and use this microphone, or use a... Hi, I'm Antoine. I will present to you the progress that we made on Lingua Libre this weekend. Lingua Libre is a tool to record words in every language.
So now you can connect with your Wikimedia account. Turn your head. Turn your head. That's it. So yeah, the work we have done this weekend is mainly about supporting more languages than we did before. Now we support basically every language that has a Wikidata ID. When you're connected, you can access the recording tool. We don't have a microphone on this PC, but we'll just do as if we had. You create a profile for the speaker. Basically, everything you see is extracted from Wikidata; our items are from Wikidata. Then you select a list of words that you want to record, for example, French fruits. And then you just have to read aloud the list that appears, and the software will automatically cut the words into separate files, upload them to Commons and put the right description on them. So they are ready to use on other Wikimedia projects, like Wikipedia or Wiktionary. And behind Lingua Libre is a MediaWiki instance with a Wikibase, so for each record we have a Wikibase item, which can be queried together with Wikidata in a federated query. That's pretty cool. Thanks. Thank you. That's an awesome feature.

Okay, I think we start the experimental part of the showcase now, because this is when people who requested to use their own computer come on stage. So we have David Parrott, I think, first. Sorry: historical social network connection to Wikipedia, yes. And that would still use the laptop provided. I was getting ahead of myself. Okay, hello. Hello. So the university here is involved in a project which is recovering data about people. Sorry. Okay, they are recovering historical context about people, and sensor data from all over the Catalan cities, and they are building a search engine. So we wanted to make a link to Wikipedia and have these people searched in Wikipedia, with historical context too, so we can have more information about them. This is the design implementation that they wanted to build.
At the right you can see these rich links. And we are also building an app using the Ionic framework. This was a design that was tested here. And that's the actual implementation, which was done using an npm package from Maxim, who is also here. We are also using the Star Wars API right now, because we don't have access to the real API, but that's more or less how it will look. Thank you. Thank you.

Now it's David with Composer CLI support. Good luck. Oh, thanks. I'm going to try to share my screen, and otherwise I'll pantomime this. That's smart. Okay. Oh, well, that's not too bad. So we're going to create a project here. Kudos for that solution. Oh, this was an empty folder; it just has a LocalSettings in it that we're going to copy in. Oh, sorry. While that's waiting, I'll explain what it's doing. It's cloning my repository, which will be moved to Gerrit eventually. Then it's doing a Composer install and installing all of the dependencies of MediaWiki, and now it's installing MediaWiki itself. And finally it's going to apply a patch so that the autoloader works. Okay. And now we're going to install... We need a skin, obviously. So... oops, if I could spell. MediaWiki... Vector... skin... slash it. We'll install AbuseFilter while we're at it. And we'll install Paramailer. And I use AbuseFilter... It would help if I could type right. MediaWiki... I actually don't know where I mentioned this name. Oh, whatever. Oh, it's because I didn't see the end of it. Sorry, I didn't see the end of my new project. Your time is almost up. But anyway, I used AbuseFilter as an example because it also has dependencies, so it's going to install AbuseFilter, Vector, and those dependencies, and then it all works, if I could do it right. That's okay. That's nice. Thank you. Thank you.

Next we have Petr with support for extensions in JavaScript in Huggle. I got a video. Hello, I'm Petr.
So I'm working on Huggle, which is a MediaWiki diff browser used for patrolling on MediaWiki wikis, mostly on English Wikipedia. So how do I start the video? Oh, here we go. Can you see it? Is it big enough? I hope so. I won't show you Huggle itself, because I don't have enough time, but I will show you what I did here. So if we start Huggle... there's a video still. Yeah, so there's this menu, Scripting, which wasn't here before the hackathon. It's a new thing: I basically implemented the possibility to extend Huggle and its interface using JavaScript extensions. It's using the Google V8 engine, and you can see here... This is like a script manager, where you can see the extensions, and I will show you how we load a new extension which will alter the interface; it's called Hello World. So if you load the extension, you can see that there's a new extension here. You can see the author, the version, a working description for the extension, the path. And I will show you that now the Scripting menu is a little bit different: there's a new item here called "do it", and if you press it, something is going to happen. It will say "Hello World". So we basically changed the Huggle interface using a JavaScript extension. If you want to see more about Huggle, you can just open the Wikipedia article on Huggle, and I will also show you how the source code of the extension looks. It's just JavaScript. So now anybody can make modifications to Huggle without needing to know C++. So, yeah, that's it. Thank you.

Yes, okay. You cannot use your own computer. Okay. So, hi. What are you looking for? I'm Marutan from Benin. Just let me present Benin to you. Benin is a country in West Africa. It's near Nigeria, and in Benin we have about 10 million inhabitants. Many languages are spoken in Benin. Among them you have Fon. Fon is spoken by more than four million people, but there is no wiki in that language. That's why I decided to create one. You know, in the Fon language there are some special letters.
So I just asked myself how we can type those, and I built a virtual keyboard so that we can type those special characters. Let me show you a demo of it. So you can see, you can write the special characters from the Fon language using this. That is translatewiki.net. So I started translating the interface messages of the wiki. This is the logo. Thank you. Thank you. The next step is to build a community for it so that we can start translating and editing articles. Thank you so much, Amy, for your help, and thank you, Santosh. Thank you all.

Now we have Jean-Fred, obviously. Jean-Fred, with the worst slides I've ever done in my life. Okay, so if you're like me and you develop a tool that runs on Toolforge, you really like to test it locally, because it's just better. And that's the moment where you need... Imagine your tool hits the database — the wiki replicas that we have on Toolforge — and you hit this moment like, oh, of course locally I cannot use commonswiki, blah, blah, blah. So I was like, hmm, that's really annoying, because that means I need to test in production, and I obviously don't like that. I always use Docker containers and Docker Compose for my environments, and I thought, hmm, maybe I can use that to simplify the setup. So can you read that a bit, or not really? Anyhow, the first stage would be: I can just put in an empty MySQL container, and that's slightly better, because it doesn't crash connecting to the database, but it crashes because of course it doesn't know the tables. Maybe I can just have a container that has the MediaWiki schema. And if you pull this image, you will have an empty MediaWiki-conformant schema. So, that tool just displays some images after a query; it's not really important. Here it doesn't crash anymore; it displays that there are no images. So that's the first step. And then I was talking to Bryan Davis, and he was telling me, oh, you know you can proxy through Toolforge, make an SSH tunnel to query the live data.
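Stripped of the Docker wrapper, that tunnel trick amounts to forwarding a local MySQL port over SSH to a replica host. A rough sketch of building such a command — the bastion address and replica hostname pattern are assumptions here; check the current Toolforge documentation rather than trusting this:

```python
def replica_tunnel_cmd(wiki: str, local_port: int = 3306,
                       bastion: str = "login.toolforge.org") -> list:
    """Build an `ssh -L` port-forward command to a Toolforge wiki
    replica. Hostname pattern and bastion are assumptions, not
    guaranteed current values."""
    remote = f"{wiki}.analytics.db.svc.wikimedia.cloud:3306"
    # -N: no remote command, just keep the port forward open
    return ["ssh", "-N", "-L", f"{local_port}:{remote}", bastion]

cmd = replica_tunnel_cmd("commonswiki")
```

With the tunnel up, the tool simply points its database client at `localhost:3306` as if the replica were local — which is exactly what the Docker image automates.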
That sounds cool. So now if you use this Gen5 slash wiki-replicas-proxy Docker image, you just mount your SSH auth socket — because you all trust me, and that's all going to be fine; I wouldn't do anything with credentials — set your SSH user, and choose your table. And ta-da, this is local, and it displays the images that you just queried. This is not doctored; it's a genuine screenshot. And because a picture is worth a thousand words, these are the links for everything. So, to summarize this presentation: thank you. Thank you.

Next we have Moriel. Hi. Hello. So I need to log in. Hang on. We have collected a nice collection of usernames and passwords today. Yes. I'm changing my password right after this. What did you say, Aaron? Enable two-factor. Enable two-factor? Enable two-factor. Aha. Enable two-factor. That's it. Yeah. Enable two-factor. Okay. So, um... Oh, I don't understand the language here. Sorry. Okay. You're good. I forgot to start the clock. Oh, excellent. All right. So, I'll start with... So on English Wikipedia, there is a... Where's the slash here? What is going on? Okay. Sorry. So on English Wikipedia, there is this tool that allows you to create a new page based on templates. And it's really, really cool. And some wikis, like Hebrew Wikipedia and Arabic Wikipedia, asked that I take a look at bringing it over to them. The problem is that this is really localized, and it's just a big collection of templates. It started getting very, very difficult to import them over and do that at scale — not just for those two wikis, but for more wikis. So instead, I created a gadget, or rather a user script, called Article Helper. So you open it up and it shows an introductory text. That text is on the wiki, which means the wiki itself can edit it and add whatever they want. And the wiki itself can also define its own article types, like the categories and article structure.
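A create-from-skeleton flow like this is typically built on MediaWiki's `preload` URL parameter, which prefills a new edit box with the contents of an existing page. A minimal sketch of building such a link — the wiki, target page, and template names below are made up for illustration:

```python
from urllib.parse import urlencode

def create_from_template_url(wiki: str, target: str, template: str) -> str:
    """Build an edit URL whose edit box is prefilled from `template`
    via MediaWiki's `preload` parameter. Names are hypothetical."""
    query = urlencode({
        "title": target,        # page (e.g. a sandbox or draft) to create
        "action": "edit",
        "preload": template,    # page whose wikitext seeds the edit box
    })
    return f"https://{wiki}/w/index.php?{query}"

url = create_from_template_url(
    "he.wikipedia.org", "User:Example/sandbox", "Template:Article skeleton")
```

Making the target namespace (sandbox vs. draft) and the skeleton pages configurable per wiki is what lets one user script serve many wikis.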
So let's say I want to edit... I don't know, I want to write about a cat. It's going to show me that it's going to look more or less like this. What do I want? I don't know. I can't spell Cheshire, so I'll just write "small cat". Okay? But just for technical stuff, where's the... Where's the pipe? I don't know. I'll use a Mac. What? Oh, God. Okay, I wanted to impress you all and show you that it will tell you when a title is not legal, but I can't make it not legal. So just trust me on this. There you go. This is illegal. Anyway, okay: small cat, I'm going to create the article. It opens it up in my sandbox. This is also configurable, so each wiki can decide where this opens; it can also be in the draft space. It opens it up, and it's prepopulated with, you know, something for me to start with. So it's very similar to what already exists on English Wikipedia with the Sandbox Plus template, except this is a user script. You can basically port it to any of your wikis, and with two or three configuration options, you can set it up for your wiki. Thank you, Moriel.

Next is Dimitri, towards easier editing from mobile devices. Hey everyone. One second, I promised Jon Robson to start downloading his presentation. It's downloading. Okay, try this. Is this because of the download? Who's downloading movies? Of course, sorry. Okay, here we go. So we all know that editing from mobile is kind of minimal; it's kind of rudimentary at the moment. And that's something we're going to be working on more in the near future. But for now, I thought I'd push things forward just a little bit, like infinitesimally. So here's what I did. You know how when you start editing inside any edit field on your device, you get your virtual keyboard at the bottom? And if you're a multilingual user, you can switch between different languages for your keyboard. But I also found that you can do some customizations, and even custom keyboards altogether.
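Underneath, a syntax key on such a keyboard mostly just wraps the current text selection in markup. A toy sketch of that logic — in Python for illustration, not the keyboard's actual Android code:

```python
def wrap_selection(text: str, start: int, end: int,
                   before: str, after: str) -> str:
    """Wrap the selected span text[start:end] in markup,
    leaving the rest of the edit field untouched."""
    return text[:start] + before + text[start:end] + after + text[end:]

# e.g. a "link" key turning a selected word into an internal link,
# and a "bold" key wrapping a selection in triple quotes:
linked = wrap_selection("visit Barcelona today", 6, 15, "[[", "]]")
bolded = wrap_selection("cat", 0, 3, "'''", "'''")
```

On Android, the same operation would go through the `InputConnection` of the edit field; the string manipulation is the easy part, which is why the interesting design questions are about which syntax bits deserve keys.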
So I give you the Wikitext keyboard. This is an actual system-level keyboard that gets installed, and wherever you have an edit field in any app, you can now insert wikitext syntax. So... Yeah, you can highlight words, you can insert syntax around them, make them links. If you put your cursor onto a link, you can preview that link. You can also do things like preview the actual wikitext that you wrote. Let's see. There's a little close-up of it. And by the way, there's also undo and redo. I don't know if you noticed or not, but in standard Android edit fields you can't undo and redo, so I built that in. Anyway, yeah, that's it. I'm not the most prolific Wikipedia editor, so these are the syntax bits that I thought would make sense, but if any of you have suggestions for what other stuff to include, I would very much welcome them. Come see me afterwards. Thanks. Thank you.

Next is Isarra with the responsive MonoBook skin. We all want responsive MonoBook. Do we? Okay, anyone who uses MonoBook and never wants to let it go... Okay, how do you switch this to mobile? Okay, so that's responsive MonoBook. That's this mobile thing. Let me log in. It's still broken. Okay, so I was trying to get... Oh, there we go. So what I basically spent the entire hackathon doing was not actually responsive MonoBook itself — that was mostly done. What I was trying to do was to get Echo to cooperate with it. So what I wound up doing was basically making the normal badges go away — they're still there — and just making the numbers show up on top of that thing. So, yeah. And then you click on that, and there's just a notifications entry in the personal tools, and that'll just take you to the special page, which... Yeah. So anyway, this is now responsive. It works on mobile. I don't know how well it works, but you could... What? It's a start, yes. Oh, it is a start.
From here, we can then go on to Vector and every other skin, and I swear I'll actually start working on Timeless soon, because, yeah... Anyway, this exists. Thank you. Yeah, thanks.

Jon, I think your presentation has downloaded. Wow, it should open. There you go. You want a mic? So, hello. I'm Jon. My team just launched the Page Previews feature for Wikipedia. We just focused on Wikipedia because that simplified things a lot. So today during the hackathon — actually, in about 30 minutes today — I wanted to show how easy it is to create an API for other projects. So I did Wikidata, and I don't trust the wifi, so I recorded it. So here it is in English. You can see it's very similar: returning the summaries, the images. There isn't supposed to be any sound; I was wondering where it was coming from. It also works in German and French. There's a bit of localization going on there. So it's possible. If you want to make this happen, talk to me. I think you can do it. Thank you. Oh, thank you. That was quick.

Let's see. There was this one: ORES to Docker. He can. It's over there. Hello everybody. My name is Aureus. I wanted to tell you about what I've been working on this weekend, which is a first take at running ORES on Docker. Before that, a little bit of a disclaimer: this is the very first Wikimedia-related event I've come to, and it's the very first time I've worked with ORES, so excuse me if I say something foolish. Now, about ORES, a little bit of an overview. When somebody requests the score for a specific revision, that request hits a server, which is the API — that's one service running. Then the API will request the score from a cache. If it's not cached, it will put it onto a job queue, and then a worker will take care of getting that score processed. So in overview, there are four services running: the API, the score cache, the job queue, and a worker.
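The flow just described — API checks a cache, and on a miss defers to a queue drained by workers — is a classic cache-aside pattern with asynchronous scoring. A minimal in-memory sketch of the idea (illustrative only, not ORES's actual code):

```python
import queue

score_cache = {}            # rev_id -> computed score
job_queue = queue.Queue()   # rev_ids waiting for a worker

def request_score(rev_id):
    """API side: return a cached score, or enqueue the revision
    and return None so the caller can poll again later."""
    if rev_id in score_cache:
        return score_cache[rev_id]
    job_queue.put(rev_id)
    return None

def worker_step(score_fn):
    """Worker side: take one queued revision, score it, fill the cache."""
    rev_id = job_queue.get()
    score_cache[rev_id] = score_fn(rev_id)
```

In the real deployment each of these pieces is a separate service (the cache and queue are shared infrastructure rather than Python objects), which is exactly why each maps naturally onto its own container.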
So the first, let's say, milestone we could talk about was getting all those services running on Docker. And with that, what we have is actually three containers running. Beyond that, there's the need to orchestrate them. So on top of that, what I did was add a Docker Compose layer that gets all those containers up and running together and connected. And this setup is actually passing the tests. And Adam told me about a validation URL that I could use to validate that everything is working — and it's working. That's actually what I wanted to show you. And that's about it. The idea of getting it running on Docker Compose is that the SRE folks said they're pushing for Kubernetes, so I think that once you have it on Docker Compose, translating it to Kubernetes should be much, much easier. And that's about it. Thank you. Thank you.

Let's see, we have two presentations left. One is about Kubernetes — what, why, where, how — and WikiProvenance. John, again. This time it's about WikiProvenance. What we want to do is ask: do we have references for all the statements on an item? So take, for example, Johann Sebastian Bach. We have got statements and references. And as you can see, there are 253 statements on Bach, but there are only 45 statements that have references. So let's see what those 45 referenced statements are. You can see that out of those 45, 10 belong to P106, the occupation of Johann Sebastian Bach. And similarly, you have another one — seven, I think — which is about his date of death; we have got seven references. So we don't have a lot of references for many other items, but for some properties we have got many references. You can also check, for example — what I would like to show here is the software Wikibase: 72 statements, but only six of them have references. So let's check the famous Douglas Adams. Sorry, this is a keyboard problem. Okay, 42. This is remarkable.
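Counts like these can be derived straight from a Wikidata entity's JSON, where `claims` maps each property ID to its list of statements and each statement may carry a `references` list. A sketch (the entity below is a tiny fabricated example in the same shape, not real Bach data):

```python
def reference_stats(entity: dict):
    """Return (total statements, statements with at least one reference)
    for a Wikidata-style entity dict."""
    total = referenced = 0
    for statements in entity.get("claims", {}).values():
        for statement in statements:
            total += 1
            if statement.get("references"):   # non-empty references list
                referenced += 1
    return total, referenced

# Fabricated entity in the same shape as Wikidata's entity JSON:
example_entity = {"claims": {
    "P106": [{"references": [{"snaks": {}}]}, {}],   # occupation
    "P570": [{"references": []}],                    # date of death
}}
```

The per-property breakdown shown in the talk is the same loop, just keyed by property ID instead of summed over the whole item.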
So you have got 177 statements, but only 52 referenced statements. And that's... You can also see external identifiers. So you have got 75 or 76 external identifiers, 75 languages talk about him, and there are 30 wikiquotes, and so on. You can see the detail for all the other languages. So this is the work that has been done this weekend. And thanks to Dario, Daniel Mietchen, and Finn from the WikiCite team. Thank you. Thank you.

Last but not least... Where's the plus? Where's the plus? Let's increase this a little bit. The audience has good tips. I know that, audience — please help. Where's the plus? That one probably works. With. Okay. Pause this. Where's the Windows? Why Windows? Let me refresh. So I had a session this morning about what this Kubernetes thing is and why we want to go there, and I had a demo scheduled. Then, of course — you know about the demo gods? The demo gods are those to which you sacrifice something, most often your time and your food. And of course, they did not listen to my prayers and I could not do the demo. So I kind of cheated: I recorded the demo, and it's up on asciinema for you to see. So here's to hoping. The idea is that we have a Kubernetes cluster already running. I cheated, and this part is not here because it takes a little while to start. We have Graphoid, which is a service, and we're now building it into a container. And it's going to be deployed very quickly into a Minikube cluster — that's a Kubernetes cluster that you have locally installed. So as you can see, right now it's running. It's already on Docker step 34 of 35, and it's now watching for changes locally. I am just curling the info endpoint. You can see the description it returns; now I'm overwriting my description, changing the file and changing the description to something else. And let's see how much time it takes to actually get redeployed. You're not going to really like this, but let's see.
So, back to looking at it: you can see the timer up there. It started at 46.01. You can still see the old description, but around about now... See? There it is. It took about 15 seconds for this to be fully deployed. I know all of you who write PHP are going to say this is too much, but this is actually good for getting the thing that will be deployed in production running on your laptop. It's the exact same thing that would be running in production, nothing else. So I think that's a net win. I'm not sure if you agree, but thank you all. Thank you.

So that was 32 presentations in an hour and 20 minutes, I think. Thank you, all of the presenters, for your flexibility, especially the people who had to make some last-minute adjustments to their plans. I want to thank everyone who tried to fix the technical things here. Unfortunately, we didn't manage it, but we managed to have at least something. And now we'll go to the hackathon closing session. Rachel? Don't worry, it will be very short, because we are just out of time. I just wanted to thank you for all the work you have done, and for all the work that you will do in the future, because we want this hackathon to be the beginning of many projects, many collaborations, and many efforts together. I'm really happy with the projects that continued the focus of this hackathon and of our previous hackathons. And I hope that you have had a very good time here despite the problems and issues that have been around these whole days. The hackathon is not over yet. We have five hours left back in Ingenuity. You have more food, more beer, more socializing, and more hacking time. And I just want to say thank you. Thank you. Thank you. Yes. That was amazing. I know we all want to get out of this room and go back and keep hacking, so I will keep this short. Another thank you to the University of Barcelona for hosting us here. They did quite a bit of work behind the scenes, logistically keeping things running.
I also want to say something about the volunteers of Amical Wikimedia. They spent the last three days working really hard, even down to details like serving the food themselves, or staying up until six in the morning just in case somebody walked into the room, and dealing with problems as they were reported behind the scenes. I'm a huge fan of the Amical style of working, and if you don't know much about the way that Amical works, I would suggest you look it up a little bit. I'm definitely going to integrate some things that I learned from Amical into my style of working. Despite some of the various logistical issues, we managed to have a really, really productive and successful event. Over 50 sessions and 90 projects, and countless other groups offering help. That's pretty impressive. It's probably the most that I've ever seen at any hackathon, by a large margin. Really good job to all of you. On the program side of things, I want to say a special thank you to everyone who was involved in the mentoring program, the volunteering program, and the friendly space team, and everyone else who just volunteered to make this event better. Can we give a round of applause for them? It's the wiki way, when you see a problem, to become part of the solution instead of complaining about it, and that's been demonstrated very, very clearly here in an amazing way with these programs. So thank you all for co-creating this event together with us. It's been a personally powerful experience for me to see all of this in action. Lila spoke about a lot of the newcomers from the university who came into the hackathon and found really good support and a really good experience. So I just know that this had a really big impact on the local community too. Thank you once again for your flexibility and kind attitude. Remember that there will be food at the hackathon space, which is still open — Lila handpicked all the food herself. Really well, yes.
And we're closing the doors at 10, because we want to let the Amical volunteers go home to their families; they have jobs to go to in the morning, so we'll leave by 10. We'll have a feedback survey sent out, and we have one more thing to show you. Waiting for the text message to let me into my account. The next hackathon, in 2019, will be in Prague; Wikimedia Czech Republic is already working hard on this for you. They put together slides with very beautiful pictures of the city as well as the organizing team, so I will add them to the etherpad later so you can look at the pictures. They're going to have a full-time person working on the event, and their volunteers were here at this event learning from Lila, learning from previous hackathon organizers like Claudia and other people, about what they're going to do. So get in touch with them if you have ideas and suggestions, and of course we'll continue the knowledge sharing. Prague is going to be great. And one more huge round of applause for Lila and Amical.