All right, we're live. Hello. So today I'm going to be talking about a small improvement to the reading experience on the mobile website. Specifically, it's about adding a table of contents to article pages and removing section collapsing. Let me briefly describe the current problem by sharing my screen. I hope you can all see it; stop me if you can't.

Currently, on the mobile website on small screens, we don't have a table of contents. Instead, sections are collapsed, and their titles act as a kind of table of contents so the user can see the big picture. For non-JavaScript browsers there's no collapsing at all and no table of contents, so it's one big page, which makes it hard to navigate. Even when the browser supports JavaScript, as in this case where you see the sections collapsed, there's a problem when the section-collapsing code loads slowly: the page loads fully expanded first, and then the sections collapse. This happens when the connection is slow. The user starts reading the article, and let's say they've read up to here when the code suddenly finishes loading and executing and the section collapses. That's really frustrating, because you lose your reading position: you have to figure out which section you were in, expand it, and find where you were.

So that's the problem on smartphones. On tablets we have a different set of issues. Tablets do have a table of contents, as you can see here, and sections are expanded by default, so it's easy to scan them, and you can still collapse sections if you like, which is really nice.
The problem with this approach is that you have to scroll back to the top of the page and click on the link you're interested in. And sometimes that doesn't even work, because the target is inside a collapsed section, so you have to click the section heading first to expand it. There are other problems like this. On tablets that don't support JavaScript, as on smartphones, there is no table of contents and everything is collapsed, so you don't easily see the big picture.

To summarize the problems: we don't have a good way of navigating between sections on an article page, and the reading experience suffers when the JavaScript code loads slowly and you lose your reading position.

To fix this, I implemented a simple solution where every browser is treated equally: there is a table of contents for all browsers and all device sizes, whether they support JavaScript or not. Here's one of the pages, and you can see the table of contents is inserted after the lead section, as it's done in MediaWiki core. You can easily see the sections we have, and there's no section collapsing at all, which is nice because you can also use in-page search without having to expand all the sections first. It works in non-JavaScript browsers, as I said, like this one. Here you can see it working on a tablet that doesn't support JavaScript: you click here, it goes to that section; you click on a subsection, it works. And it works on tablets that do support JavaScript, like this one. If your browser supports JavaScript, you also get this little button on the right, which is placed here because of this other button that takes you back to the top of the page. I think this positioning can still be improved, though. When you click on it, you see the table of contents in an overlay.
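The core of the approach described above is a server-rendered table of contents: a plain list of anchor links inserted after the lead section, so it works with or without JavaScript. As a rough sketch (a hypothetical illustration, not the actual MobileFrontend code; the helper name and anchor scheme are assumptions):

```python
# Hypothetical sketch: build a server-rendered table of contents from an
# article's section titles, so navigation works without any JavaScript.
# This is NOT the actual MobileFrontend implementation.

def build_toc(sections):
    """Return an HTML fragment linking to each section's anchor."""
    items = []
    for title in sections:
        anchor = title.replace(" ", "_")  # MediaWiki-style anchors
        items.append(f'<li><a href="#{anchor}">{title}</a></li>')
    return '<ul id="toc">' + "".join(items) + "</ul>"

print(build_toc(["History", "Early life"]))
```

Because the list is ordinary HTML emitted with the page, there is no flash of expanded-then-collapsed content and no lost reading position.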
So you can click on any section and it takes you there. This is also really nice because you don't have to lose your position just to see the other sections; you can look, decide you're not going to click anything, and continue reading where you were. I think these are really nice improvements. This is currently implemented as a beta feature in MobileFrontend. The patches haven't been merged yet, so I'd appreciate any reviews. And that's it; that's my demo.

Thanks, Baha. If anybody has questions, please go to the Etherpad and enter them there. If you're presenting, you'll want to take a look at the Etherpad and provide responses inline. OK, I think next is mixing maps and graphs. Do we have the presenter on the Hangout?

Yep, we do. I'm here. All right, can everyone hear and see me OK? I'll share my screen in just one second. Today I'm going to cover a very small feature that we just deployed. It is now possible to use a rendering of a street map, at any zoom level and any location, as one of the images available to the Graph extension. This means you can now have graphs like this one. I made a sample template called Graph Street Map with Marks; it's available on MediaWiki.org and on English Wikipedia, but you can obviously copy it to other wikis. You say: show me a map, overlay some images on it, for example a little icon of a mountain, and add a label to it. It also has a mini locator map, again using one of the Commons images, with a little red dot drawn on it. The idea is that you can really use the Graph extension to draw maps and overlay them with any kind of interesting data. One idea of what this capability gives you is Wikidata query results drawn on real street maps, on an OpenStreetMap background. And that's all I wanted to share with you today.
Obviously, we're also working on regular maps, and I can show you something like this, which just launched: you have a regular map, you go into full screen, and my internet is very, very fast right now. You click on More Details, and there's some additional information about the location of this map, as well as links to external services such as Google, Bing, Yahoo!, and OpenStreetMap, and you can select another one. And that's all I wanted to show you today.

Thanks, Yuri. And Peter, an introduction to EventBus and change propagation.

Hello. Just one moment. Yeah. So the Services team and the Analytics team have been building this system to propagate changes between services. As we build more services, we need a reliable way to update RESTBase, to re-render content, to propagate all kinds of events. We've built this system, which is called EventBus and change propagation, and I want to give a brief introduction to what it's going to do for you. Currently, MediaWiki posts events to the system when a revision is created; when pages are deleted, undeleted, or moved; or when users are blocked or unblocked. And we can add more and more events.

Here's an example of how we use it. In RESTBase, we store several representations of an article: HTML, summaries, mobile content. They can all depend on each other, so we need to build quite complex sequences of updates. Say, when a page is edited, the HTML should be re-rendered, then the summary should be updated, then the mobile content should be updated, and the Varnish caches should be purged in the meantime.

We came up with some requirements for the EventBus system while we built it. We needed guaranteed at-least-once delivery for all events. We wanted it to retry automatically and handle all errors automatically. We wanted to allow an unlimited number of reactions to any event. And we wanted simple, config-based rule setup.
It should also scale to support all our current use cases, and we wanted multi-DC support, detailed monitoring, and other things.

So what did we build? We built the EventBus system, which is based on Apache Kafka. Right now, MediaWiki posts an event to Kafka through the EventLogging service, which validates the events against schemas. From Kafka, events are consumed by various consumers: Kasocki, which Andrew Otto is building, to expose events for public consumption; the edit data sets that Analytics is building; and the change propagation service that we are building.

The change propagation service exists because consuming directly from Kafka is very, very complicated. You need to handle all the state, your position in the queue, retries, errors, all kinds of things. So we built a service that does that for you. Right now it's used to update RESTBase and warm up ORES caches, and soon we will post events to the review stream that the Collaboration team is building.

Here's what you need to do if you want us to update your service, or if you want to react to any event in MediaWiki or any other service: you set up a change propagation rule, and that's it. Here's an example of what a rule looks like. This one reacts to a page edit: it calls an endpoint in RESTBase and updates the RESTBase content. If you want your service to react to any of the events, you just need to add a REST API endpoint and set up a rule like this, basically 10 to 15 lines of YAML config in change propagation. All the complexity is handled for you: guaranteed delivery of events, with retries, state management, multi-DC support, everything. If you're interested, please come to the Services team and we will add this for you. Thank you.

Thanks.
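A change propagation rule of the kind described is a short YAML stanza: it names the event topic to react to and the endpoint to call. The following is only a hypothetical sketch of its shape; the exact keys, topic names, and template syntax in the real change-propagation service may differ:

```yaml
# Hypothetical sketch of a change-propagation rule.
# Key names, topic, and URI template syntax are illustrative only.
mobile_rerender:
  topic: mediawiki.revision-create
  exec:
    method: get
    uri: '/{domain}/v1/page/mobile-sections/{title}'
    headers:
      cache-control: no-cache
```

The service handles retries, offsets, and error handling around this rule, so the subscribing service only needs to expose the HTTP endpoint.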
Kaldari? Hello? Oh, we're here. OK, let me do some screen sharing. All right, so I'm presenting for Community Tech today. The project we've been working on came from last year's Community Wishlist, which was to develop an OCR tool for Indic languages. That's not what I'm looking at. Let's see. Oh, there we go. Thanks.

We actually developed two different versions of this. One is a generic interface on Tool Labs that anyone can use. For this one, you put in the URL of an image from Commons and then the language code. I think this is Hindi, so I click Go. This loads the image, performs OCR on it, and gives you the OCR text, which you can then copy to the clipboard and do whatever you want with. This was done in collaboration with Google; it actually uses their OCR engine, which they let us use for free, which was nice of them. We originally wanted to use the Tesseract open-source OCR engine, but there were a lot of issues with it. It turns out that people in the Indic language communities had already been using Google's OCR indirectly, through Google Drive: they would upload documents to Google Drive, get them OCRed, and get them back, and they were very happy with the quality that engine gave. Unfortunately, Tesseract was significantly worse and didn't have the same language support. So luckily we got this worked out with Google, and it works great.

The other interface is on the Indic-language Wikisources, like Bengali Wikisource: if you go into the ProofreadPage extension, which is what they usually use to transcribe images of book pages, there's a new button here, OCR. There's actually a version of this on non-Indic languages that does use Tesseract.
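Stepping back to the generic Tool Labs interface for a moment: it takes just two inputs, a Commons image URL and a language code. A minimal sketch of assembling such a request might look like this; the endpoint URL and parameter names here are invented for illustration and are not the tool's actual API:

```python
# Hypothetical sketch: assemble the inputs the generic OCR tool takes
# (a Commons image URL plus a language code). The endpoint URL and
# parameter names are invented for illustration, not the real API.

def build_ocr_request(image_url, lang_code):
    """Return the (url, params) pair for a hypothetical OCR GET request."""
    base = "https://tools.wmflabs.org/example-ocr/api"  # invented URL
    params = {"image": image_url, "lang": lang_code}
    return base, params

url, params = build_ocr_request(
    "https://upload.wikimedia.org/wikipedia/commons/x/xy/Page.jpg",
    "hi",  # Hindi, as in the demo
)
```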
But since we weren't able to use Tesseract for these languages, this is the Google version, which is why it's in the Google colors there, which is kind of subtle; but that's how you tell the difference. You just click OCR here and it starts the OCR request, and in a couple of seconds it gives you the text back. So that's basically it. Do you guys have any questions? Is that everybody? Nice. That's cool.

OK, I think Julian, you're up.

Hello, everybody. Let me share my screen. All right, what I want to show you today is a little improvement I'm making to the maps user interface. As Yuri has shown multiple times, you can show external data from Wikidata using Wikidata IDs or the Wikidata Query Service. Previously, you didn't have the Wikidata credits within the map annotation. Now, as you can see, if you're showing some Wikidata items here in the map annotations, you can just click on one and it points you to the corresponding Wikidata item. Here it's Metropolitan France, and the same thing: if I click, I go directly to the Wikidata item. If you have a Wikidata query, it shows a Wikidata SPARQL query for now, and if you click on it, it opens the Wikidata Query Service with the query already there. You can click Run, quickly verify the query, make some changes, and get all the benefits of the Wikidata Query Service. And once you're done with the query, you can put it back into the wiki article. If you have multiple items, it shows a list of entities like this, or it shows the Wikidata query, and same thing: you can click on it, quickly examine it, make changes, and put it back on your map. One quick note about the links: I noticed my mouse cursor isn't visible in the screen share. The links are at the bottom of the map, where the credits are, so they're kind of hard to see, but you can click on them. Yes, so that's it for now.
It's a little improvement, but we hope it will help you verify the external data sources and easily modify and review the queries. Thank you.

Thanks. Sam.

Hey, everyone. Quick audio check. Cool. So my name's Sam Smith; I'm a software engineer on the Reading Web team. Pink, as I like to call it, is a tool that builds, on top of Raspbian images (the Debian-based distribution for the Raspberry Pi), an image that makes a Raspberry Pi operate as a multi-WLAN wireless router and network conditioner. That's a lot of big words, but what it really means is that right now I have a Raspberry Pi sitting in my home office broadcasting a number of wireless networks, all of which condition the traffic so that it emulates 3G networks, 4G networks, any kind of conditioned network I want, with user-configurable latency and bandwidth. I can also do fun things like adjust packet loss, et cetera.

It started off as a brief experiment after last year's all-hands, when I found out there was a network in the office that let you emulate 3G: any device that connected to it would act like it was on 3G. Adam gave me a couple of older feature phones to take home with me, but I didn't have one of these networks in my office, so I wanted to build one. I built a little project that I called MicroDeviceLab, which is a UI on top of a REST server that manipulates Linux networking tools and the networking stack so that any host on the network can have its traffic modified. There's a GIF at the top of my README that demonstrates the UI presented to every device that joins the network. I thought this was quite a simple approach, letting an admin sign in and change the network conditions of a device, but having thought about it a lot, I decided it was actually overly complicated.
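For context, this kind of traffic conditioning on Linux is typically done with tc and the netem queueing discipline, which is one of the "Linux networking tools" mentioned above. A rough sketch of emulating a 3G-like link might look like this; the interface name and numbers are illustrative, not the project's actual configuration:

```shell
# Illustrative sketch: condition traffic on a wireless interface with
# tc/netem to emulate a 3G-like link. The interface name and numbers
# are examples, not the project's real configuration. Requires root.

IFACE=wlan0

# Add latency (300ms +/- 50ms jitter) and 1% packet loss
sudo tc qdisc add dev "$IFACE" root handle 1: netem \
    delay 300ms 50ms loss 1%

# Cap bandwidth with a token bucket filter chained under netem
sudo tc qdisc add dev "$IFACE" parent 1: handle 2: tbf \
    rate 750kbit burst 32kbit latency 400ms
```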
So instead, I built a new tool called Forge, which takes a Raspbian image, mounts it, installs the right packages, and configures them so that when you write the image to an SD card and stick it into your Raspberry Pi, the Pi boots and automatically broadcasts the kinds of networks you want, conditioned the way you want. You just join your devices to the network you want, and all traffic from those devices is shaped accordingly. And that's it. So I'm going to turn off screen sharing now.

OK, thanks, Sam. Is there anybody else who has a demo? OK, I think that is it. Thanks, everybody. Have a nice afternoon. Thanks, Adam.