That's it. We'll have five minutes for questions. Oh, yeah, there's the recording that just started. So people online missed the introduction, but that's okay. So without further ado, let's go to our first lightning talk. Joseph, take it away. Cool, thank you Kevin. I thought Yuri wanted to go first, and he's saying in the chat that he's ready. Yuri, do you want me to let you go first? If that's possible, sure. Please go ahead. Okay, well, my name is Yuri and I'm about to start sharing my entire screen and show you what I have accomplished. Can you guys see the screen? Is the screen visible? To me, yes. I hope so. Yes it is. All right. Yuri, we can't hear you very well. You're very muffled. The mic seems to be wrong. Yes. Excellent. Yes, okay. All right, so as I've presented before, these are some of the things that I've been working on, and Max will continue my presentation with our mutual project. I have been mostly working on graphs and maps. For graphs, there is a Graph extension that's now capable of showing simple maps and various visualizations of data for everyone to see, defined in an in-wiki format. I think it might actually be better to present some of the interesting work the community has already been doing rather than what I personally did to help with that. So for example, on German Wikipedia, a user, I believe the name is Mis (M-I-S), created a template which allows very simple charts: two-dimensional bar and area charts can be made with a very simple template by just supplying some parameters. This has just been released. Another very interesting one is a simple map template, also with a very simple interface, much simpler than Vega itself. If you look at a Wikipedia page, for example the timeline of the Moscow Metro, this chart visualizes how each line grew over time. So as the years progressed, the number of lines increased, and you can see which line grew.
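Under the hood, the templates Yuri mentions wrap the Graph extension, which embeds a Vega-style chart specification in the wiki page. Below is an illustrative sketch of such a spec, written as a TypeScript object; the field names follow the general shape of a Vega bar-chart spec and are assumptions for illustration, not the real template parameters.

```typescript
// A minimal bar-chart specification in the spirit of the Vega grammar that
// the Graph extension embeds in wiki pages. Field names are illustrative.
interface Point { x: string; y: number }

const barChartSpec = {
  width: 400,
  height: 200,
  data: [
    {
      name: "table",
      // e.g. a population history pulled from Wikidata, as in the
      // Hungarian Wikipedia usage mentioned later in the talk
      values: [
        { x: "1990", y: 120 },
        { x: "2000", y: 150 },
        { x: "2010", y: 170 },
      ] as Point[],
    },
  ],
  scales: [
    { name: "x", type: "ordinal", range: "width", domain: { data: "table", field: "x" } },
    { name: "y", range: "height", domain: { data: "table", field: "y" } },
  ],
  marks: [{ type: "rect", from: { data: "table" } }],
};

// A template user would only supply the values; the template fills in the rest.
console.log(barChartSpec.data[0].values.length + " data points");
```

The point of the simple templates is exactly this split: the full spec stays hidden, and editors supply only a handful of parameters.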
Another very convenient graphing visualization is the list of most expensive paintings. There's a big list of the most well-known paintings, and there's a scatter plot that shows when they were sold, sorry, when they were made and which artists they're by. So you can get much more information from the data. So there have been a lot of usages. English Wikipedia is actually lagging behind, but Hungarian Wikipedia, for example, has tens of thousands of usages of graphs, and some of them you can see here: basically a little bar chart showing a population history taken from Wikidata. So as you can see, the data is from all over the place, from different sources around the world. That basically concludes my presentation about graphs, and I will let Max continue with the maps. Yeah, thank you very much, Yuri. Max will present a little bit later; he's not mic'd up and ready yet. So I will open it up for questions. If you have a question, I'm here with the mic, or put it on IRC. By the way, I love this idea of editing wikis to create a graph. It's super powerful for an entity that stands on its own. Do you see any possibility of integrating the two, either using this graphing in our search dashboards, or maybe using it in addition to search dashboards for per-wiki reports, to generate a graph? Well, graphs tie very closely with the MediaWiki software, and it can take data from any source, obviously, including external sources, but at the end of the day this specific technology was developed as an on-site, on-wiki page component. We could adapt it; we can do very creative things with portals. For example, the Zero portal was actually created using wiki technology and some Lua scripting. So that could also happen this way. It all depends on the technology of the dashboard as we develop it.
If it's part of the wiki, then I see absolutely no problem integrating it. A question from IRC: can data used to generate the graph be taken from another table, for instance? Absolutely. Currently, it's possible to use any other page on the same wiki and use Lua scripting to extract the data from that page, which is then converted, filtered, formatted, and submitted to the graph for plotting. Which means that the table I showed of the most expensive paintings, in reality, it would be ideal if it were stored as a data blob, something like a CSV file, CSV text on a wiki page, and Lua would generate both the table and the graph from that data. Eventually, I hope that the data can be stored on Commons as files, and then this way we can show, for example, a map of the United States with individual states or regions highlighted, or things like that, or data like the expensive paintings again, stored as big blobs of data, parsed by Lua and visualized into something more useful. All right, thank you very much, Yuri. So let's give a round of applause for Yuri. Next up is Joseph. Joseph will talk to us about page views. Heya! So I will start sharing my screen. Hopefully you can hear me. Now, cool? Great. And can you see the slide I have now? Hmm, then it's bad. I will not use the presentation style. I will use this. Maybe I need to switch off my camera? Hmm. Share screen. Oh, it says loading. That's good. Oh, great. I'll try that way then. Works? Yeah, looks good. So, talking about page views, a project from the Analytics team. I'm Joseph Allemandou. I joined the Wikimedia Foundation a few months ago, in February to be precise. And page views is one of the core things I've been focusing on since arriving. So the idea is that we get a lot of HTTP hits at the Foundation. Last 18th of June, I counted 9 billion HTTP hits in one day.
And those were only the ones I knew about, because I'm pretty sure that some of them might not be in our logs. And the idea here is to try to make this data more useful. So the researchers came up with a definition for what they call a page view, which is basically a request for Wikimedia assets that can be counted as a single human-driven request for a self-contained piece of text-based content. What that means for me is we want unsampled data with no spiders, or as few spiders as we can, because some automatic bots do not describe themselves as spiders and they are very difficult to track. We want no actions except the ones that are actually content-presenting, like search. And we want to enhance this data with some useful additional information. The way we do that is we take all the hits we get on our front-end infrastructure as log lines. We log everything into our computing system called Hadoop, and that represents about 1.6 terabytes per day. So that's big. After that, in order to make this data easier to work with, we refine it and pre-aggregate it, to the point where the data is at a size we can use on a daily basis: about 6.5 gigabytes per day for the aggregated page views, and even smaller if we don't want too many dimensions, like the project views at about 4 megabytes per day. This data is for now available to internal staff or researchers under NDA, but our project is to make it available to the broad community through an API. Obviously, it will be sanitized first, and we will make sure that we cover all the legal and privacy aspects before releasing the data. Now, to show that it does work, I will also do a little demo with Hue, an internal interface to query the cluster. So basically here, there is a query for the page views of the top 10 countries of desktop users on enwiki.
It was for the same day I did the data for, which is the 18th of June. Using that same query against the refined requests took about 14 hours of CPU time. Since we have many machines running in parallel, it's not 14 hours of actual waiting, but it's still a lot of computation. Using the first aggregate, we go down to 16 minutes of CPU time, and using the most aggregated data set we have, it takes about 3 minutes. So we gain a factor of a few hundred in computation time. Quick demo, as I said. Let's go to Hue. This is an example of a query run on the cluster where I basically get the continents and the page views per continent for the full month of May, for users only, on English Wikipedia, and this is what we get. We have 4 billion page views for North America, 2 billion for Europe, 1.7 billion for Asia. The chart is not very beautiful because Hue is not perfect, it's not really a charting tool, but it gives you an idea of what we get. Another example would be this one, which is the top 50 pages over the month of May. What we can see here is that, you know, there was a new Avengers coming out, there was some boxing going on, some Game of Thrones, obviously, etc. And all this in reasonable computation time. I'll get back to the presentation. So now, the work in progress: we want to integrate this data into a dashboard, the Vital Signs dashboard, to make sure that the page views, regularly updated, can be viewed by as many people as possible. Then we will deliver cube-oriented data for researchers to work with, and then the plan is, as I said before, to build an API for the broad community to be able to request this data, and make our page views accessible to people. And I guess that's it from me. Do you have any questions? Joseph, any questions? Silence. Okay, we'll keep it moving. Thank you very much, and if you have questions you can always ask Joseph offline, or if you want to know more about this, you can ask to turn this into a tech talk.
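The refine-and-aggregate step Joseph describes can be sketched roughly as follows: raw request logs are filtered down to human, content-serving hits (the "page view" definition) and then rolled up along a few dimensions. In production this runs over Hadoop at terabyte scale; the field names below are illustrative assumptions, not the real log schema.

```typescript
// Rough sketch of refining raw hits into aggregated page views.
// Field names are assumptions for illustration.
interface Hit {
  uri: string;
  agentType: "user" | "spider";
  isContent: boolean;
  country: string;
}

function isPageview(h: Hit): boolean {
  // no self-declared spiders, and only content-presenting requests
  return h.agentType === "user" && h.isContent;
}

function pageviewsByCountry(hits: Hit[]): { [country: string]: number } {
  const counts: { [country: string]: number } = {};
  for (const h of hits) {
    if (isPageview(h)) {
      counts[h.country] = (counts[h.country] || 0) + 1;
    }
  }
  return counts;
}

const sample: Hit[] = [
  { uri: "/wiki/Cat", agentType: "user", isContent: true, country: "US" },
  { uri: "/wiki/Cat", agentType: "spider", isContent: true, country: "US" },
  { uri: "/w/api.php", agentType: "user", isContent: false, country: "FR" },
  { uri: "/wiki/Dog", agentType: "user", isContent: true, country: "US" },
];
console.log(pageviewsByCountry(sample)); // spider and non-content hits excluded
```

The pre-aggregation is what buys the speedup mentioned above: queries run against the small rolled-up counts instead of the raw hit logs.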
Same goes for any talk. So let's move on. Our next presenter is Stas, and he's going to talk to us about the Wikidata query service. Yeah. Guys, there are some questions that I see that are just filtering in for Joseph. I think we maybe should talk about those first. Okay. Just very quickly, the difference between Hue and Vital Signs. So Hue is an interface into Hadoop. It lets you write Hive queries and also lets you visualize the data. So it's for a very technical person. Vital Signs is meant as a high-level report card of metrics, like pre-computed metrics, that it serves. Yeah. We had another question about the page view API timeline. Oh, so the timeline: we're going to build an API this quarter, and it will be a very simple API, like stats.grok.se. So that's as much as I want to say about the timeline for now. And I saw another question. Are they both publicly usable sites? No, Hue is not public, because it gives you access to the raw logs and refined logs. There's information there that is not public. Yes. So that's a tool to be used internally to create data, by analysts and researchers. All right. I'll leave it there so we can move on. Stas is not ready. Okay. Okay. Are we ready? All right. Next up is Stas. Okay. So I'm... Okay. So I'm Stas Malyshev. I'll be talking about the Wikidata query service. So let's start with why we need this thing and what the thing is. So we have about 14 million data items in Wikidata, and that's just items. We probably have billions of links between them. So the question is how we make sense of all of it. And more importantly, how we let the people that use it make sense of it. So basically that's the purpose of the Wikidata query service. More formally, it is a way to ask complex questions about the data and get those questions answered. So, example questions.
So suppose we are interested in the life and work of the famous physicist Richard Feynman, and we want to know who he was working with and what these people were doing at the time that he was working with them. So how do we ask these questions? We are not there yet where you can just put it in a text field and have it answered, but we have kind of the next best thing. There is a language called SPARQL that enables you to ask questions about the data, and the Wikidata query service uses this language, and the software that is built to understand this language, to ask questions about the data. So the question that we talked about looks something like this, basically. It's not exactly how it looks, 100 percent, we'll see that later, but it's roughly the form in which you ask questions. So there is a formal language where you put in the data you want to know, and you get some answers like this. So he worked with Carl Sagan at Cornell University, and Carl Sagan is an astrobiologist, and he worked with Robert Oppenheimer, who is a theoretical physicist. So that's basically how it looks. So let's see now how it looks in reality, what we have. I'll need the... Yeah, I'll need the prompt here. Yeah. Okay. So can I? Yeah. So that's the site that we have now, the beta version of the project, and it's public. Everybody can go there and write queries. So we have an example query that lists the presidents of the United States and their spouses. So if we run this query, we get a list of answers, and we can all recognize that that's indeed the list of presidents and their spouses. And we have another thing that we have built: there is a thing called the Explorer. So if you want to know who Jane Wyman was, we can go and see, for example, what her occupation was. Yeah, it's a bit jumpy. She was an actor. And then we want to see, for example, what awards she received. So she received the Academy Award for Best Actress.
So we want to go there and see, okay, maybe who else received this award. Cool. So that's a lot of people. So we can explore who Liza Minnelli was. So we can learn about Liza Minnelli too, but I am not going to go too far into this, because it would get really busy on the screen. But I hope you get the idea. So basically we can ask a query and then go and explore the data that we have learned from this query. So, okay. So can we go back to the slides? Yeah. So I wanted to switch. Oh, does the switch work here? Okay. So, yeah. So what do we want to do with this thing? All this is nice, but we want to integrate it into the bigger picture. One example: say we search on Wikipedia and somebody asks for the list of United States presidents. So what we do now, we type "list of United States presidents" and we get this number of pages. And that's what we have now. But what if we could just go to something like that and show the actual list of the U.S. presidents directly? That's one of the things that we are thinking about doing. There are many more, and we welcome suggestions. So that's what we have about the Wikidata query service, which is brought to you by the Discovery team. So if anybody has a question. That's really cool. Are you using straight-up SPARQL libraries for the query and also for the visualization? Cool stuff. Are those libraries? Did you write all that stuff yourself? So we are not using any libraries to do SPARQL right now. We basically just write the SPARQL, send it to the engine, get back the data, and display it. There is one project that I know of for a PHP library to make it more convenient to use SPARQL, which would be very useful when we start doing queries from Wikimedia stuff. But right now you just write the text, basically the same way you use SQL, and send it to the server and get back JSON data or some other form of structured data.
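What Stas describes, writing the query text and sending it straight to the server with no client library, might look roughly like this. The endpoint URL and the entity/property IDs (P39 "position held", Q11696 "President of the United States", P26 "spouse") are assumptions matching the presidents-and-spouses example, not taken from the talk's slides.

```typescript
// Minimal sketch of querying a SPARQL endpoint over plain HTTP.
// Endpoint and IDs are illustrative assumptions.
const endpoint = "https://query.wikidata.org/sparql";

const query = [
  "SELECT ?president ?spouse WHERE {",
  "  ?president wdt:P39 wd:Q11696 .",            // position held: US president
  "  OPTIONAL { ?president wdt:P26 ?spouse . }", // spouse, where recorded
  "}",
].join("\n");

// The request is just an HTTP GET with the query URL-encoded:
const url = endpoint + "?format=json&query=" + encodeURIComponent(query);

// In a browser one would then fetch(url) and read the JSON result rows,
// much like an SQL result set.
console.log(url.substring(0, 60) + "...");
```

This mirrors the "just like SQL" workflow: text in, structured rows out, with display entirely up to the caller.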
And the visualization, the cool stuff, the animated graphs and everything. What was that? Yeah, it's done with vis.js. So it's basically a very simple visualization on top of vis.js. We just take the data from SPARQL and display it in a graph using vis.js. And we plan to improve that and make it prettier, less jumpy, and more friendly to the user. But right now that's the state of things. All right, no questions on IRC? All right. Okay. That's it. Thank you. All right, next up will be Dan, who will talk to us about Dashiki. Yes. Starting to share my screen now. Let me know if it works. Yes, we see it. Okay, cool. So Dashiki is a little dashboarding system that we built last year as a proof of concept to put up Vital Signs, which are kind of standardized metrics. We put it on ice after that and got to pick it up again recently, and we want to show off some progress. So the idea, the name, is dashboards configured on wikis: Dashiki. And, you know, dashikis are colorful clothing. So, what you want to do is make a dashboard. And what you're going to need is an idea of how you want to lay out the visualizations that you're trying to put on the dashboard. A dashboard, compared to a single visualization, serves the purpose of telling a story around a particular set or area of data. You want to look at a lot of data in a simple way, and a layout will solve that for you. You want to be able to update the configuration, add graphs, remove graphs, whatever it may be. And you're going to need a web server to serve it on. So a layout is something that a lot of people don't think about, and what you end up with, if you don't think about it, is these endlessly scrolling pages that just put graphs on top of each other and keep going. And at some point you go: that's too many graphs.
No one can see any of them. So that's why we recommend talking to a designer. That's a good idea. But if you do that, you come up with, hopefully, something that's useful. This example is a comparison layout between VisualEditor data and wikitext editor data. This was needed to think about the launch of VisualEditor and its progress. And you can see here a couple of different types of graphs. This is hosted at, I'll paste the link in the chat later, but it's one example. And this is another. We have 852 projects, and we have a bunch of standard metrics, so showing all of those is tricky. We put a couple of autocompletes and things around it and made it look pretty, and it's much more palatable. So that's what we mean by layout. So if you think of a layout and then come up with some visualizations for it, you combine all that into little components we build using Knockout components, and I'm happy to talk about that. I'm more interested in what you guys think about this and where we can go from here. But yeah. The configuration that you need for this particular dashboard is very simple. So you'll see that by default we have these eight Wikipedias that are showing up, and the daily pageviews metric. And if you go in here, you'll see the default projects, default metrics, and the metrics that are available overall to select from. So that's all the config that you need. And as you can see, it's just a wiki page on Meta. It doesn't even have a nice extension. It probably should have an extension that registers the config namespace, but we haven't gotten to that. And Dashiki lets you build the end result that you serve from a web server just by doing this. So you just run gulp, specify the layout, specify the config.
And it packages, minimizes, and concatenates the JavaScript and the CSS and everything, and puts it in a folder that you can then easily serve with a simple virtual host. Or you can take the resulting files and serve them on a wiki, or anywhere, because Dashiki doesn't have a server. It does everything over GETs, and it assumes CORS and everything is set up. So we went around and did that to all of the places we host static files around our environment. So datasets.wikimedia.org is an example, and a couple of Labs places where we host stuff all have CORS. So yeah, this lets you create dashboards. They look like this, or like whatever you want. So it really is up to you. As you can see, you can have custom visualizers; anything that JavaScript has to offer, you can easily package in a tight little Knockout web component and build a dashboard. So going forward from here, what we really want to do is make this easy for people to add to, easy for people to stand up on their own and wrap around their data. So I just want to open it up and ask what you guys think about it, if we're missing anything from the vision and how we built it, or if you want to know any more details about the code or anything like that. There are some questions on IRC. Is there some good documentation for Dashiki so people can play with it? No, not really. It was on hold for a while and we just executed on the vision that we had when we first started it. So there's no documentation yet. But mostly I'm not really sure how people are thinking about this, if it's interesting to anyone. So I'm not really sure what angle to take for the documentation, whether to focus more on the user perspective or the contributor perspective, et cetera. Another question from IRC.
Is Vital Signs the next generation of flow-reportcard on WMF Labs? Is it hard to upgrade such report cards? Yeah, flow-reportcard, I can click on it in the IRC here and show people. So this is going to suffer from that endless scrolling thing where you just have graphs on top of graphs. So let's say we were to migrate this. Basically what we'd want is: give us a better layout, give us the parameters that you're looking at. So perhaps, it looks like we have different types of actions here, and maybe these could all be in the same graph, with people just selecting and deselecting different things. So give us an idea of what that looks like, maybe work with a designer, and then we can quickly throw up a layout. Dashiki makes it really, really easy to throw up a layout around that design, and then we can migrate it. Yeah. So that is the idea of it: to get away from managing multiple different visualization and dashboarding tools, to just one. And Dashiki is like a dashboard-building tool. Yeah. One last question. So when would I use Dashiki, and when would I use the Graph extension? So the Graph extension you would use for tight integration with wikis. Right now Dashiki doesn't build dashboards that are particularly friendly to wikis. You could host it on a wiki, but it won't spit the files out in the way that an extension would expect them, like in the same folder structure, stuff like that. But mostly I think you'd use Dashiki when you want a pretty consistent layout, when you have a couple of dimensions, like projects and metrics, for example, with large cardinality, like a thousand projects, and you want to show all of them, but you want navigation around your data. That's when you would use Dashiki, I think.
Because if you think about a graph made with the Graph extension, let's say you wanted an instance of it for each wiki that we have, that would be kind of hard. Cool. Thanks, Dan. Give Dan a hand. So up next is Max, with maps. All right. So we at the Discovery team at Wikimedia want people to discover more stuff on Wikipedia: more information, more articles to read, more articles to contribute to. So in addition to links in text, we want people to discover stuff based on geography. For example, if we know the coordinates of something, it should be perfectly reasonable for people to look up what's around it. So we want people to be able to do this: see what's around, discover more information, go to that page, read it. So basically this is a dynamic map, like Google Maps, but with open data from OpenStreetMap, and a map specifically tailored to our requirements. So here we can see London. Unfortunately, the server, I'm sure, is slightly overloaded right now. But yeah, you get the idea. Another thing that we will be working on shortly, as soon as we get the actual dynamic maps, is to replace these static location images. People create them manually or with some scripts and then just upload them to Wikipedia, which is a serious waste of time. We want to be able to create these images dynamically, and when people click on them, they will also see a dynamic map with all those articles from Wikipedia, maybe data from Wikidata, whatever. So that's it. Questions? Do you have a timeline for when you would replace the maps in the articles? We will deploy something into production within the next quarter, and within this time frame we will start integrating with Wikipedia. Of course, a lot of stuff depends on the communities here, because we don't want to force them to use something in particular. We want to offer them something superior that they will want to switch to. So it will still be up to the community to update articles to use this, right? Yes.
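The "what's around" lookup Max describes, finding nearby articles given a coordinate, can be sketched as follows. A production service would use a spatial index rather than a linear scan; the plain haversine distance below just illustrates the underlying idea, and the place names and coordinates are illustrative.

```typescript
// Sketch of a nearby-articles lookup using great-circle distance.
interface Place { title: string; lat: number; lon: number }

function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const R = 6371; // mean Earth radius, km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.pow(Math.sin(dLat / 2), 2) +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.pow(Math.sin(dLon / 2), 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

function nearby(places: Place[], lat: number, lon: number, radiusKm: number): Place[] {
  return places
    .filter((p) => haversineKm(lat, lon, p.lat, p.lon) <= radiusKm)
    .sort(
      (a, b) =>
        haversineKm(lat, lon, a.lat, a.lon) - haversineKm(lat, lon, b.lat, b.lon)
    );
}

const articles: Place[] = [
  { title: "Big Ben", lat: 51.5007, lon: -0.1246 },
  { title: "Tower Bridge", lat: 51.5055, lon: -0.0754 },
  { title: "Stonehenge", lat: 51.1789, lon: -1.8262 },
];
// Searching around central London within 10 km excludes Stonehenge.
console.log(nearby(articles, 51.5014, -0.1419, 10).map((p) => p.title));
```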
Any other questions? Thank you. Let's give a hand to Max. And our last presentation for the day: James. It will just take a second to set up the slides. Are we good? Sweet. So first of all, please pardon the rigidity of these slides. I sort of slapped them together at the last minute, not realizing that I would show them today. So I'm going to talk about types in JavaScript, manifested as TypeScript in this case. I recently had to write some JavaScript for one of our tasks and realized that JavaScript can actually be kind of hard to track in my head. It's too small. But I decided to use TypeScript as sort of a crutch, and that led to an interesting discussion on the value of types and whether or not TypeScript can be useful to us. So this is not to argue that we should move from one language to another. This is mostly just an interesting exploration of some new ideas. So the agenda of this presentation is probably way too long for five minutes, so I'm going to kind of fly through it. But I want to talk about some of the advantages of TypeScript and of types in general. Oh, yes. Sorry. So what is TypeScript? TypeScript is a superset of JavaScript developed by Microsoft, and it gives you everything JavaScript gives you as well as type annotations. So you can make statically typed function declarations, statically typed variables, and so forth. And we'll see a bunch of it in the next few slides. And actually, that question leads me into my next point. It's something new. It's something to learn. So of course there's an overhead to investing in learning a new language. I hope to show that it might be worth at least exploring that investment. My clicker is not working. It was a second ago. Yeah, it's on. You just advanced for me. Go ahead and pass this. In the interest of time, I think we'll skip a couple of slides. Awesome. Okay, so here's sort of a thought experiment.
What does this function do? This is JavaScript. The function is named who, it has one argument x, and I've hidden the implementation. And to save you the time, I'll go ahead and tell you that there is absolutely nothing you can infer from this function signature in terms of guessing what this function might do. It looks like it's a unary function, right? It looks like it takes one argument, and maybe it doesn't do anything with it. Because of the way JavaScript works, it could ignore that argument. It could use other arguments that aren't even listed there, via the arguments object. It could mutate some state. It could do nothing. It could return void. It could assume x is a number and double it. Basically, we know absolutely nothing about this function. So for us to understand what it means, we are essentially required to go into its implementation and read the source code. So this is the same function, but now I've added a type annotation, and this is TypeScript syntax. What this says is that who is a function that, for all A, where A is any arbitrary type, takes one argument that's an A and returns an A. So it doesn't look like we really added anything, but actually this thing is pretty darn profound. Think about what it would mean to define a function that takes something of any type, where we don't know what the type is, we just know it has some type represented by the variable A, and returns a thing of that same type. Well, it turns out there's actually only a single possible implementation of this function, and that's the identity function. Can you press right? It's not working on my clicker. There we go. So I guess these don't work too well with the clicker. But that's cool. So here's the implementation revealed. And for this thing to compile, with the little caveat that we ignore null and undefined, the only possible way for it to compile is to just return x.
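In TypeScript syntax, the example James walks through looks like this. Ignoring null/undefined (and tricks like throwing or looping forever), a function of type `<A>(x: A) => A` can only return its argument:

```typescript
// A fully polymorphic signature narrows the space of implementations:
// this must be the identity function.
function who<A>(x: A): A {
  return x; // the only total implementation of this signature
}

// The compiler tracks the type through every instantiation:
const n: number = who(42);
const s: string = who("hello");
console.log(n, s);
```

The signature alone tells you what the function does, which is exactly the point being made: no need to read the implementation.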
And I'll get into some reasons for that in a minute. But I think with this example, you could walk away from this presentation right now, having seen only this, and get 90% of the points that I'm trying to make. Here's another example I'm going to go into in much more depth. Let's say we encounter this code. We see var z equals multiply of 6 and 7. Because the function is called multiply, we can assume that it's going to take those two arguments and multiply them together. So it would be reasonable to assume that z is going to be 42. Do you mind if I just use the keyboard? This thing doesn't play too well with these slides. Awesome. Thank you. Yes. Okay. Much better. Okay. So I'm going to give you a whole bunch of possible implementations for multiply. Here's the obvious one: multiply takes x and y, multiplies them, and returns that value. So in that case, it does what we expected it to do. But we could also implement it this way: it could return the sum of x and y, and that would be perfectly valid. It could do nothing. It could multiply the two values, but maybe we forgot to type return, so it actually returns undefined, and we wouldn't really know this until runtime, and maybe not even then. It could ignore one of the arguments. It could return something entirely different from the arguments. So in this case it's not even returning a number, which we thought it should. Oh, man, I'm almost out of time. I'm going to really fly through the rest of this. And on and on and on. So it can do all these things that aren't what we think it should be able to do. So let's turn this on its head rather than run away screaming, right? What if we throw in some types here? So let's say that we have a strongly typed function multiply, and we write it this way. So now, what we see is that multiply takes two integers, sorry, takes two numbers, and returns a number.
Let's see how that affects those functions from the previous slide. So our first function is still correct. The second function is also still valid, even though it returns kind of the wrong thing. This function will not compile: if we try to write it in TypeScript, the compiler will say, hey, you told me multiply returns a number, this doesn't return a number, something's wrong. And so now we've caught one of our problems at compile time rather than at runtime. This one is also invalid and incorrect: our type annotation says this returns a number, and we can't return a string, so the compiler catches that one for us too. And on and on; sorry, in the interest of time, I have to rush. Okay, so this is a little better. We've eliminated some potential error cases from the code before it even reaches code review. And this one is way too in-depth for me to go into, sorry, but it's similar to that first example where we have the type parameter and it made things really awesome. So maybe talk to me after and I'll explain why this is cool. So I've shown very briefly that there are some advantages: we can annotate our functions with types, and that prevents a certain subset of errors from entering our code. But there's a catch. This is TypeScript, not JavaScript. A browser doesn't know how to run TypeScript; we have to produce JavaScript with the compiler. And the generated JavaScript might not be something we want to deal with; we want to deal with the TypeScript source. So what happens when we compile this code in particular? Fortunately, because TypeScript is a superset of JavaScript, when you compile it, unlike with CoffeeScript or some of these other languages like PureScript, the output tends to look almost the same as the TypeScript itself. So it's not too much of an overhead, I think, when it comes to reading the JavaScript generated from TypeScript. So that's the end.
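The multiply walkthrough can be condensed into one sketch. The variants TypeScript rejects are shown as comments with paraphrased compiler messages; as James notes, the annotation catches some mistakes at compile time but not all of them.

```typescript
// The multiply example with its type annotation added.
function multiply(x: number, y: number): number {
  return x * y;
}

// Still compiles, but is semantically wrong: types can't catch everything.
function multiplyWrong(x: number, y: number): number {
  return x + y;
}

// Rejected by the compiler before the code ever runs:
//
// function forgotReturn(x: number, y: number): number {
//   x * y; // error: a function whose declared type is 'number' must return a value
// }
//
// function returnsString(x: number, y: number): number {
//   return "answer: " + x * y; // error: type 'string' is not assignable to type 'number'
// }

const z = multiply(6, 7);
console.log(z); // 42, as the name and the annotation lead us to expect
```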
So the conclusion, which I totally don't have on these slides, is that this is maybe worth using, if nothing else, just as a tool to help write the initial JavaScript. Even if you then throw away the TypeScript and put the JavaScript in version control, that's at least how I intend to use it, and then maybe we'll keep the TypeScript around, or maybe not. Question. Is this basically strong typing for JavaScript? Yes, this is strong typing for JavaScript. Well, maybe that's a little bit too simple. JavaScript does have strong typing, but this is static typing for JavaScript: declaring and dealing with the types of things when we write the code rather than when we run the code. I have a question. So what was the project that led you to consider this, or what was the problem you were trying to solve? Yeah, the problem I was trying to solve was adding event logging to the Wikipedia portal, www.wikipedia.org, and that meant I had to write a whole bunch of JavaScript to go through all of the links and all of the form elements on that page and instrument them with event logging. There was a lot of manipulation of things, and I had to know: is this a link, is this a form, and how do I manipulate each of them differently? So it wasn't a problem that specifically called for types; it just happened to be a JavaScript problem, and any sufficiently complex JavaScript problem is too hard for me to understand, so I found that TypeScript was pretty helpful. Thank you. Any other questions? Okay, thanks. Let's hear it for James. So that will conclude our talks. Thank you very much for attending, and we'll send out an email once the recording is posted to YouTube. Thank you.