the French Wikipedia, that should be within a few seconds. That's what users will expect. And so we need to make sure that we're building systems that actually meet the user need, and not just so I can write a scientific paper that claims that my decisions are fine. The purpose of technology is to serve, not to be the technology. And so if that means we're going to have to compromise on ways of doing things in some cases, then so be it.

Right now, we're talking about using JavaScript and Python as our two example languages. Originally, we were going to ship with just one. And I pointed out that if we wanted to ship with multiple languages in the long term, starting to ship with just one means we'll always only have just one. Scribunto, technically, is a multi-language system, but no one's ever added anything other than Lua to it. And actually, at this point, it is so concreted in that it would be really hard to add a second one. And so that's why we shipped with two. Maybe we'll support 20, maybe six, sometime in the future. What those are, or why we have them, might change over time. We might do dynamic recompilation of things into WebAssembly or something scary. And that might speed things up, but also make it harder to reason about the system. So right now, we're at a pretty simple level.

All our stuff is obviously open source. You can go look at my code and tell me I'm an idiot. You're probably right. And I'd really love feedback or thoughts anybody has. If you want to get involved, I'd be glad to talk about it. But where we go in the future is going to be driven by the users, by what we're building and who we're building it for. And so I don't really want to make a decision now, or even a promise now, because that may not hold true.

I wanted to disagree on two points quickly with what James said. First, he's not an idiot.
And second, I'm not exactly sure that we really need to propagate an update to the Wikipedias immediately, within seconds, if something changes on Wikidata. Sure, it would be desirable. But one of the things that I did in preparation for the whole Abstract Wikipedia proposal was to check how quickly knowledge currently propagates between the Wikipedias, for example when the mayor of a city changes or something like this. And we have places where languages take up to two or three years to update on those things, if they update ever. So something that takes a day or two is still better than that, right?

And it doesn't have to be automatic, so that it always propagates within seconds. It can also be something like: well, for this Wikipedia, I want something like action=purge now, or whatever. Kill the caches and try to recompile it. It will take a little bit, but it can be done in a few seconds. In other cases, it might take a day or two until it is there. That's still faster than our current methodology, which is basically the baseline that I'm taking here. But again, we are still discussing those things. We will discuss it also with the community and see what is actually acceptable, and so on. We will have to find something that works with the infrastructure and that works with the community. But not all of these things are decided yet. We're going to have ongoing discussions over the next few years with all of you.

So next, this gentleman has had his hand up for a while. Sorry, Leila.

Hi. So if I get it correctly, with the license that we have for Wikifunctions and the code for JavaScript and Python, there will be no ownership of the implementations, et cetera. I cannot say this is my implementation of this function, or something like that. And so this is a multi-layered question, because this is Wikifunctions: can other people edit my implementation?
Because different people have different approaches to how to solve something. So if they do that, and I don't necessarily agree with their implementation, what would happen? Or would they have to create another implementation, like multiple JavaScript implementations? And which one is better, and which one is more precise, et cetera? Thank you.

So, on ownership: our model is much more like the English Wikipedia, and I think most Wikipedias and wikis, which is that we're against ownership of objects, because community members leave and the objects will stay around. And so it's important that those are owned across the community. So there isn't going to be a case of "this is Bob's implementation and only Bob is allowed to edit it". Honestly, though, that's really not for me to say; that is a community decision. And if the Wikifunctions community decides that they want to have named individuals, and this is Bob's implementation, and as a community convention people expect only Bob to edit it, then obviously that is for the Wikifunctions community to decide. And they may change their mind as well.

From a very practical point of view, though, once a function is approved and an implementation is up there so that you can run it as a regular user, it goes to a higher security level. Because at that point, if I then change that implementation, I potentially break all the users who are currently relying on that function. So that doesn't mean you can't do it, but we have different editing rights added in the system. The base level is a regular user, who can create functions and create implementations. This is not live yet; that's how it will be. The next level up is a functioneer, who can approve those changes. They can say: yep, I agree, this is a good implementation, it passes these tests. They can add a new test case and say: yep, this test case is a good test case. They can also remove them and say: mm, this doesn't work anymore. And then there's a super level called a function maintainer.
And that has the ability to actually edit an implementation that is currently live. That is going to be a very big, scary, risky thing, because it's possible for you to return false rather than true in your code when you make a change. And that could instantly cascade out to every Wikipedia article all at once, and you break every template. So that's a big thing that we would probably want, as a community, to think very carefully about how that would work. Maybe instead you should de-authorize the implementation first, make the changes, agree the new test suite, and then reapprove it. Or maybe you create a new function and get people to switch over, whatever that is. Again, that is a community process that doesn't exist yet, and I don't want to pre-assume where that would go.

Also, another thing is that a function can have multiple implementations in the same programming language. So you can potentially have several different algorithms implementing the same function. As James was actually showing earlier for the concatenate, the join-strings function, we have three implementations in JavaScript, and all three of them can be approved. And the system will basically look at how each runs against the test cases, how many resources it costs, and then basically decide automatically. Actually, the community cannot really decide which of the implementations to run. Every approved implementation should be fine, and the system should be deciding: given the resources that we currently have, this is the best implementation to run.

Yeah, so right now we're making a very blunt choice, which is: averaged across the test suites, which is the fastest implementation? But that ignores potentially pathological cases that aren't currently in the test suites. It ignores what resources are available right now. We're going to improve that. That's definitely not the long-term approach; that's just where we got to in order to ship.
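To make that blunt choice concrete, here is a minimal Python sketch, with entirely illustrative names (`join_with_loop`, `test_cases` and so on are my own, not actual Wikifunctions internals): two independent implementations of a join-strings function, a shared set of test cases contributed as plain data, and a selector that keeps only the implementations passing every test and then picks the one with the lowest average runtime across the suite.

```python
import time

# Illustrative sketch only; none of these names are real
# Wikifunctions internals.

# Two independent implementations of the same "join strings" function.
def join_with_loop(items, sep):
    # Naive approach: build the string piece by piece.
    result = ""
    for i, item in enumerate(items):
        if i > 0:
            result += sep
        result += item
    return result

def join_builtin(items, sep):
    # Idiomatic approach: delegate to str.join.
    return sep.join(items)

implementations = [join_with_loop, join_builtin]

# Test cases contributed as plain data, separate from any implementation:
# (arguments, expected result) pairs.
test_cases = [
    ((["a", "b", "c"], "-"), "a-b-c"),
    (([], ","), ""),
    ((["x"], ", "), "x"),
]

def passes_all(impl, cases):
    return all(impl(*args) == expected for args, expected in cases)

def average_runtime(impl, cases, repeats=1000):
    # Average wall-clock time per call across the whole suite.
    start = time.perf_counter()
    for _ in range(repeats):
        for args, _expected in cases:
            impl(*args)
    return (time.perf_counter() - start) / (repeats * len(cases))

# Only implementations that pass every test case are candidates;
# among those, pick the one that is fastest on average.
candidates = [f for f in implementations if passes_all(f, test_cases)]
fastest = min(candidates, key=lambda f: average_runtime(f, test_cases))
```

The separation here also shows why the test cases matter so much: they are both the correctness gate and the benchmark workload, so an input pattern missing from the suite is invisible to the selection.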
And one thing we might end up having is different implementations getting run depending on the inputs. So if you ask for the English plural form of something, then we have a function that's really fast for English, because it's hard-coded in, but slow for every other language, because it has to fetch it from Wikidata. And so we'll call that function if the input is English; but if it's Hebrew, or a complex language, then we'll call a different thing that has a lookup table, which is slightly slower overall but really fast compared to the first function. And so it's not like there's a one-and-done decision; it might be a dynamic decision in the system. And this is vaporware: no code exists until it's live in production, and this doesn't exist. So I don't know what it will do, but that's the kind of area we're thinking about.

But I love this, because this is a beautiful place where I hope that external researchers or groups, for example in the programming languages community, will actually pick it up. This is a challenge where you can put a PhD thesis into it, and someone else will go and figure it out. And I'd be happy to just use those results. And Wikifunctions, just like Wikidata, will provide a lot of places where people can also contribute as a third party, which we'll be happy to integrate. Because honestly, no matter how big the team is, we can't do everything.

Leila from the Wikimedia Foundation Research Team. Congratulations on the milestone. So actually, Denny, I think you started touching on it. I wanted to ask: do you all have thoughts about whether something like Wikifunctions can engage people who are currently not engaged with the Wikimedia projects, and help them become Wikimedians? Maybe they didn't consider it so far for a variety of reasons. Is there a particular group of people, or groups of people, that you think could join the community as a result of this project?

Yes, yes, absolutely.
So, just like with Wikidata: I think researchers have shown that about half of the people who are active on Wikidata had not previously been Wikimedians, and about half of them come from other Wikimedia projects. I hope that we will also have an influx of new people who are interested in functions. That also sounds kind of limiting, right? I mean, people who are interested in functions. Actually, I think there's an interesting space in that there are certain domains which can be functionized for people who don't necessarily regard themselves as having a computer science background or something like that.

Just to give a few examples, though the problem with those examples is that I never know which one will actually turn out, obviously, and there might be something completely different that I don't have on the radar at all. But for example, how is this called? Knitting: the knitting community, for example, has over the last two decades or so developed amazingly complex notations, and ways of checking whether certain knitting patterns can work and so on, which would be beautiful to capture in something like Wikifunctions, and which they could benefit from. In music, you can do a lot of things like transposing melodies, checking for keys and so on, which you could do in Wikifunctions. And there are so many places where you can use Wikifunctions.

And one of the advantages of Wikifunctions is that you don't necessarily need to be a programmer, because a lot of these things can happen, if we have the right base functions, with compositions that bring them together. This is how we want to reach out, for example, to the natural language generation communities, so that they can actually work on morphological functions without necessarily being Python programmers or JavaScript programmers.
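The music example mentioned above, transposing melodies, gives a feel for how small such domain functions can be. Here is a hypothetical Python sketch; the note names and function names are my own illustration, not Wikifunctions entities:

```python
# Illustrative sketch of a music function: transpose a melody by a
# number of semitones, using pitch classes without octave numbers.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose_note(note, semitones):
    # Shift the note's position in the chromatic scale,
    # wrapping around at the octave.
    return NOTES[(NOTES.index(note) + semitones) % 12]

def transpose(melody, semitones):
    return [transpose_note(note, semitones) for note in melody]

# For example, a C major triad moved up a whole tone (2 semitones)
# becomes a D major triad:
# transpose(["C", "E", "G"], 2) -> ["D", "F#", "A"]
```

Note how `transpose` is just a composition over `transpose_note`; that kind of composition of base functions is exactly where contributors who don't see themselves as programmers could work.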
Even if they just provide us with test cases, for example, that is something they can do: one community can create the test cases, and another community can then create implementations against those test cases, and so on. So the way that Wikifunctions is designed, with the separation of test cases and implementations, hopefully supports people with different skill sets and capabilities working together on getting those things out.

And the whole idea of having the compositions be language-independent will also hopefully reach people who currently are not programmers simply because they don't speak English. Benjamin Mako Hill, who's also somewhere at the conference, for example, has shown in papers that the language barrier, the fact that most programming languages are based on English, is actually a significant wall for people contributing code. So we put a lot of effort into the ability to create compositions even if you don't speak English: that you can create compositions in Polish, as I've seen, that you can create compositions in Hebrew and so on, without having to learn English, without having to learn JavaScript and so on. So there is potentially a big pool of people that we can hopefully activate for this project and get in. I obviously have no idea who we will actually get.

We're getting close to the end. I want to show, let me think here, sorry, we're still showing the slides. I also want to give a shout-out to the amazing team that has been working on Wikifunctions and Abstract Wikipedia; thanks to all of them that we are now getting to the point of actually launching. We are also hiring; we have our positions out there. You can talk to us about that if you're interested. And yeah, let's go back to those. So we still have five more minutes for more questions. We have a session tomorrow where we want to teach, actually hands-on, how to work on Wikifunctions.
And we also want to hand out the functioneer right to people who are here at the conference. All right.