My name is Grant Ingersoll, and I'm the Chief Technology Officer here at the Wikimedia Foundation. The fact that we as a project and a community support over 300 human languages is, as somebody who's written multilingual software, just mind-boggling. Nobody else does that. Nobody. Citation needed, and somebody can fact-check me there, but I don't believe anybody in big tech supports as many languages at the scale that we do. And that's really a testament to all the folks who have been working on this for 20 years and said, you know what, we're going to do this for everybody, not just English or not just French.

When I think about the way the product and technology departments work together to serve the movement, there's a lot that goes into it. Some of it is the obvious stuff: we run something like 2,400 servers spread out across four, going on five, data centers around the globe, perhaps even more in the near future. We support somewhere around two million plus lines of code. We do everything from installing those servers in the data centers ourselves, because, as I mentioned earlier, one of the key things we want is to maintain our independence, and that means running our own servers. We want all of our software to be open source, in line with the open knowledge movement, so we use only open source in production, and that requires a good deal of engineering work to make sure it all runs as seamlessly as possible for our sites.

We also support an entire offering we call Wikimedia Cloud, which basically allows anybody in the world to show up and say, hey, I want to do Wikipedia things, or technically Wikimedia things, and we will host that code, help secure that code, and run that code. There is an absolutely amazing ecosystem under the hood here of bots and tools that people in the community have written, running on that cloud platform, that help keep the sites up and running and a bit more effective. A fun little fact: something like 30% of all edits are actually made by bots written by community members running on those cloud services. So that gives a little taste of how the budget is spent.

We also do research. We are continuously adding features and functionality. We are working with our communities on issues ranging from "how do I make sure I can have great discussions on our talk pages?", all the way through to "how do we make sure we have effective tools for dealing with trust and safety situations?", because the world is incredibly complex and we need to do the best we can to support our communities' safety, all the way through to "how do we make sure these sites are secure?"

For those who do computer science, there's an old joke that there are two hard things in computer science: the first is naming things and the second is cache invalidation. We actually made some really significant improvements in the way we do caching, such that we are far more efficient at delivering content around the globe and keeping it fresh. And that freshness directly translates into our readers and editors having a much better experience at the end of the day.
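As a rough, hypothetical sketch of that purge-on-edit idea: cached pages normally expire after a time-to-live, but an edit can purge the cached copy immediately so readers see fresh content within seconds. The class and numbers below are made up for illustration; Wikimedia's real setup is a multi-layer CDN, which this does not attempt to model.

```python
# Illustrative only: a toy page cache that serves rendered pages with a TTL
# and is explicitly purged when an article is edited, so readers get fresh
# content within seconds instead of waiting for the TTL to expire.
import time

class PageCache:
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.entries = {}  # title -> (rendered_html, stored_at)

    def get(self, title, render):
        entry = self.entries.get(title)
        if entry is not None:
            html, stored_at = entry
            if time.time() - stored_at < self.ttl:
                return html            # cache hit: no re-render needed
        html = render(title)           # cache miss or stale: re-render
        self.entries[title] = (html, time.time())
        return html

    def purge(self, title):
        # Called when an edit lands: drop the stale copy right away rather
        # than waiting up to `ttl` seconds for it to age out.
        self.entries.pop(title, None)

cache = PageCache(ttl_seconds=3600)
page = cache.get("Example_article", render=lambda t: f"<html>{t} ...</html>")
cache.purge("Example_article")   # an edit landed; the next read re-renders
```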
Like, if Maradona dies or Kobe Bryant dies and you want to read about it, you don't want the outdated news from 30 seconds ago; you want the outdated news from two seconds ago.

So the first thing I'm really excited about is what we are calling global site performance. You can think of it this way: we want to make sure that no matter where you are in the world, you have, and I'm going to hedge here a little bit, roughly the same experience as someone who lives near one of our main data centers, which is located near Washington, D.C. in the United States, or, if you're in Europe, near Amsterdam, where one of our main caching centers is. No matter where you are, you should have roughly that same experience.

The second thing, and I've talked some about this already, is that we want to make sure the site is secure and reliable. We know that the landscape around the security of our sites is ever-changing, in part because legal frameworks keep changing. So in order to truly withstand outages across any of our sites, we want to make sure we have offsite backups from which we can restore. We spend a lot of time making sure that if something truly bad happens, knock on wood it doesn't, we can deal with it. There are disaster planning scenarios we work through and run through; the gory details are all under the hood, and I don't want to bore our listeners with them.

And then last, I'd be remiss if I didn't highlight that one of the things these projects have a real opportunity to do on the software front is to modernize some of our software practices and tooling. We are working to establish what I would call a better welcome mat. One of the things we've under-invested in, and it's almost sad to say because we are a site built on documentation, is our software documentation; it could use some more love. So we're making some investments in documentation. We're also making investments in how we make our content and our code bases more inclusive and more welcoming, because at the end of the day, we want to reach everybody around the world, and that's going to require developers from around the world as well.

There's also some continued focus on the resilience of our sites. We are continuing research on what I believe the community calls knowledge gaps: the notion that one wiki might have information about a particular person or subject that another wiki doesn't, or that a wiki is missing information on a particular category. We're trying to help editors fill in some of those gaps through suggestions.

We are also continuing to add hardware capacity, so we are bringing up a second caching center in Europe. A fun little fact: 50% plus of all traffic to Wikipedia goes through our European caching center, which is currently in Amsterdam. That is a single point of failure, if you want to get into tech talk. It also means that most of our traffic from Africa, the Middle East, Russia, and the like in all likelihood goes through that one center, which isn't always a great performance experience.
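To make the "nearest caching center" idea concrete, here is a hypothetical sketch of latency-based selection. Real deployments typically steer traffic with GeoDNS or anycast rather than client-side probing, and the center names and latencies below are invented for illustration.

```python
# Illustrative only: pick the caching center with the lowest measured
# round-trip time for a client. Center names and numbers are made up.
def nearest_center(rtt_ms_by_center):
    """rtt_ms_by_center: dict mapping center name -> measured RTT in ms (None if unreachable)."""
    reachable = {c: rtt for c, rtt in rtt_ms_by_center.items() if rtt is not None}
    if not reachable:
        raise RuntimeError("no caching center reachable")
    return min(reachable, key=reachable.get)

# A client in Nairobi probing each center (hypothetical measurements):
probes = {"us-east": 210.0, "amsterdam": 130.0, "new-eu-pop": 95.0}
print(nearest_center(probes))   # -> "new-eu-pop"
```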
So for both resilience reasons and for performance reasons, we're adding a second caching center, and given where it's located, it should better serve traffic to Europe, the Middle East, and Africa. So I'm excited about that as well.
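As a rough illustration of the knowledge-gap idea mentioned above: every concept in Wikidata carries sitelinks to the language editions that have an article about it, so comparing those sitelinks against a list of wikis shows which editions are missing one. This is only a sketch against the public Wikidata API, not the Foundation's actual research tooling.

```python
# Illustrative sketch: use the public Wikidata API to see which language
# editions lack an article for a given item.
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def missing_languages(item_id, wikis):
    """Return the wikis in `wikis` that have no article for `item_id`."""
    resp = requests.get(WIKIDATA_API, params={
        "action": "wbgetentities",
        "ids": item_id,
        "props": "sitelinks",
        "format": "json",
    }, timeout=10)
    resp.raise_for_status()
    sitelinks = resp.json()["entities"][item_id].get("sitelinks", {})
    return [wiki for wiki in wikis if wiki not in sitelinks]

# Q42 is Douglas Adams; check a handful of language editions.
print(missing_languages("Q42", ["enwiki", "frwiki", "swwiki", "yowiki"]))
```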