What do you think? Shall we get started? Yeah, might as well. All right, Chris, did you want to add anything before we get going? No, I just want to make sure you all can hear me. Sorry, I meant Chris A. Oh, yeah. Too many Chrises. Okay, we're good. Right. So without further ado, let's hand over to Chris Nova and Michael "DC" for the Falco incubation review.

Okay, I'm going to look at slides on my end, so I'll just let folks know when to bump to the next slide. Thanks for letting us present our proposal to move to incubation today. What we want to go over is: a quick overview of what Falco is and the problems we're trying to solve, in case you're not familiar with it or just want a brief update. Then we'll go over some metrics that DC put together for us, talk about how far we've come since last year, talk about folks who are integrating with Falco — building software that uses Falco — and of course folks who are using Falco in its standalone form. Then we'll cover our roadmap for the upcoming 12 months, and finally why we want to move to incubation and why we think we've earned it. Okay, next slide.

Okay, so what is Falco? The goal we've agreed upon in our open source community is that we're trying to solve cloud native runtime security. This is security for any cloud native software, and we want to do it at runtime, which is drastically different from some of the other solutions available today, since we run essentially as a daemon over time. We're focusing on Kubernetes intrusion and anomaly detection, and we want to integrate with a wide variety of services for alert collection and correlation.

A brief history of Falco: we joined the Sandbox in October of last year — big shout-out to Brian and Quintin for helping us get into Sandbox — and the project was started out of Sysdig in May of 2016. So we've been around the block a few times, and we've been iterating on the project for a number of years now. Okay, next slide. I think this is a DC slide. Michael's Zoom dropped; let's see if we can get him to rejoin. Okay. Sorry about that. Can you hear me? Yeah. Okay — bad timing, right at the beginning of the call.

So anyway, let's talk about how Falco works. One of our key things is that we take data from the Linux kernel, either via a kernel module or an eBPF probe. Essentially this is a stream of all the system calls going through the hosts that are running the containers — the nodes, in the case of Kubernetes. We also take all of the audit data from the Kubernetes audit log API. All of that is sent through the Sysdig processing libraries — these are OSS libraries that we borrow from our companion project, Sysdig; it's all open source. On top of that we apply a rule set, and that rule set is applied to the event stream coming from the orchestrator and from the underlying nodes' kernels. That event stream lets us check for things like: is a container opening outbound connections to the internet, or has my Node.js container suddenly started running processes other than node, and things like that.
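To make that concrete, here is an illustrative rule in Falco's YAML rule format, in the spirit of the Node.js example just mentioned — a sketch, not a rule from Falco's shipped rule set; the `spawned_process` and `container` macros follow the style of Falco's default rules rather than quoting them exactly:

```yaml
# Illustrative Falco rule: alert when a container whose image looks like a
# Node.js image spawns a process other than node. Macro names follow the
# spirit of Falco's default rules; treat them as assumptions to verify.
- rule: Unexpected process in Node.js container
  desc: A process other than node was spawned in a Node.js container
  condition: >
    spawned_process and container
    and container.image startswith "node"
    and proc.name != "node"
  output: >
    Unexpected process in Node.js container
    (user=%user.name command=%proc.cmdline image=%container.image)
  priority: WARNING
```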
When we detect these suspicious events, we send them off to the alerting engine, and the alerting engine forwards them to one of the configured destinations. One of the things we did over the course of the Sandbox is add new destinations. We don't want to get into the business of data processing or false-positive detection and all those sorts of things; what we want to be is a generic sensor that focuses on providing a really good data stream, which then gets processed in some other third-party system. So what we've been focusing on is getting the data out of Falco and having generic interfaces to push that data stream into something else. Next slide.

And this is an example reference architecture — something we actually published ourselves, and we see end users picking it up and using it. We have one built out for Google that uses Pub/Sub and Google Cloud Functions, and we also have a generic one that uses CNCF projects: NATS, and for the serverless functions, Kubeless. Basically what happens is Falco detects something abnormal and pushes it off to the pub/sub service — in this case Amazon SNS — and then you can have Lambdas fire to take different actions. A Lambda can enrich the event, which is what one of the end users we'll talk about does, or it can take action and kill the offending container, kill the offending pod, isolate it with network policy, and so on, so you can begin to automate your incident-response process. Next slide.

Okay. On this slide we wanted to focus on the community, as well as the folks on the Sysdig side, and how we're structuring the project. First, we wanted to give a huge shout-out to five particular community members who have stepped up over the past year and taken complete ownership of small sub-projects and subsystems of the Falco ecosystem. The first one, Thomas, is probably the most active maintainer we see in the community from outside Sysdig. Leonardo Grasso recently published the Prometheus exporter, which is a way of plugging Falco into Prometheus. And we have Luke Perkin, Yolkasha, and Rajeev, all of whom have taken ownership of various projects in the ecosystem. On the right side you can see the core Falco team, a newly curated team sponsored by Sysdig: myself, Michael Ducy, Leonardo and Lorenzo, and Laura Stiociani as well. We also wanted to highlight Mark Stemm for all the work he's done over the past few years bringing Falco to where it is today. Next slide.

So we've seen great growth. One thing I want to say is that working with the CNCF has been really great. They've provided a lot of support, and even the minimal marketing support has helped shine a light on the project and bring in a lot of external people. So Sandbox has helped us tremendously and really increased our momentum, which is one of the reasons we want to have this conversation about going to incubation — to keep that momentum going. You can see some numbers here; one thing I want to point out is that we've definitely increased the number of external committers, other people contributing to the project.
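As an aside, to make the automated-response pattern described earlier in this segment concrete: below is a minimal sketch of a serverless responder that parses a Falco alert and deletes the offending pod so the controller reschedules a clean replacement. It assumes Falco's JSON output format and an in-cluster responder; the handler wiring (how the raw bytes arrive) and the exact `output_fields` keys are assumptions to check against a real alert payload.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// falcoAlert models a subset of the JSON document Falco emits per event.
type falcoAlert struct {
	Rule         string                 `json:"rule"`
	Priority     string                 `json:"priority"`
	OutputFields map[string]interface{} `json:"output_fields"`
}

// handleAlert parses a Falco alert and deletes the offending pod.
func handleAlert(ctx context.Context, raw []byte) error {
	var alert falcoAlert
	if err := json.Unmarshal(raw, &alert); err != nil {
		return fmt.Errorf("parsing alert: %w", err)
	}
	ns, _ := alert.OutputFields["k8s.ns.name"].(string)
	pod, _ := alert.OutputFields["k8s.pod.name"].(string)
	if ns == "" || pod == "" {
		return nil // not a pod-scoped alert; nothing to do
	}
	cfg, err := rest.InClusterConfig() // assumes the responder runs in-cluster
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	// Deleting the pod lets its controller reschedule a clean replacement.
	return client.CoreV1().Pods(ns).Delete(ctx, pod, metav1.DeleteOptions{})
}
```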
One area of improvement, though, is that we need external committers working on larger things. That's one of the reasons we have those sub-projects, as I'd call them — Falcosidekick, the Prometheus exporter, client-go — so we can start pulling more people into the community to work on them without necessarily having to work on the core Falco engine, which is written in C++ and is a bit of a barrier to entry for some people. Another interesting one I'll point out is that we're 70% of the way to passing the CII Best Practices badge. We've started trying to get ahead of some of the things we need to do for incubation, and the CII badge is definitely something we've been working to progress along. Next slide.

We do maintain our own Slack, which is shared with the Sysdig OSS project; we're looking at moving that over to the CNCF Slack. One thing I like about this slide is that there's much more participation on a daily and weekly basis — many more people are active in the channel post-Sandbox than pre-Sandbox. We see that on our GitHub repositories as well, with a lot more activity and a lot more contributions, but we also see it online through Slack. Next slide.

And then downloads. We've seen great momentum in the growth of downloads. These numbers have been fluctuating a bit as I've been learning how to use Amazon Athena to update the download numbers for the RPM and Debian packages, but the growth has been really good in this area. One thing we wanted to try to highlight in this slide is whether people are using Falco more from a container perspective or installing it via Debian or RPM packages. That gives us an idea of the use case: whether people are installing directly on the node, or whether they're using Falco in a container environment at all. That's one of the challenges we have right now — balancing between going full-on Kubernetes and containers, which has been our path, while some end users want to use Falco as a generic host intrusion detection system outside of a Kubernetes environment. Next slide.

Cool, so this one is me. A big part of the work I've been focusing on since joining Sysdig two months ago is moving our decision-making process completely into the open source and being as hygienic as possible about how we're calling the shots in Falco, and pushing all the work we generate through an open source process that we're continually iterating on to make it as friendly and as easy to contribute to as possible. A big part of this has been making decisions in the open: every decision we make, whether technical or process-driven, we're recording and documenting, and we're being very deliberate about how we shepherd it through the ecosystem we're building. Furthermore, we've implemented Prow for all of our repositories — or rather, all the major ones we're actively developing on — and it has been instrumental in how we manage contributions to the project and keep track of not only our issues but our roadmap as well.
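For context on the Prow point: Prow drives review and merge automation from per-repository OWNERS files. A minimal sketch of the format follows — the handles below are placeholders, not Falco's actual maintainer list:

```yaml
# Illustrative OWNERS file of the kind Prow reads to route reviews and
# gate merges; the handles are placeholders, not Falco's real roster.
approvers:
  - maintainer-alice
  - maintainer-bob
reviewers:
  - contributor-carol
labels:
  - area/engine
```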
And last but not least, we've been working with Chris A over at the CNCF on migrating about a thousand users from our current Slack channel over to the official CNCF Slack, so we can contribute even more in the open. So those are some process and clerical items we've been focusing on over the past 45 days. Next slide.

We're just going to go over the Sandbox progress here — some items we've shipped since Sandbox, so over the past year. Do keep me honest here, because you've been much more involved in the process than I have, but I wanted to highlight a few. Probably the most instrumental one is eBPF support. This solves the problem for folks who don't necessarily want to load a custom kernel module into their kernel: we're able to pull our event data from the kernel using eBPF instead of having to load a kernel module. That's been a very exciting part of the work we're doing upstream. We also hooked the Kubernetes audit engine up to Falco, so we're now able to enrich what would otherwise be plain kernel data with Kubernetes metadata, as well as consume the Kubernetes audit stream. You can see we've shipped a number of other features as well, all of which were highlighted in the original roadmap we presented last year. So exciting progress for the team, and again, a shout-out to everyone who's been a part of making this happen. Good job to everyone upstream. Next slide.

Yeah, and when we went into the Sandbox we laid out a roadmap in our proposal. This is a sampling of that roadmap, and in teal are the things we promised in that roadmap and have shipped. What I like about this is that it shows the engine of the project is going: we can define a roadmap, ship features from it, and follow a process that produces releases that matter, with useful features for end users. Next slide — and you can go on to the next one. There we go. Chris?

Great. Here are some integrations we've been focusing on, both pre-Sandbox and post-Sandbox. As you can see, we've grown substantially and been able to work with various projects in the ecosystem such as Prometheus, Elasticsearch, and Splunk — another one we see commonly among our end users. Ducy is going to go into a little more detail on some of the users we wanted to highlight, but this gives you a good overview of where we were versus where we are now, and the progress we've made over the past 12 months, both in the features we've shipped and in the folks who are integrating with and using Falco.

Yeah, and I'll just caveat these integrations — I put this in the proposal as well — not all integrations are created equal. Some of these are cases where we've written documentation or blog posts showing how to get Falco data into one of these tools. Some are cases where we've actually incorporated code directly into Falco, like containerd and CRI-O, where we had to make core changes to the Falco code itself. So each of these is a somewhat different kind of integration, and the amount of work and effort involved varies based on the integration itself. Next slide.
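A side note on the Kubernetes audit integration mentioned above: it works by pointing the kube-apiserver's audit webhook at Falco's embedded web server. Here is a minimal sketch of the webhook kubeconfig, where the host, port, and endpoint path are assumptions to check against your falco.yaml:

```yaml
# Illustrative audit webhook config for the kube-apiserver
# (--audit-webhook-config-file). Host, port, and path are assumptions;
# Falco's embedded webserver settings live in falco.yaml.
apiVersion: v1
kind: Config
clusters:
- name: falco
  cluster:
    server: http://falco.example.com:8765/k8s-audit
contexts:
- name: default-context
  context:
    cluster: falco
    user: ""
current-context: default-context
users: []
```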
One interesting one that we saw: it's one thing for end users to use your tool or your project, but what we've seen is that a couple of companies have actually embedded Falco into products they're offering. One of them is a company called Ultron, which does IT consulting. They've created a secure cloud-native fabric, and they've incorporated Falco into it to act as runtime compliance and runtime security. They focus on telco workloads — next-generation 5G workloads and things like that. They also incorporate a lot of other open source into this platform — can you go to the next slide, Amy? — things like Clair and Anchore, Kubernetes, kube-bench, OpenStack, Istio, and a number of other projects, to create this whole secure cloud-native fabric covering the container runtime, the cloud-native stack, and everything else. I find it interesting to see how people are taking the cloud-native landscape and building products around it. Next slide, Amy.

And then another one is Sumo Logic, which offers a container intelligence platform. As part of that platform they integrate Prometheus, Fluentd, Fluent Bit, and Falco, and they ship applications with pre-canned dashboards so you can pull all those metrics and data out of your Kubernetes cluster and have a holistic view of monitoring and security. Next slide.

And as for end users — end users have been a bit of a challenge for us, in terms of getting people to go on the record, and I think that's mainly because people don't want to expose their security tooling. But the interesting thing about these companies is the common thread: all of them have compliance challenges, and they're using Falco to meet the compliance requirement of having a host intrusion detection system installed in their Kubernetes cluster. A couple of these are healthcare use cases; one is government. Industrial control is Sight Machine, Shopify of course is PCI compliance, and then frame.io, which we'll talk about here, is movie studio compliance — I didn't realize movie studios had their own compliance framework, but apparently they do.

So frame.io is a SaaS-based video review company. They use Falco as an intrusion detection system, and they have a really interesting use case. I won't walk through the slide, since everyone can read it themselves, but what's actually interesting, on the next slide, is their architecture. Amy, can you go to the next slide? Thank you. What they do is take Falco events and publish them through Amazon CloudWatch Logs, which pushes them off to AWS Lambda. That Lambda function then queries their AWS environment and enhances the Falco event with things like the VPC the instance was running in, among other information. The Lambda then forwards the enriched event to several different destinations: into Amazon's event-processing engine; into S3, to store the raw event for long-term storage and further processing; and eventually into Elasticsearch, where they can view the event in Kibana. They gave a presentation at USENIX about this event stream.
In that presentation they don't mention Falco, but it does give you a good idea of the architecture behind that Falco event stream and how it gets processed and enhanced with more metadata. Next slide.

Booz Allen Hamilton is another one of our end users. They basically offer a platform to developers — what they call Pipelines as a Service — where any developer can go and get a new pipeline, and as part of that they incorporate security best practices into the pipeline, so it's a repeatable process for developers that embeds security into their processes from the start of development. And then, as the container is actually running in production, they have Falco rules watching to make sure the container isn't violating any policy they put in place earlier in the development cycle. So they check in the pipeline, and once they actually deploy, they check again using Falco. They're giving a talk at KubeCon North America this year as well, and frame.io is also giving a talk with us. And then Shopify — Shopify, of course, a major e-commerce company, uses Falco as part of their host and network intrusion detection system. Once again, they forward the events off to something like Splunk, and then use Splunk to slice the data and look at what's actually happening in their Kubernetes cluster. All of these are in our adopters.md file, so if you're looking for those references, they're there. And one thing I'll point out about the adopters.md file: if you've ever found it hard to get users to go on the record, probably the easiest thing you can do — and it's such a simple thing — is put that adopters.md file out there and ask people to commit to it, and funnily enough, they will. It's good to see open source communities working. Next. Thank you. And I think this is Chris.

Hey, yeah, sorry, I was texting someone. Okay, so talking a little bit about our future roadmap — this is what we're planning from next quarter all the way up to this time next year. We want to reevaluate how we're handling events coming up from the kernel via the ring buffer. Our resident C++ expert and PhD, Loris, is going to spearhead that effort, working on performance improvements and on how we solve dropped events coming out of the kernel. We also want to improve the Prometheus exporter. It's written in Go, and it has been monumental in driving contributions to Falco and getting folks involved who aren't necessarily the best C++ engineers — again, just pushing Falco metrics to Prometheus. Right now we have mutual-TLS-encrypted gRPC support for Falco outputs; we want to look at broadening that into a full API for Falco, so that other folks, including folks in the Kubernetes ecosystem, can start vendoring Falco and using it in different ways — which segues into our next goal: starting to play with ideas for securing Kubernetes by default with Falco. We're still in the process of coming up with ideas for how we want to propose this to the community, but we've been looking at integrating with kops, kubeadm, Kubicorn, and other infrastructure management tools.
And so far the folks we've talked to have been very supportive of this ambition of figuring out a good and sane way to secure Kubernetes by default. Then there's falcoctl — "falco-C-T-L," "falco-octal," "falco-cuddle," whatever you want to call it — basically the administrative and operational management tooling for Falco. Again, that one's in Go, so we can drive more contributions there. We're looking at building out what we're calling a Cloud Native Security Hub — imagine something like Helm charts, but for Falco rules and policy: how do we start defining which rules and which policies we care about as a security ecosystem, and how do we share and version those rules over time? And last but not least, we've been working with folks on the Aqua side on what we call RPI, the runtime policy interface. I encourage everyone here to go take a look at it — we would love your feedback. It's effectively a CRD meant to solve the problem of interfacing with runtime security policy and configuration in Kubernetes at runtime, not at deployment time, which is substantially different from how OPA has approached the problem. Next slide.

Hold on just one second — can you go back? I just wanted to call out a couple of things. On the performance improvements: as part of that, we participated in Google Summer of Code through the CNCF, and the student who participated actually wrote tooling that lets us measure the performance of the Falco engine itself. That work is going to be very instrumental in helping us drive these performance improvements. And on the Cloud Native Security Hub: we're also starting to imagine it as a generic location for things like pod security policies, Rego files, and other artifacts as well. We've talked a little about it with the broader community, and there are some things we need to clean up on our side before we open it up more, but the code is actually posted on GitHub, it's out there, and we want to develop it further. You can go ahead to slide 33. Thank you.

Yeah, thanks, Ducy. Slide 33: why incubation? Why do we think we deserve incubation, and why do we think we're ready to take this to the next level? Primarily, there's something to be said for keeping up with the momentum and growth of the project. We have a lot of folks interested in Falco, a lot of folks currently evaluating Falco, and one of the bits of feedback we've gotten is that they're reluctant to run it in production until we've graduated to the next stage. So in order to push ourselves and make the software as strong and as battle-tested as we can, we'd like to move to the next stage and keep up the momentum we've been building over the last 12 months. Furthermore, we have real end users with real compliance requirements, and we want to continue to focus on promoting the software and making it as secure and as well-tested as possible; in order to do that, we'd like to move to the incubation stage. We also have a CNCF case study that we've been working on with frame.io that we'd absolutely love to get published, and we've been looking at producing some literature on our end around it as well — and again, in order to do this we need to be in incubation.
Furthermore, we want to start pulling our builds out of Sysdig-managed infrastructure and into the open source ecosystem, so that we can manage our builds and releases as an open source community, and we'd love to leverage the CNCF here — again, moving to incubation would help this effort dramatically. And last but not least, we want folks to be able to collaborate with us on the RPI as we figure out what exactly it means and how we're going to propose it to the Kubernetes upstream ecosystem. I think it will be helpful to be in the incubation stage as we look at implementing runtime solutions for folks running Kubernetes. And finally, there's a link to the proposal that DC put together for us, if folks have any questions or would like to see the official TOC proposal. So yeah, I think that about wraps it up, unless folks have any questions.

I have one brief one. You mentioned that your RPI approach was substantially different from OPA's. Could you very briefly give us an idea of where the key difference is? — You broke up a bit at the end, but I think you're asking, concretely, what is the difference between RPI and OPA? — Correct. Yeah, in summary. — Yes. So basically, if you look at how OPA is implemented right now — and this goes for Gatekeeper as well — it takes action at the point where you mutate an object in the Kubernetes database. That's different from what we're calling runtime, which is continual monitoring and auditing throughout the course of an object's life, not just on create, update, and delete. — Okay. As far as I'm aware, OPA can be used that way as well. I assume there might be a performance difference between the two approaches, but there are people using OPA for runtime enforcement. — Interesting. Okay.

Okay, I think that's a question more about RPI than about the actual incubation proposal — but thank you for that. The presentation covered an awful lot of the points. Chris has answered the question of where we are on the proposal: so is the due diligence actually built into that PR? Yeah, and I think that is the question — what's in the PR, and is it sufficient for us to call a vote? All right. Seeing that there are no further questions: we'll need a TOC member to take the lead on reviewing that. I'll go through it and we can talk about it amongst ourselves. And did this go through the SIG as well? I know it was in Sandbox already, but should Sandbox projects that want to go to incubation first go through the Security SIG? That's a great question — I think we should ask the SIG to take a look and give us their recommendation. Okay, I can take an action item to follow up with the SIG. Chris has just posted a link, which makes me think it's already being done. Yeah, there is an assessment underway. Great. Unless we have any other questions — I was just curious, who did the due diligence, if not the SIG? Is it in that link, Chris A? I think that's why we need a TOC member to review what's been put in there, because it was written by folks from Falco, right? That's correct. So I think that's what Joe has volunteered for. Thank you, Joe.
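Since RPI comes up several times above and is described only as "effectively a CRD," here is a purely hypothetical sketch of what such a resource could look like — the API group, kind, and every field below are invented for illustration and are not taken from the actual RPI proposal:

```yaml
# Hypothetical sketch of a runtime policy CRD in the spirit of the RPI
# discussion above. Group, kind, and fields are invented for illustration
# and do not reflect the actual proposal.
apiVersion: runtime.example.io/v1alpha1
kind: RuntimePolicy
metadata:
  name: deny-unexpected-processes
spec:
  selector:
    matchLabels:
      app: web
  rules:
    - falcoRule: Unexpected process in Node.js container
      action: alert   # continually evaluated at runtime, not at admission
```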
Right, so shall we move on to Vitess? Thank you very much, Falco people. This is Sugu, can you hear me? Yes. Hi, Sugu. Hi. All right. So I was supposed to be joined by two other people, but Yoon, I think, said he may not be able to make it. I'm Sugu, the co-creator of Vitess. I'm joined by Michael Demmer, who is a principal engineer at Slack, and if Yoon, who is from Square, can't make it, I'll speak to his slides.

So, what is Vitess? Vitess actually has many descriptions. I think the broadest one is that it's in the NewSQL category; some people call it sharding middleware, some people call it an orchestration system. It solves a few problems. The big ones: first, it solves the scalability problem — it is massively scalable while still giving you a relational interface. Second, it solves the high availability problem, which means you can generally run Vitess comfortably with five nines of availability. And last but not least, it is cloud native. The term "cloud native" does get used loosely, so I will cover specifically some points about what makes Vitess cloud native.

These are some of the stats about Vitess. I think the most significant one is who the adopters are. But before going into that: something like Vitess, a storage system, is actually really difficult software to win adoption for, mainly because companies that decide to adopt a technology like this are making a really long commitment — five or ten years, or even for the rest of the company's life — compared to other software systems that are more easily interchangeable. If you're using an analytics system, you can easily swap one for another; same with tracing — if you're using Datadog, you can say, oh, I want to use SignalFx. Those kinds of changes are relatively easy. But changing your core storage system is a much bigger commitment, which means companies take longer to decide to adopt software like this. Once they make the decision, though, they also stick with it for much longer. Next slide.

So in that kind of environment, it's exciting to see some really impressive names on the Vitess adopter list. Another point: the way storage adoption goes, everybody wants to know whether somebody else has used it first, so it becomes a chicken-and-egg problem to gain adoption in this area. Vitess now has a pretty impressive list of adopters across a wide range of deployments: there are people who run on bare metal; people on public clouds — AWS, GCP, and Azure; Kubernetes deployments; and there's even somebody working on a Nomad deployment. So Vitess does show that it can run on a large number of platforms. I'm going to cover a couple of use cases here. Next slide.

Let's see if Yoon has joined us... I don't see Yoon, so I'll speak on his behalf — he gave me permission to say anything on his behalf, so I think I'm allowed to amplify. Square has been one of the early adopters of Vitess, and they've been participating in the project for two or three years now. Their Cash App now fully runs on Vitess. They started with one instance, but they've now grown to a large number of shards and a pretty large data set and query volume. While being involved with Vitess, they also have an engineering team that contributes, of which three are actually official Vitess maintainers, which means they can approve and merge pull requests. And they are also growing their usage within Square.
Their existing systems are on bare metal, but all their newer clusters are being deployed on Kubernetes. Next slide. And on the next slide I'm joined by Michael Demmer, who's going to talk about how Slack is involved with Vitess.

Yeah, thanks everyone. Like Sugu said, my name is Mike Demmer. I'm one of the engineers here at Slack, and I was the lead on the project that brought Vitess in as the choice for Slack's database solution. This is a standard slide that we show just to illustrate the growth Slack has experienced over the last several years. It's been a great experience, but of course growth like this puts a lot of stress on the infrastructure, and in particular the problem I was looking at was how to make sure we had a primary database storage platform that would sustain Slack's current growth and plans for future growth. We were very heavily invested in MySQL as our data storage choice for the entire application: we had a lot of code written expecting MySQL-level semantics, and we were running a homegrown, scale-out, sharded MySQL system. We wanted to keep a lot of those primitives in place, along with our operational knowledge of running MySQL at scale, but bring in something to help us both manage the instances and implement more flexible, fine-grained sharding, to handle some of our emergent and evolutionary use cases beyond the original model the application was built on. I started working at Slack in about 2016, and around the middle of 2017 we started rolling Vitess out into production.

If you go to the next slide, this is the adoption curve comparing our legacy MySQL solution with Vitess. The axis is deliberately obscured, but this is roughly aggregated QPS, so really a measure of the query volume going to the two systems. As you can see, the aggregate query load goes up over time — the earlier slide indicates why; we're getting more usage from more users — and Vitess's share has been steadily climbing as we've ported more and more application use cases over to it. We're averaging about 35% right now. It's a little choppy — we go through phases of bulk copying and backfilling jobs that skew some of these metrics — but overall we've continued to adopt more and more, and Vitess is really a tier-one service in our reliability and service posture: we're dependent on it, and we've been incredibly happy with its performance, its reliability, and its overall operability.

The next slide has a couple of other key stats. Like I mentioned, we're about 35% migrated in terms of overall application usage. QPS on Vitess is around 500,000 queries per second, and the total is about 10 billion queries per day. Adding the Vitess middleware had a noticeable but non-material impact on overall performance: because we go through an extra hop between the application servers and the database, there's about an extra millisecond of latency on average. In many cases that's amortized by the finer-grained sharding giving us more predictable performance at the MySQL layer itself. But in any event, those are the key metrics for our deployment.
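A side note on that extra hop: vtgate, Vitess's stateless routing layer, speaks the MySQL wire protocol, so an application can talk to it with a stock MySQL driver. Below is a minimal sketch in Go; the DSN host, credentials, and keyspace name are illustrative placeholders:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // stock MySQL driver; vtgate speaks the wire protocol
)

func main() {
	// Host, credentials, and keyspace ("commerce") are placeholders;
	// the DSN points at vtgate rather than directly at MySQL.
	db, err := sql.Open("mysql", "app:secret@tcp(vtgate.example.com:3306)/commerce")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// vtgate routes the query to the right shard(s) behind the scenes.
	var name string
	if err := db.QueryRow(
		"SELECT name FROM customer WHERE customer_id = ?", 42,
	).Scan(&name); err != nil {
		log.Fatal(err)
	}
	fmt.Println("customer:", name)
}
```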
And then the final slide here — just click one more. We've been pretty heavy adopters of the project, both as users and as contributors. These are some callouts of PR titles that have been primarily written by people from Slack. I'm not going to go through all of them, but from the very beginning we saw this as a project that would serve a lot of our needs out of the box, and where we had an opportunity — frankly, a need, and then an opportunity — to build on the platform to suit Slack's needs and extend Vitess's applicability beyond some of its original use cases at YouTube, to fit more and more use cases. These cover some reliability-related things, various query planner features we needed to add, a query execution simulator engine that we built, and a lot of work on the workflows for managing resharding at scale — things we've been able to build internally and then contribute back to the community. Overall, we've found Vitess to be a great platform both to build on and to deploy out of the box for our use cases at Slack. So with that, I'll turn it back over to Sugu.

Cool, thanks, Demmer. So there's actually a case study that's about to be published by Slack on the CNCF website, coming out soon. And there's also an interesting talk they're going to give at the next KubeCon, where they talk about how they treat their databases as cattle. So that's pretty exciting to hear.

Cool. So there are a couple of Kubernetes workloads I wanted to highlight because of their significance. The Stitch Labs one is actually the most exciting. As you know, Kubernetes was released in 2015, and at that time people were barely figuring out how to run even stateless workloads. But because of Vitess's background — the fact that it ran on Borg, Google's cloud — not only did it survive on Borg, it was actually deployed as if it were a stateless application. That means it knew how to deal with ephemeral movement of instances and loss of the underlying data, and to survive in that kind of environment. So we could confidently tell people that you can run Vitess on Kubernetes as if it's a stateless application, and Stitch Labs was actually the first to try this out — they've been running Vitess since 2016. Later HubSpot came along and said, we're not really interested in Vitess's sharding capabilities, we just like the fact that it orchestrates well — and they have hundreds of keyspaces. In the meantime, JD.com quietly deployed thousands of keyspaces and tens of thousands of tablets in their Kubernetes environment, and then told us they'd done so, which was pretty exciting to hear. And Nozzle is, I would say, the poster child for why you should use Kubernetes: they deployed Vitess on Azure because they had free credits, then at some point got a better deal from Google and migrated from Azure to GKE, completely, in one hour. There's a talk by Derek Perkins called "Gone in 60 Minutes" where he talks about how they did this. Next slide.

So while all this is happening, Kelsey has been tweeting about being very, very careful about moving storage to Kubernetes — these are his tweets from just last week — and he talks about using extreme caution if you want to run stateful workloads or databases in Kubernetes.
But at the same time, he says you can use orchestration systems, and in that case it is safer to do so. Next slide. To highlight why, I'll talk about the Vitess architecture a little. I'll cover this quickly since we may be running out of time. The three main principles of Vitess are simplicity, loose coupling, and survivability. For simplicity, we said we should not have too many layers in the system, so this is essentially a two-layer system: the app server connects to the stateless servers, the vtgates, and the stateless servers route queries down to the different databases. The loose coupling comes from the fact that all these pieces operate independently of each other, which is why Vitess can scale massively — as far as I know, there are no known limits to Vitess's scalability. And the third is survivability: if one of the pods goes down, Vitess quickly promotes a new master and continues to operate without interruption. These are the areas where it is difficult to run a storage system inside Kubernetes: if a pod goes down, the local storage is wiped out and you cannot get access to it, so you need a good re-parenting story, which Vitess gives you. The other thing you need is the ability to inform the application about re-parenting, which is really hard in a Kubernetes system where everything is treated as one unit — a StatefulSet is managed as a whole, a ReplicaSet is managed as a whole — and it's difficult to single out an individual pod in a system like that. So that is the Vitess architecture. Next slide.

And there are some alternatives that people who haven't chosen Vitess have used. One is application-managed sharding; at this point, the general recommendation is not to do it unless you've already done it before. Other people have just been growing their databases by buying more and more expensive hardware. And there are some newer NewSQL systems, like CockroachDB and TiDB, which are also gaining adoption. Next slide. These are the other CNCF projects that Vitess uses — that's one scary-looking Jaeger out there. Next slide. So I put a ribbon on it to make it look less scary. And another name that keeps coming up is Envoy: typically, if Vitess really scales out into thousands of shards, we may need to bring in Envoy to consolidate some connections and spread them out a little. So that's one project we're looking at possibly adding support for. Next slide.

And finally, this is the last slide: the maintainer team is now actually quite diverse. Slack and Square are major contributors, but so are Pinterest, HubSpot, and Nozzle. Nozzle actually contributed the Helm charts, Pinterest has made many query-related contributions, and HubSpot has added orchestration-related contributions. And that's it. Any questions?

My main question would be — and seeing that maintainer team is very encouraging — if PlanetScale were to vanish, do you have confidence that Vitess would still have the maintainers and the expertise to keep the project going? That's a good question. I'll let Demmer maybe talk about it, and then maybe I'll add what I think about this. So, this is a good question. I think there is a bunch of institutional knowledge in a handful of people.
Like many complex projects, Sugu has a bunch of knowledge about areas of this that, regardless of PlanetScale as a company, sits with just a couple of key individuals — knowledge the rest of us have picked up some of over time. With that kind of tenure and involvement, there's a lot of backstory and history around why things are the way they are that isn't always captured. That said, there are areas of the code that I feel I know best, or that Raphael, who's on our team, knows best. So I don't know that this is anything specific to PlanetScale, but it is not an enormous community of developers — though it's not tiny either. It's a hard question to answer because it's a bit hypothetical: we're coupling the existence of a company with the continued involvement of a corpus of key individuals. So that's a sufficiently dodgy answer for being put on the spot, but it's the sentiment I have right now.

Yeah, and to qualify that statement: at this point I have definitely made a lot of effort to disseminate what I know about Vitess to various people. Almost every area of Vitess now has at least two people who can jump in and take care of it. The last one left is query parsing, and Andres Taylor from Square is now starting to ramp up on that area. So we are basically striving for a bus factor greater than one. I don't know if that answers your question, but basically we're focusing on having more than one person know each area of the software; we haven't really thought about distributing that across companies.

I think bus factor is a very good way of putting it. Maybe we should think about a process for defining what we mean by bus factor. But yeah, I think that's an important aspect of making sure the project is mature — making sure the bus factor is at least greater than one. Do we have any other questions out there? Seems like a no, in which case we've managed to get to the end of the presentations with four minutes to spare. Thank you very much, everyone, for doing that. In the meantime, Shang has volunteered to help with the due diligence, so that will be the next step. Okay, I think that's it for this week. Thank you very much, everyone. Thank you. Thanks. Goodbye. Thank you.