for organizing this for us. So just to get started, a little bit of housekeeping on our side before we get into the presentation itself. The way we're going to present today's topic of bringing transactionality to the streaming ecosystem is to talk about a specific use case and relate it to the evolution that has happened through the streaming ecosystem and the different architectures that exist out there today, because it's always best to understand it through an actual implementation and usage of a system. That's why Raja from Alpaca has joined us today. He's going to talk about their order management system and the journey they've taken from their initial implementation to a version built on top of a streaming system that is giving them huge performance benefits. So Raja will take us through their initial implementation of the order management system. I'll then talk about the different streaming architectures we commonly see across a number of different environments, and I'll also talk about Redpanda itself, which is a streaming platform focused on low-latency, transactional workloads. Then I'll hand it back over to Raja to talk about the order management system they built on top of Redpanda, and he'll share some real performance results they've seen with the new architecture. We'll open it up for questions and answers after that. So without further ado, I'll hand it over to Raja to start talking about Alpaca.

Thank you, Rocco, and thank you to the Linux Foundation for hosting this session as well. To start off, I want to explain a little bit about what Alpaca is. Alpaca is an API-first stock and crypto brokerage and trading platform. In late 2018, we started off with a trading API which allowed people to trade programmatically via a RESTful interface. To date, we've raised a little over 70 million dollars in funding. We trade a few billion dollars per month, or a couple hundred million dollars per day currently, and that's been growing at a very steep rate. It's entirely commission-free, and what we're really here to do is empower developers, no matter where you are in the world, with easy and friction-free access to all financial markets. In the near future, we'll be expanding past U.S. equities and crypto into other asset classes as well.

At the heart of a brokerage or trading platform is the order management system. Basically anything that touches a trade or an order passes through the order management system. It's primarily responsible for validating the order, ensuring that the account state aligns with the order, and then finally routing the order to market. As I mentioned on the previous slide, we're currently trading billions of dollars a month through the system and it continues to scale. One of the most important things, especially for a programmatic trading application (and we have a very large algorithmic trading user base), is processing time. The quicker we can process each order, the better it is for our customers, and the data backs that up.
The reason that's important is that a lot of trades and orders are very time-sensitive in nature as the price of the market moves. If someone is algorithmically placing 1,000 orders at a time and processing takes too long, by the time they get to the back of their queue of orders, they might have missed the mark on what they were trying to trade at. So processing time is a huge parameter.

To step back and level set on the previous order management system, I want to explain exactly what we were working with. Truthfully, the first version worked pretty well, but we knew we could do better. V1 of our order management system was essentially a singleton. We happen to be a Go shop, so the way we built it, we would spin up a goroutine per account, and that goroutine would handle the orders on a per-account basis. Now, this worked and truthfully scaled fairly well, but it didn't quite have the throughput or the reliability we were looking for. First, it was a single point of failure: a single application running on a single pod, so if there was any sort of failure, we'd have quite a big delay in the time it took to restart the service. Second, the state lived within a relational database as well as some other services, so for each order, for all the validations we had to do, we were performing several network round trips. And finally, we noticed a lot of contention, especially at times of high volatility such as the market open, when a significant amount of the trading volume happens within the first 30 minutes; we were hitting the limits of what RabbitMQ's throughput could handle as well. Our P95 for this system was around 150 milliseconds, but during extremely heavy volatility, such as a week like we're having this week in the market, we would see response durations as high as 500 milliseconds or more.

Thanks so much, Raja, for that initial overview of your first approach to the order management system. You can think of this as a very simplified diagram of that type of approach, which has been common for quite some time: you have an application that is open, either via API or other services, to a set of users, and you push that data into an MQ system or enterprise service bus. Things like TIBCO and MuleSoft enterprise service buses were pretty common; RabbitMQ is still very common, as is ActiveMQ. This was beneficial because you could easily link it into a number of different services behind the scenes. You could push data to a transactional database to hold the current state of the system; you could also push it to a data warehouse to keep a change log, a change history of everything that occurred for a customer; and you could link in other services as well. This was that first step toward decoupling services into the service-oriented architecture we heard about some time back.
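As an aside, the "goroutine per account" pattern Raja describes for OMS V1 could be sketched in Go roughly as follows; the types and names here are hypothetical illustrations, not Alpaca's actual code:

```go
// Hypothetical sketch of the goroutine-per-account pattern from OMS V1.
package main

import (
	"fmt"
	"sync"
)

// Order is a stand-in for a real order type.
type Order struct {
	AccountID string
	Symbol    string
	Qty       int
}

func main() {
	var (
		mu       sync.Mutex
		accounts = map[string]chan Order{} // one channel per account
		wg       sync.WaitGroup
	)

	// route delivers an order to its account's goroutine, creating it lazily.
	route := func(o Order) {
		mu.Lock()
		ch, ok := accounts[o.AccountID]
		if !ok {
			ch = make(chan Order, 128)
			accounts[o.AccountID] = ch
			wg.Add(1)
			go func() { // one goroutine serializes all orders for one account
				defer wg.Done()
				for ord := range ch {
					// validate against account state, then route to market
					fmt.Printf("account %s: processing %+v\n", ord.AccountID, ord)
				}
			}()
		}
		mu.Unlock()
		ch <- o
	}

	route(Order{AccountID: "acct-1", Symbol: "AAPL", Qty: 10})
	route(Order{AccountID: "acct-2", Symbol: "TSLA", Qty: 5})

	mu.Lock()
	for _, ch := range accounts {
		close(ch)
	}
	mu.Unlock()
	wg.Wait()
}
```

The per-account channel gives ordering per account for free, but as described above, the whole map lives in one process, which is exactly the single point of failure being discussed.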
The challenge with that architecture, though, was that it didn't scale, to the same point Raja was making: once you hit the maximum throughput of a single instance of a system like that, you have to start thinking about sharding, you have to do a lot of that management yourself, and it becomes very cumbersome to work with. So what most people have moved to, and Raja, I think you even looked at this at one point, is using a database as the front end, as the main source of record for your system. Is that right?

Exactly, yeah. We did explore the change data capture pattern, but ultimately it wasn't performant enough.

Yeah, and we see this quite often as well: because of the challenges these MQ systems hit at a certain level of scale, the next approach people lean towards is using a database as a replacement for the queue. It can definitely be done; there are ways to work around it and design patterns that can work. But ultimately these systems are really only meant to keep the current state of the system, and keeping a history of changes can be pretty burdensome for them. It also becomes very challenging to implement a queue in a system where you need indexes and similar structures. I've worked with a number of databases myself, and I've seen somebody put an index on a last-modified or created-at column too often to count. Those are really big performance bottlenecks for these systems, unless you create a hash index on that column, but then you're very likely going to have to talk to a large number of nodes to do any kind of ordering based on that index. So it can be done, but it definitely has drawbacks. And even taking that approach, you still want a log of all the changes that have occurred, and you want to create actionable events from that data. To do that you have to use another piece of software, change data capture software like Debezium (some databases have this built in), to push the data to a streaming platform like Kafka. I put Kafka here because it's the most prevalent streaming platform out there today; there are a few alternatives like Pulsar and NATS, but Kafka makes up the large majority of the market in this regard. So you have change data capture going into something like Kafka, and then you have a number of systems off of that pulling data: to fill in a data lake, maybe some stream processing, maybe other services taking actions from that data as it comes in, executing other services, updating other systems, or calling partner APIs.
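For illustration, here is a minimal Go sketch of the database-as-a-queue pattern being cautioned against, assuming a hypothetical Postgres table order_events with an index on created_at; it shows where the polling cost and the hot-index contention come from:

```go
// Hedged sketch of the "database as a queue" anti-pattern; the connection
// string, table, and columns are hypothetical.
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // assumed choice of Postgres driver
)

func main() {
	db, err := sql.Open("postgres", "postgres://localhost/orders?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Consumers poll for unprocessed rows in created_at order. The btree index
	// that makes this ORDER BY cheap is exactly the hot, ever-growing index
	// described above: every insert lands on the same right-most index pages.
	for {
		rows, err := db.Query(`SELECT id FROM order_events
		                       WHERE processed = false
		                       ORDER BY created_at
		                       LIMIT 100`)
		if err != nil {
			log.Fatal(err)
		}
		var ids []int64
		for rows.Next() {
			var id int64
			if err := rows.Scan(&id); err != nil {
				log.Fatal(err)
			}
			ids = append(ids, id)
		}
		rows.Close()
		for _, id := range ids {
			// process the event, then mark it done (the two steps leave a crash window)
			if _, err := db.Exec(`UPDATE order_events SET processed = true WHERE id = $1`, id); err != nil {
				log.Fatal(err)
			}
		}
		time.Sleep(time.Second) // the polling interval becomes the queue's latency floor
	}
}
```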
So this is the current state of what most solutions look like today, and you might ask yourself: why can't we just take Kafka, replace the MQ system or enterprise service bus from the previous slide with it, and call it a day? To understand why that isn't an approach that's going to work for most people, you really have to understand the assumptions Kafka was built upon, and to understand those assumptions we have to look at Kafka's history. Kafka was developed at LinkedIn, and the first release came out in 2011. It was initially focused on shipping logs from web servers to a Hadoop cluster, where you'd run Hive queries on top of them, and those Hive queries would run for 12 hours or however long they took. As you can imagine, latency wasn't super critical for those use cases; throughput was much more critical. You had a huge amount of logs and you needed to get them to this large centralized system, so Kafka was built with throughput in mind, not so much latency.

And if we look back at 2011 and what the state-of-the-art systems looked like back then, Kafka also had to work around the hardware that was available. One of the ways it did that was by using the page cache. The page cache is provided by the Linux operating system and lets you cache file contents in memory outside the actual process itself. This made a lot of sense for Kafka back then, and it was definitely a good choice, because running a Java process with more than 16 or 32 gigabytes of heap was probably not advisable: you could have large garbage collection pauses in the system. So keeping the heap size of your Java application small (Kafka runs on the JVM) was critical back then, and handing the responsibility for caching data to the Linux kernel made a lot of sense. The challenge with that, though, is that you don't get much control over how data gets flushed to disk. You can explicitly tell it to sync, but Kafka by default just lets the page cache flush on its own schedule, which in most operating systems today means 10% of system memory becoming dirty before background flushing starts. So a lot of data can sit in memory without actually being persisted to disk yet. That made a lot of sense when you were running on fairly slow spinning disks that didn't have many IOPS or much throughput available; delaying writes meant larger block sizes, so you could write a 128-kilobyte block of data to disk instead of 16-kilobyte or 4-kilobyte blocks. Kafka is mostly written in Scala on top of the JVM and has used external systems for consensus: it relies on ZooKeeper, as a lot of projects have, keeping much of its cluster state there and using it for consensus and membership coordination of the cluster.
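To see the writeback behavior being described, the kernel's dirty-page thresholds can be inspected directly from /proc; a small Go sketch, assuming a Linux host:

```go
// Print the kernel writeback thresholds that govern when the page cache
// flushes dirty data to disk (Linux-only paths).
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	for _, f := range []string{
		"/proc/sys/vm/dirty_background_ratio", // % of memory dirty before background flushing starts
		"/proc/sys/vm/dirty_ratio",            // % dirty before writers are blocked and forced to flush
		"/proc/sys/vm/dirty_expire_centisecs", // age at which dirty pages are written out regardless
	} {
		b, err := os.ReadFile(f)
		if err != nil {
			fmt.Println(f, "error:", err)
			continue
		}
		fmt.Printf("%s = %s\n", f, strings.TrimSpace(string(b)))
	}
}
```

Anything below these thresholds can sit in memory unpersisted, which is the durability gap being pointed out above.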
So let's talk about what the hardware looked like back in 2011, what was state of the art back then. It's always interesting to go back and look at the Wayback Machine; I actually looked at Dell's website from that era, a little bit of nostalgia, to see what was actually available, and this was the typical machine recommended for running MapReduce-type workloads and also, typically, Kafka clusters. As you can see, this is a system that can handle at most 12 cores; you were most likely running with eight cores at the time, and most likely with a large number of 2.5-inch or 3.5-inch SATA drives running at 7,200 RPM. To give you an idea of the performance that delivers, down below you can see the write performance for mixed and sequential workloads: sequentially I could write at about 82 megabytes per second, and for non-sequential writes at about one megabyte per second. So caching data in memory and then doing sequential writes of large blocks made a lot of sense for the hardware that existed back then.

But things have changed dramatically. We don't normally go to Dell's website anymore to look at what hardware is available; we go to the different cloud providers and look at the instance types they can provide. AWS recently announced their Graviton2 instances, which are very impressive from an overall performance and price standpoint: I can easily get something with 32 vCPUs, the ability to write four gigabytes per second, and a 50-gigabit network link. The other thing I didn't highlight about the 2011 state of the art is that you were typically running with a one-gigabit network connection, so the network was typically a very big limiter back then, whereas today, if you go for the larger instances, you can absolutely get 100 gigabits per second, which is amazing compared to where networking technology was 10 years ago. The same is true for Google Cloud, where I can have local NVMe storage that can handle millions of IOPS and lets me write nearly four and a half gigabytes per second to those disks. So we've come a huge way in a very short period of time. So what does it mean to take advantage of this modern hardware that we have available today?
What we've seen in building these distributed systems is that workloads are becoming much more bound by CPU than by disk. As we just saw, we can have millions of IOPS, and networks have gotten drastically better, but what hasn't improved dramatically is the speed of a single processor core. We don't have 12-gigahertz processors; what we do have is 32-core, 64-core, upwards of 200-core machines. So we've become much more CPU-bound and much more focused on parallelizing processes rather than on one fast single process. And with this type of architecture come other issues. One of them is crossing NUMA domains: when I have a large number of cores or sockets, I have different memory access paths to the actual memory on the system, and going across that path from one core to another can be quite costly. If your process can be aware of where memory sits relative to where it executes, that's a great advantage, because you avoid those high-latency memory accesses. Also, the traditional way to make use of these machines, and the way a system like Kafka works today, is with a large number of thread pools. To make use of a very large machine, you have to specify that you have 16 threads just doing network I/O, 16 threads doing disk I/O, maybe 16 threads doing other client-operation I/O, and very quickly you have a large number of thread pools and threads all trying to be scheduled on those cores. That leads to more context switching, which adds to overall latency in the system. And then newer file systems have definitely come into their own: back in 2011 most people were running on ext3, while nowadays ext4 is prevalent and XFS has become very prevalent as well. XFS gives us a number of advantages, specifically the ability to do out-of-order direct memory access (DMA) writes and adaptive fallocation, both of which the system we're going to talk about in a little bit takes advantage of, because they let us be much more performant: we don't have to wait for file allocations to occur or keep things completely sequential across different partitions in the system.

So let's talk about systems that can take advantage of this modern hardware. First and foremost, there's a framework called Seastar. It's used by a few different projects out there today, it's written entirely in C++, and it's focused on making use of these larger machines, treating the large boxes we see today as if they were distributed systems in themselves. What that really means is looking at each core as if it were a single machine in a cluster of machines and operating under that assumption. What it brings to the table is the ability to do a lot of async programming via futures and promises; it requires no locks and minimizes I/O blocking.
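Seastar itself is C++, but the core-as-its-own-machine idea can be illustrated with a hedged Go sketch of thread pinning on Linux, using runtime.LockOSThread and sched_setaffinity; this mimics the concept only and is in no way Seastar's implementation:

```go
// Sketch of the thread-per-core idea: one worker pinned to each core so the
// scheduler stops migrating it and its L1/L2 cache lines stay warm (Linux-only).
package main

import (
	"fmt"
	"runtime"
	"sync"

	"golang.org/x/sys/unix"
)

func main() {
	n := runtime.NumCPU()
	var wg sync.WaitGroup
	for cpu := 0; cpu < n; cpu++ {
		wg.Add(1)
		go func(cpu int) {
			defer wg.Done()
			runtime.LockOSThread() // bind this goroutine to one OS thread
			var set unix.CPUSet
			set.Zero()
			set.Set(cpu)
			// pin the thread to a single core (pid 0 means the calling thread)
			if err := unix.SchedSetaffinity(0, &set); err != nil {
				fmt.Println("pin failed:", err)
				return
			}
			fmt.Printf("worker pinned to core %d\n", cpu)
			// a per-core event loop would run here, touching only its own state
		}(cpu)
	}
	wg.Wait()
}
```

In a true shared-nothing design, each pinned worker also owns its own slice of memory and communicates with peers by message passing rather than locks.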
One question we sometimes get is whether Seastar makes use of io_uring. It doesn't today. There is a thought to port it over to io_uring, but that would probably only give us about a 5% performance benefit: since this type of system is already completely asynchronous, there isn't going to be a huge gain, unlike for a system that is much more synchronous in nature to begin with. We also have a thread-per-core architecture. When Seastar first starts up, it looks at how many cores are available, launches that many threads, and pins those threads to the given cores. This eliminates the majority of the context switching happening in the system, and it also preserves cache lines, because each core typically has its own L1 and L2 caches assigned to it; staying on a given core means we don't have to refill a cache line when a thread would otherwise get switched to a different core. It also makes us NUMA-aware by nature, because we can be very specific about the memory we work with for a given core.

So with all that, let's talk about a streaming platform that's actually built to make use of this modern hardware and is built on Seastar, and that is Redpanda. Redpanda is written completely in C++, makes use of the Seastar library at its core, and is focused on high throughput, low latency, and strong transactional guarantees. What that really means is that we persist to disk after every batch of messages that comes into the system. We're not delaying writes; data gets persisted the moment it comes into the system. It's also provided as a single binary, which includes the broker, an HTTP proxy, and a schema registry all together. These are typically separate components in something like Kafka, where you'd have to run Kafka, ZooKeeper, an HTTP proxy, and a schema registry to get the same level of functionality you get from this single binary. It's source available; everything is on GitHub, and you can look at the issues and features being developed at the moment. And it's fully Kafka compatible, so you can make use of all the different projects and tools available in the ecosystem today.

And Raja, I think that was a main driver for you looking at Redpanda: the Kafka API and the single binary, right?

Exactly. I've had previous experience with Kafka, and Kafka in production is a beast all its own. What we found with Redpanda was, first, the zero-friction aspect of it. We had a very small and diligent engineering team that was able to get started and get something into production relatively quickly, and since it's been in production, it's really been pain free. So that was a huge adoption factor for us.

That's always great to hear, versus the opposite. And this was one of the main reasons Redpanda wanted to focus on the Kafka API: although there are probably simpler APIs out there today, NATS comes to mind, the Kafka API has essentially been picked as the standard. It's the VHS-versus-Betamax debate, pretty much. Having that API compatibility allows all the client drivers that exist out there today to be used, and allows Redpanda to be plugged into anything that supports the Kafka API seamlessly.
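Because of that compatibility, an ordinary Kafka client can produce to Redpanda unchanged. Here is a minimal sketch using the franz-go Go client; the broker address and topic name are assumptions:

```go
// Produce a record to a broker that speaks the Kafka protocol (Kafka or Redpanda).
package main

import (
	"context"
	"log"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	client, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),  // a Redpanda broker, same port Kafka uses
		kgo.RequiredAcks(kgo.AllISRAcks()), // acks=all: wait for the replica quorum
	)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	rec := &kgo.Record{
		Topic: "orders",
		Key:   []byte("acct-1"), // keying by account preserves per-account ordering
		Value: []byte(`{"symbol":"AAPL","qty":10}`),
	}
	if err := client.ProduceSync(context.Background(), rec).FirstErr(); err != nil {
		log.Fatal("produce failed: ", err)
	}
	log.Println("order written")
}
```

Nothing in this code is Redpanda-specific; pointing SeedBrokers at a Kafka cluster would work identically, which is the whole point of the API compatibility.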
We've really focused on the performance and safety aspects, which we'll talk about in a second. And there are no external dependencies: Raft is used for all consensus internally to the system, for the cluster state and for all the data within the cluster as well. So let's talk a little bit about Raft, because we've been talking about modern hardware, but we should also talk about modern approaches to consensus. Consider ZooKeeper and etcd: etcd is Raft-based in many ways, but ZooKeeper came before Raft was prevalent, and Raft has since taken over as the consensus protocol of choice, as a simplified version of Paxos. Raft does have a few requirements, such as an odd number of replicas for each partition inside Redpanda. Inside both Kafka and Redpanda you have topics, a topic is made up of partitions, and you have guaranteed ordering within each partition. A partition can correspond to any key space you define, or a default key space can be provided for you. In Redpanda's implementation, each partition is its own Raft group, which gives us a lot of flexibility to move leadership and replicas around the system. We don't rely on anything external, because everything is done inside; this makes it a single fault domain. You don't have to worry about multiple distributed-system protocols, like your ZooKeeper connections alongside your Kafka broker connections. It also lets you ride out slowness in individual replicas: if a leader can get acknowledgments from just a majority of the replicas, it can make forward progress, so one slow replica won't stop you from replying to a given client.

So what does all of that lead to? This is some benchmarking we did earlier this year, and we'll post a link in the chat to a blog post that explains our methodology. We used the OpenMessaging Benchmark, which is actually a Linux Foundation project, to benchmark Kafka against Redpanda. This particular example is doing 500 megabytes per second on a three-node cluster inside AWS using i3en.6xlarge instances, which provide guaranteed networking. We went with the 6xlarge instances specifically because of the guaranteed networking: if anyone is familiar with AWS, when it says "up to 25 gigabits per second," it means you actually get that for about eight minutes and then you get throttled down to around eight gigabits per second. There are a lot of nuances that go into this, and we highlight them in the blog post, since we discovered them for ourselves as well. What this workload shows is end-to-end latency in the system, and we configured Kafka to persist to disk after every batch of messages. You can see that the average latency for Kafka is about 14 milliseconds, and for Redpanda it's a little over two milliseconds. But the more interesting part is the tail latencies, and this is the part that becomes really important for transactional systems like the order management system: when we go out to the 99.9th or 99.999th percentile, Redpanda stays very consistent at about 120 to 150 milliseconds, whereas Kafka goes out to about three seconds at these maximum latencies.
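Since each partition is its own Raft group, a topic's replication factor is effectively the size of each Raft group. A hedged sketch of creating such a topic through the standard Kafka admin API, using franz-go's kadm package; the topic name and counts are illustrative:

```go
// Create a topic with an odd replication factor, so each partition's Raft
// group has a clear majority (3 replicas -> majority of 2).
package main

import (
	"context"
	"log"

	"github.com/twmb/franz-go/pkg/kadm"
	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	cl, err := kgo.NewClient(kgo.SeedBrokers("localhost:9092"))
	if err != nil {
		log.Fatal(err)
	}
	defer cl.Close()

	adm := kadm.NewClient(cl)
	// 6 partitions = 6 independent Raft groups; 3 replicas each
	resp, err := adm.CreateTopics(context.Background(), 6, 3, nil, "oms-wal")
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range resp {
		if t.Err != nil {
			log.Printf("topic %s: %v", t.Topic, t.Err)
		}
	}
}
```

With one Raft group per partition, a write acknowledged by the leader plus one follower satisfies the majority, which is exactly how a slow third replica gets ridden out.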
Because of those tail latencies, a lot of people over-provision instances for Kafka to reduce their effects, and that creates large, underutilized clusters in general. So, bringing it full circle: technology itself is pretty cyclical in nature, and this is an example of that trend. Now we can go back and put something like Redpanda, which provides this low-latency transactionality within the streaming ecosystem, behind use cases like the order management system Alpaca has built. Our application can now talk directly to Redpanda, which can then feed the different databases, data warehouses or data lakes, and microservices that exist out there today. So with that, I'll hand it back over to Raja.

Thanks again, Rocco. Building on what Rocco described in the previous slides, we evaluated a lot of different ways to approach this problem, and we set a very high bar for ourselves. When we started this project of reevaluating the architecture of the order management system, our goal was to bring that P95 from 150 milliseconds down to five milliseconds. The other big goal was throughput: we wanted to be able to handle around $10 billion in trading activity. Those were synthetic goals we set based on the growth of the business, but I'll walk you through how we approached the problem, how Redpanda played a part in it, and then some of the numbers. By the way, if you're interested in more detail about the design and implementation of our OMS, we've also published a blog post that walks through the major aspects of it as well as some of the throughput numbers.

The OMS itself is entirely in memory, and it's horizontally scalable, so you can have multiple shards of the OMS running. Each OMS is responsible for a subset of our total accounts, and it handles all the state in memory itself, including things like positions and the details around equity or buying power that are required. For this to be durable, we need to persist this data somewhere, persist it very quickly, and also be able to consume it very quickly. That's where Redpanda comes in, as the write-ahead log. When an OMS starts up, it rebuilds its in-memory state from the Redpanda topic and then it's able to start working; in the event of a failure, another pod comes up, reads from the Redpanda write-ahead log, and becomes active again. Another really happy side effect of decoupling the WAL and building this kind of system is that we have a lot of other services that would love to observe the state of trade events or order events, and having the WAL live within a streaming engine like Redpanda means other consumers can simply subscribe to it as well.

So here's a high-level architecture of our order management system. An order comes in from a customer via one of our API services. At Alpaca, we have a few different APIs: a live trading API, a paper trading API, a broker API, and finally a market data API. If it's an order-related event or request, it connects to our order management system over gRPC, and at the top of our order management system we have scalable balancers, which are horizontally scalable as well.
Think of these balancers as an oracle: they abstract the order management system and its implementation details away from the consumers, and they route each request to the right OMS. The OMS itself, like I said before, keeps everything in memory but persists its transaction log to a streaming engine, and we're using Redpanda here. We also get the happy benefit that we can consume the WAL that lives within the streaming engine and persist it back into the relational database in the same way it was propagated before, meaning the downstream applications don't necessarily need to be aware that a system change took place. On the other side, once an order gets processed, we communicate with the market. Most of our infrastructure is in GCP, but we also have infrastructure in Secaucus, New Jersey, at Equinix's data center. We have an interconnect from GCP to our trade execution rack in Secaucus, and from Secaucus we have fiber cross-connects directly to a variety of execution gateways depending on the asset type. So that's generally how our OMS works.

And these are the results we saw, actual results from our production environment. The version of the OMS that was replaced, like I said before, had a P95 of around 150 milliseconds but was not deterministic by any measure: during high volatility, we saw very erratic and concerning performance behavior. With OMS V2, which is in memory with a WAL powered by Redpanda, we see extremely deterministic performance even at extremely high load, and a P95 of around 1.5 milliseconds, so well under our super-aggressive target of five milliseconds. And again, this is a synthetic benchmark: when we were evaluating the system before going into production, we just wanted to see how fast we could push it. There's no way our previous system could even come close, and truthfully we didn't push it further than this because the results were already outstanding. The test scenario was 100,000 accounts concurrently sending 10 orders each. They all happened to be market orders, so they didn't have a price determined by a limit, and the symbols were randomized. These charts are taken from a Grafana dashboard, sampling every 30 seconds. The Redpanda cluster was just three nodes, with a relatively recent Linux kernel, almost no tuning on the kernel, and almost no tuning in the Redpanda configuration. It was really just provision it, set it up, and away we went. And here are the results we received: a P95 well under two and a half milliseconds, and we were able to process just under 1.7 million orders a minute. On latency, we didn't experience bad tail latencies; it was great to see deterministic latency. And as I mentioned before, it's very easy to scale out: as we require more throughput and hit actual kernel constraints, we can just scale out in a horizontal fashion.

Just to touch on the "no kernel tuning" point: that's something Redpanda does automatically. This was one of the great things Raja's team was able to take advantage of: when you install Redpanda, the first thing it does is set a number of different kernel parameters, making that very automatic for you.
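Here is a rough sketch of the WAL-replay startup described above, assuming a hypothetical oms-wal topic and a trivially simple event format; a real OMS would decode structured events and keep polling until fully caught up before serving:

```go
// On startup, rebuild in-memory account state by replaying the Redpanda
// write-ahead-log topic from the beginning.
package main

import (
	"context"
	"log"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.ConsumeTopics("oms-wal"),
		kgo.ConsumeResetOffset(kgo.NewOffset().AtStart()), // replay from offset 0
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cl.Close()

	positions := map[string]int64{} // in-memory state: account -> event count

	fetches := cl.PollFetches(context.Background()) // real code loops until caught up
	if errs := fetches.Errors(); len(errs) > 0 {
		log.Fatal(errs)
	}
	fetches.EachRecord(func(r *kgo.Record) {
		// apply each WAL entry to in-memory state; keyed by account
		positions[string(r.Key)]++
	})
	log.Printf("state rebuilt for %d accounts; ready to serve", len(positions))
}
```

The same topic that feeds this replay is what other services can subscribe to, which is the "happy side effect" of putting the WAL in a streaming engine.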
So when we launched this, we started rolling it out gradually, but the feedback, and I'm not going to go through all of it, just came in unsolicited. One of the patterns we saw that we didn't anticipate was from some of our algorithmic community: we saw their trade volume pretty much double or triple overnight. So it's been a game changer, not just for us in terms of our ability to scale Alpaca, but also a phenomenal performance improvement that's had real financial and material impact on our user base.

Thank you so much, Raja. A few things before we start answering some of the questions that have been asked: you can go check out the source code for Redpanda; the link here is to GitHub. We have a number of blog posts, including a great one on the chaos testing we do on top of the system. We're also currently working with the Jepsen team on a full Jepsen correctness test of Redpanda, which we'll be releasing pretty soon. We have a community Slack channel if you'd like to engage with us or ask further questions. And I know Raja and Alpaca have a few things to share as well.

Yeah, I would encourage anyone that's interested in our trading API or brokerage API to take a look; if you want to incorporate financial market access into an existing product, we have an API for that as well. Most importantly, especially for this audience, we're always hiring. So if what we spoke about is of interest, or if you want to be working with Redpanda, please email careers@alpaca.markets or visit our job board; that, along with our Slack community, is posted on the slide as well. And finally, if you need to reach out, please do so via Twitter: Alpaca is available at AlpacaHQ, and I'm personally available at Raja.

Perfect. So, opening it up to a few of the questions we've received. The first one: can one consider Redpanda a real replacement for Kafka in terms of guarantees, like the exactly-once semantics required for Kafka Streams commits to source offsets and target topics? And are there features where you don't have parity with current Kafka versions? For the first part of the question, you absolutely can see it as a full Kafka replacement. We support the full transactions API, and Alpaca is actually making use of idempotent producers, which allows for much higher throughput on the system. We do support the full transaction side of things, so it works with Kafka Streams out of the box. There are some things where we don't yet have full parity with Kafka, and those are things we're working to address, like follower reads for replicas and a few other items that are much more specific. But the large majority of the Kafka API is implemented. We also don't fully support the newer flexible versions of the Kafka protocol yet, mostly because we haven't seen a huge amount of usage for them so far.

The second question: does Redpanda also implement the Kafka admin API? Yes, it does. You can set ACLs and do all the topic administration, and you can use tools like Kowl or KMinion, or a number of other such systems, directly on top of Redpanda, and they work exactly as you would expect them to with Kafka.
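As a concrete illustration of the transactions API mentioned in that answer, here is a hedged franz-go sketch; the transactional ID and topic name are hypothetical:

```go
// Produce records atomically inside a Kafka-protocol transaction.
package main

import (
	"context"
	"log"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),
		kgo.TransactionalID("oms-writer-1"), // hypothetical transactional id
		kgo.DefaultProduceTopic("fills"),    // hypothetical topic
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cl.Close()

	ctx := context.Background()
	if err := cl.BeginTransaction(); err != nil {
		log.Fatal(err)
	}
	// produce one or more records inside the transaction
	if err := cl.ProduceSync(ctx, kgo.StringRecord(`{"order":"acct-1","status":"filled"}`)).FirstErr(); err != nil {
		log.Fatal(err)
	}
	// commit atomically; kgo.TryAbort would roll the records back instead
	if err := cl.EndTransaction(ctx, kgo.TryCommit); err != nil {
		log.Fatal(err)
	}
	log.Println("transaction committed")
}
```

Consumers reading with the read-committed isolation level only ever see the records if the commit succeeds, which is what makes exactly-once pipelines like Kafka Streams possible on top of this API.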
One question for Alpaca: how long did it take you to develop the order management system V2? What did the timeline look like? Yeah, so we started looking at this in the second quarter of this year, and we started skunkworking it with one of our most talented engineers at Alpaca. Basically, we gave them the freedom to explore the space and introduced them to Redpanda, and we were able to get this into production in four or five months, a lot of which was just experimentation. As we're now adopting Redpanda in other aspects of our application, we're finding that a lot of implementations can be done in a week or two. Given the complexity of the order management system and the in-memory aspect, it took a little more time than we anticipated, but it's worked out really, really well for us and our user base.

Well, I don't see any further questions, so if there are any last-minute questions people want to get in, we'll give it a couple of seconds. Otherwise, I think that's all we had from our side. If you do have any further questions, we have both Slack channels listed here; please do join us. We're always happy to help, and there's a multitude of ways to make use of Redpanda and also Alpaca. And with that, thank you so much, Raja, for taking the time today, and thank you to the Linux Foundation for hosting us.

Yes, absolutely. Thank you both for being here and for speaking on these topics, and thank you to all our participants who joined us today. As a reminder, this recording will be on our YouTube page later today, and we hope to see you back at future webinars. Have a wonderful day, everyone.