Thanks, Candice, and thanks to the Linux Foundation for having us. Today we're going to look under the hood of a shard-per-core database architecture; in particular, we're going to talk about Seastar. Before we get started, I want to highlight an upcoming event very much on the same topic as this session: P99 CONF, a very technical conference all about performance. Check it out if you're interested. And one last thing before we get started: a quick poll asking you about your status with NoSQL.

Okay, so let's get started. I'm Tzach Livyatan, VP of Product at ScyllaDB. Before that I spent many years in the telecom domain, and after that a few years in Oracle's communications unit. I've been at ScyllaDB almost from the start of the company, which is seven or eight years by now.

What we're going to cover today: a little bit about what ScyllaDB is, the company and the database. Then I'll give you a very quick history lesson, maybe five minutes, explaining how hardware and databases have changed over the last decades, and how ScyllaDB was architected to match those changes. In particular, we'll deep dive into the shard-per-core architecture, as the title promises. Then we'll cover something that breaks the shard-per-core model a little bit, the storage, and the I/O scheduler we built to handle it. If we have time, I'll present a few benchmarks, and then hopefully we'll have time for Q&A. As Candice mentioned, please use the Q&A tab in your Zoom UI to ask questions. Maybe I'll answer a few questions in the middle, but probably I'll answer most of them at the end.

So what is ScyllaDB? By the way, who's here on the call? If you're familiar with ScyllaDB, please raise your hand. Okay, I can't really see you, but just a quick exercise. ScyllaDB is a NoSQL database designed for very high performance. It's mostly used for OLTP, online requests where both throughput and latency are critical. It can handle analytics requests as well, but mostly in parallel to OLTP; there are other databases that specialize in just analytics, and that's not the case here. The founders of the company, and of the database if you will, are Avi Kivity and Dor Laor. Both founders are still in the company, as CTO and CEO respectively.

As I mentioned, ScyllaDB is very much about performance. When I talk about performance, there are at least two characteristics to think about. One is throughput: how many requests per second the database, or any application, can handle. The other is latency: the average time to respond to a request, or, in most cases, the P99 response time. Many applications actually launch many, many requests at the database, and even if just one of them is slow, it can slow down the entire user request, so looking at the P99 is more realistic in most cases.

Third, ScyllaDB is an open source project. You can go to GitHub and download the code; it's in C++, by the way. You can compile it yourself if you have a lot of free time on your hands, or you can download the packages, use Docker, use an AMI on AWS or an image on GCP or Azure, etc. If you're really brave, you can try to contribute to the code as well. There is an enterprise version, which is a closed-source variant of the open source. 90% of it is the same, but it does have some extra features around performance and security.
A quick example is encryption at rest, which is only available in the enterprise version. Last but not least is the cloud offering; like everyone else, we also have our own managed variant of the database. It's basically Scylla Enterprise, but fully managed by the Scylla team: you consume it through the APIs and don't have to run it yourself. The two APIs that we expose right now are one compatible with Apache Cassandra and one compatible with AWS DynamoDB. If you are using either of those, you can use the same drivers to work with ScyllaDB as well; it will just work out of the box.

Here's a quick list of some of our customers. As you can see, there are many different use cases across many different domains: media, IoT, fintech and others. I would say what is common to all of these use cases is the combination of high availability, big data and performance. If you need only one of those three, or even two, you might have other alternatives; but if you're looking for the combination of high availability, performance and big data, Scylla might be a good choice for you.

One thing this session will not focus on is the high-availability architecture, which of course is very important in distributed databases; it's just not the center of this session, which is more about performance. I do want to spend two or three minutes covering it, and then we can zoom into one of the nodes of this distributed database and explain how it gets to high performance.

If you're familiar with distributed databases, in particular Apache Cassandra, Scylla inherited a lot of its characteristics from it. In particular, it supports multiple data centers in different regions, and you can work with each data center or region at a different consistency level. For example, I can work with the US region with LOCAL_QUORUM and with a remote region with a consistency level of ONE or ANY. This is what is called tunable consistency. All the data is shared between the nodes according to the replication factor. I can say that this table, or this keyspace in Scylla terms, is replicated in one data center; another keyspace can be replicated across two, three or four data centers. We even have users that run five data centers. The diagram here is an example of three data centers. You can imagine that an application in Europe, for example, will use a consistency level of LOCAL_QUORUM: every read and write request only waits for responses from the local nodes in that data center. Asynchronously, that data center is replicated to the other data centers. In case of a catastrophe, where one data center is completely down, you can still serve reads and writes from the other data centers. Of course, the latency will not be as good, because the application has to reach a more distant region.

This was actually proven to work under fire, if you will. One of our customers unfortunately had a big fire in one of its data centers; it was a hosted data center. Luckily, no one was hurt, but out of the 30-node cluster, 10 nodes were unreachable, as you can see here in the monitoring, and the cluster just continued to work smoothly and serve both read and write requests using LOCAL_QUORUM. I don't want to name the customer, but if you get the slides, you can click through and see how the customer reported this unfortunate event. So that's the introduction to Scylla and a really quick notion of high availability in Scylla.
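As a rough illustration of the quorum arithmetic behind those consistency levels, here is a minimal standalone C++ sketch; this is not Scylla code, just the majority rule that QUORUM and LOCAL_QUORUM are built on:

```cpp
#include <iostream>

// A quorum is a majority of the replicas: floor(RF / 2) + 1.
// With RF=3, LOCAL_QUORUM waits for 2 replicas in the local data center;
// a consistency level of ONE waits for a single replica anywhere.
int quorum(int replication_factor) {
    return replication_factor / 2 + 1;
}

int main() {
    for (int rf : {1, 2, 3, 5}) {
        std::cout << "RF=" << rf << " -> quorum=" << quorum(rf) << "\n";
    }
}
```

So with replication factor three, a data center can lose one replica of a partition and still serve LOCAL_QUORUM reads and writes, which is exactly what happened in the fire story above.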
And now I want to take you back a few decades and give you a quick history of both databases and hardware. I promise it will be quick, and then I'll jump to the Scylla architecture. So how have servers, and I'm talking here mostly about production servers in the data center, changed in the last 10 or 20 years? I'm sure everyone in this session is familiar with Moore's law, stating that the number of transistors will double every 18 months or so. This is still true, although people say it's over or soon will be. But if you look at how the numbers actually changed over the last few decades, you can see that the transistor count per core has flattened in the last 10 years, and the frequency of a specific core has flattened over the last 10 or even 20 years. That's of course because of physical limitations; there is only so hard you can push a processor before it burns up. What has continued to double, and more, is the number of cores per machine. If you are old enough, like me, you remember that 15 years ago two or four cores was considered a huge machine. These days your phone probably has more than four cores, and on the cloud you can easily get 96 cores or more in a single machine. I wouldn't say it's cheap; you still pay quite a lot per hour for a huge machine. But the point is that machines have become stronger and stronger mostly by gaining more and more cores. That's a complete change in hardware, and in VMs as well, and Scylla was designed, as we'll see in a few minutes, to match these changes.

Another change that comes with many cores per machine is what is called NUMA, Non-Uniform Memory Access. In a NUMA architecture, unlike what you might think, each CPU has its own chunk of memory that sits physically close to it, while other parts of memory are physically more distant, and that distance translates directly into latency: if a CPU reads remote memory, it gets worse latency. Again, this is another factor Scylla takes into account. When we talk about the Scylla architecture, keep this in mind, because you'll see how Scylla actually takes advantage of this layout instead of ignoring it, as an application designed 15 or 20 years ago would, since back then it wasn't a consideration.

In addition to the number of cores, if we look at the last 10 or 15 years, RAM has become huge: from a few gigabytes, you can now have a terabyte of memory. Storage has completely changed as well. Scylla in most cases works with NVMe SSDs, which have become commodities these days, and they keep changing: every new generation of instances, for example the i4i on AWS, comes with more efficient cores and with even more efficient storage. I'll touch on that later in this session.

In parallel to these hardware changes, NoSQL, and databases in general, changed as well. Databases, of course, have existed for decades, and up to the 2000s the relational database was king and everything revolved around it. Once internet scale, later called big data, arrived, data simply became too big for one machine, and databases had to become completely distributed. The first generation of such databases includes, for example, Apache Cassandra, which inherits from the AWS Dynamo paper.
And Scylla is a direct child, if you will, or a cousin, of Apache Cassandra. But while Apache Cassandra was designed around distribution, it was not designed around the hardware changes we saw earlier. If you want to run Apache Cassandra on a big machine, you're more likely to run a few instances on the same machine, because it does not scale per core; when it was created, this simply wasn't a problem yet. Scylla had the benefit of coming as a second generation, assuming from the start that you have a very high number of cores per machine, and designing for that.

So that concludes the history of the world, mankind and databases, and now we can move forward and step into the Scylla design. What were the goals of the Scylla design, which was laid out around eight years ago but still stands today? Efficiency, utilization and control. Efficiency you can translate into the throughput and latency I mentioned earlier: try to optimize both. Utilization: try to maximize the usage of every CPU, every core, on the machine. And control: set your own priorities for CPU usage, storage usage and network usage. I'll talk more about that later.

So what are the design decisions Scylla is based upon? There are six of them, and in this session I'll focus on two. The first one, which sometimes generates a lot of heat but is actually not as critical for the overall design: Scylla was written in C++. Back then, alternatives like Rust simply didn't exist or weren't mature, and C++ was, and still is, the main platform for building such databases and other high-performance applications, so Scylla followed that path.

A second thing, which I'll spend more time on later, is running completely asynchronously. All the operations in Scylla are asynchronous. If you've ever built a network-based application, you know that all work with the network must be asynchronous; in Scylla we extend that, so work with the disk, for example, is completely asynchronous too, and so is every communication between cores on the same machine. That's actually a must in our design, which is based on shard per core. Shard per core I'm going to skip for now, because it has a full section of its own.

Unifying the cache is another design decision that proved successful in retrospect. Scylla manages its own cache; it does not rely on the operating system page cache. As I mentioned earlier, control is one of the main goals we want to achieve, and if you let the operating system manage the page cache, there is a limit to the control you can apply. You don't want the OS to take over and decide which page gets evicted and which doesn't; you want to control that as a database.

The I/O scheduler is another topic I'll spend more time on later, so I'll skip it for now. And the last one is automation. Scylla, like other applications, has a lot of parameters that affect scheduling. Some are preset; some are determined when you install Scylla, where we actually run a quick benchmark to measure, for example, the storage properties and feed them back to the application; and some are adjusted online, in real time. For example, if the user chooses to change their workload at runtime, Scylla tries to adapt.
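To make the "everything asynchronous" point concrete, here is a minimal Seastar-style sketch of chaining continuations instead of blocking. It assumes a working Seastar installation (build flags via pkg-config) and is not Scylla source code:

```cpp
#include <seastar/core/app-template.hh>
#include <seastar/core/sleep.hh>
#include <iostream>

// Every operation returns a future; continuations are attached with .then()
// instead of blocking the single thread per core. While the timer is pending,
// the reactor keeps running other tasks on this shard.
int main(int argc, char** argv) {
    seastar::app_template app;
    return app.run(argc, argv, [] {
        using namespace std::chrono_literals;
        return seastar::sleep(100ms).then([] {
            std::cout << "timer fired without ever blocking the reactor\n";
        });
    });
}
```

The same pattern applies to disk and network I/O: the call returns immediately with a future, and the completion runs later on the same shard.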
Each of the schedulers, and I'll talk more about schedulers later, also has a feedback loop that adjusts scheduler shares in real time, in some cases giving more shares to offline processes like compaction and repair, in other cases giving more shares to online requests. So those are the high-level design decisions Scylla is based upon. What I'm going to do next is deep dive into one or two of these decisions, and the first is shard per core.

If you look at most applications out there, and in particular at an application like Apache Cassandra, which we profiled back in the day when we started working on Scylla, most of them are built on running many, many threads; thousands of threads, actually. These threads compete for resources. They compete for CPU: every time one of them blocks, there is a context switch, which is a pretty expensive operation. They compete for access to memory: the memory is shared between all the threads, and if a thread needs to read from or update a memory chunk, it needs to take a lock first. Of course there are different mechanisms for doing that, but the result is that threads compete with each other over locks, memory and CPU, and actually spend a lot of time doing that instead of their actual work of reading and writing data and serving the database. This model still works in many applications, but it breaks when you have more and more cores, because when you have more and more cores in your machine, you want to take advantage of them. I invite you to run a profiler on your application or your database and find out how much time you spend on this overhead. You simply don't get utilization and efficiency.

The main principle Scylla is based on is actually a limitation, if you will: we limit ourselves to one thread per core. This thread is never context-switched out, and you can think of it as a very small autonomous database: it has its own memory allocation, its own files on disk, its own network connections, and it will never be context-switched away. This also means, if you think about it, that this thread can never block. Earlier we said that in a legacy application, a thread going to storage does a synchronous read or write, blocks, and is context-switched for another thread; once the storage comes back with a response, there is a context switch again. This cannot happen in Scylla, because we only have one thread per core, and that thread is pinned to an actual physical core; we can never block it. And that means we have to keep everything, but really everything, completely asynchronous.

So this is a main principle in Scylla. It's not new in the sense that other distributed databases already partitioned the data and the computation; but until Scylla, this partitioning was node-based. If you have a database or application running on 10 nodes, you split the data between the nodes based on some key; in Scylla it's called the partition key, with a function we call the partitioner. In other databases the split can be random or based on some other data. Each record in the database lands on a different node; in Scylla it actually lands on three nodes, because it's replicated, but the principle is the same: you partition the data between the nodes, and, as the sketch below shows, the routing is a deterministic function of the key.
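As a toy illustration of that determinism, the sketch below routes a partition key to a shard with a plain hash. This is hypothetical code: real Scylla hashes the key to a 64-bit token with murmur3 and divides the token ring between nodes and then shards, rather than taking a simple modulo:

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>

// Illustrative only: deterministic key -> shard routing. Scylla's actual
// partitioner maps the key to a token and splits token ranges; the principle
// (the same key always lands on the same shard) is what matters here.
unsigned shard_for_key(const std::string& partition_key, unsigned shard_count) {
    uint64_t token = std::hash<std::string>{}(partition_key);
    return static_cast<unsigned>(token % shard_count);
}

int main() {
    const unsigned shards = 8;  // e.g. an 8-core node
    for (std::string key : {"user:17", "user:42", "device:9"}) {
        std::cout << key << " -> shard " << shard_for_key(key, shards) << "\n";
    }
}
```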
And you have a deterministic algorithm to know which information element lives on which node, so you can read and write that information element. In Scylla, we extended this notion and also partition the data between the cores of each machine. In this example it's a very small machine with only four cores, but as I mentioned earlier, today you can do it with 96 cores. We actually split the data inside a specific node between the four, or 96, or however many cores you have, and each core's thread, as I mentioned earlier, serves only its own data. In an ideal world, cores in the same machine would never have to communicate with each other. Of course that's not quite the case: there are materialized views, secondary indexes and other shared information elements that shards do need to exchange, and that is done through message queues.

To summarize this part: in a traditional stack, and the slide says Apache Cassandra here, but this is actually common to most applications written even today, not just Cassandra and not just databases, you have many, many threads competing for resources. The Seastar architecture is different, and I'll explain Seastar in a minute: each thread is pinned to a specific core; each thread is limited to its own data and its own memory; and when threads need to communicate with each other, they do it in a completely asynchronous way through queues. If they need to go to the network, they do it asynchronously; if they need to go to the storage, they do it asynchronously.

So this is the heart of Scylla, if you will: an open source library in C++ called Seastar. Since its introduction, this library has been used in many other applications, mostly data-heavy ones. I would say Seastar is a very good fit when you can somehow partition your load into fairly separate chunks, and a database is a good example, because in most databases you have many information elements that don't interact much with each other. If your application is one huge compute engine where all the information elements have to touch each other constantly, this partitioning will not work as well, and maybe Seastar is not a good fit for you; but for many other applications, it is. Since its introduction, applications like Redpanda, a more efficient Kafka replacement; RageDB, a graph database; Ceph, the object store from Red Hat; and many others have been built on Seastar.

What you see here is actually a very early test with Seastar, testing an HTTP server and Memcached. One of the nice properties of Seastar is that it scales very nicely with the number of cores: because you have a dedicated thread per core, the more cores you add, the more threads are added to the system, automatically, each pinned to a specific physical core, and you get essentially linear growth in performance per core. This was an early design decision in Scylla, but with every generation of new servers bringing more cores per machine, it has proved more and more successful, because Scylla really does grow linearly with the number of cores. Cross-shard communication, by the way, looks roughly like the sketch below.
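Here is a minimal sketch of that cross-shard message passing, assuming a recent Seastar (smp::submit_to and this_shard_id are public Seastar API); this is illustrative, not Scylla source:

```cpp
#include <seastar/core/app-template.hh>
#include <seastar/core/smp.hh>
#include <iostream>

// Shards never share mutable memory directly. When shard A needs something
// owned by shard B, it posts a lambda to B's queue and gets a future back.
int main(int argc, char** argv) {
    seastar::app_template app;
    return app.run(argc, argv, [] {
        unsigned target = 1 % seastar::smp::count;  // shard 1 if it exists, else 0
        return seastar::smp::submit_to(target, [] {
            // This lambda runs on the target shard's own thread.
            return seastar::this_shard_id();
        }).then([](unsigned shard) {
            std::cout << "reply from shard " << shard << "\n";
        });
    });
}
```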
And now I'm going to break a little of everything I said before. So far, each core is completely independent: it handles its own memory chunk and its own storage chunk and doesn't touch the others. But that's not the complete story. Why not? Because some of the resources, like the storage, are shared between all the cores in the machine. It's true that every core, every thread in Scylla's case, has its own files, or file descriptors, and manages its own part of the file system; but at the end of the day, the machine has one SSD, or multiple SSDs in a RAID, shared between all the cores. So the SSD is a shared resource across cores, and this brings the scheduling challenge to Scylla.

Why do we need scheduling at all? If you think about a database from a high-level perspective, a database is a very storage-intensive application. Most of what a database does is store information, fetch information, and send it back to the user, especially a real-time OLTP database like Scylla. If you run heavy analytics, your database might also do a lot of computation, but at least in Scylla's case, most of the work is just reading from and writing to the database. But that's not all. There are also administrative operations like compaction, an offline process in Scylla that reads all the data and compacts it; the underlying data structure, called an LSM tree, which I won't cover here, needs constant compaction. There are other operations like repair, which compares two nodes and syncs the data between them. And all of these operations need to be scheduled. For example, if you have a heavy load from the user, you might want to hold back your compactions and let them wait; but if you wait too long, the compaction work accumulates and then you have to spend much more time on it. Sometimes we want to give the user some control over the load: if the user knows they have a less latency-sensitive application, they can declare what Scylla calls a workload that doesn't care as much about latency, and Scylla's scheduler can put it at the back of the queue. So there are a lot of online decisions being made about scheduling. At the end of the process, Scylla manages all of these queues and sends the requests to the storage to execute.

And now the question becomes: how do we use the storage? A modern storage device can of course handle more than one request at a time; it can actually handle hundreds of requests at the same time. But there is a limit, and finding that limit was critical for us as a database vendor. So we built a series of tools to measure storage performance, and this was the first generation: Diskplorer, generation one. What you see in this graph is throughput as a function of the number of concurrent requests the storage handles, typically an NVMe SSD. In this case, up to about 100 concurrent requests to the disk, you actually get better throughput with every additional parallel request; but at some point the curve completely flattens out. Beyond that, adding concurrency doesn't help: if you try to send 200 concurrent requests to the SSD, you don't get better throughput. What actually happens is that you start to fill up the queue inside the SSD itself. The SSD accumulates requests, latency gets worse, and you gain nothing. On the contrary, you lose something. What you lose is control.
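The core of the fix is easy to sketch: cap the number of in-flight requests so the backlog queues inside the database, where it can still be reordered by priority, rather than inside the SSD. Below is a conceptual, standalone C++20 sketch using a counting semaphore; real Scylla never blocks its per-core threads, so this illustrates only the accounting, not the implementation:

```cpp
#include <cstdio>
#include <semaphore>
#include <thread>
#include <vector>

// Allow at most kMaxInFlight requests outstanding at the device. Excess
// requests wait in our queue (reorderable) instead of the SSD's (opaque).
constexpr int kMaxInFlight = 4;  // in practice, the sweet spot Diskplorer measures
std::counting_semaphore<kMaxInFlight> slots(kMaxInFlight);

void do_io(int id) {
    slots.acquire();                          // wait if the device is "full"
    std::printf("request %d dispatched\n", id);
    // ... the actual read/write would complete asynchronously here ...
    slots.release();                          // completion frees a slot
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 16; ++i) workers.emplace_back(do_io, i);
    for (auto& t : workers) t.join();
}
```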
Because as long as the database handles the queues internally, it can decide that a specific request has a higher or lower priority than another, and actually play with that dynamically. But once you push all the storage requests out to the device, they accumulate in its queue and you lose control over priority. So you lose twice: first, you're not actually adding throughput with more concurrent requests; and second, you can no longer prioritize requests. And of course, that's not what we want.

So the first generation of Scylla did exactly this kind of benchmark at install time, took the measured sweet spot of its storage, and fed it to the online scheduler. This worked well enough for many years, but it actually broke on the latest and greatest machines, and we had to change it. Why? Because SSDs became more complex, and the naive assumption we made, which was right at some point, that there is one sweet spot of concurrent requests, broke, because it's not that simple. The actual picture is more complex. If you look at the spec of the latest SSDs, and let me zoom into this part, you can see that an SSD is rated for so many reads per second and so many writes per second, at specific bandwidths. What the spec sometimes doesn't say explicitly is that you can only achieve one of those at a time. If you try to run a mixed workload that includes both reads and writes, at different sizes and bandwidths, then, not surprisingly, they affect each other: reads and writes on the same storage at the same time slow each other down, and it's sometimes hard to predict by how much. But we, as a database vendor, or a database, if you will, must predict it, because we want to control the number of requests we send to the storage.

This is where we introduced what we call Diskplorer number three; I'm skipping one generation of Diskplorer. It's another open source tool you can find in our GitHub project. This tool is more sophisticated: unlike the first generation, which only found one sweet spot of concurrent requests, this generation generates a lot of data points, each a different combination of requests per second and request size, measured independently for reads and writes. So we build a more realistic, more complex picture of how the storage actually behaves, and different kinds of drives produce different heat maps. What we aim to do, with a not-very-complex formula, is find not one sweet spot but a region of the graph we should stay inside and not go beyond. You can think of the area under this red line as a safe zone where the storage keeps serving. If you try to work above it, the storage again starts to queue requests in its internal queues, and sometimes there is more than one, so you should not push more and more concurrent requests, because they will just sit in a queue and, as I mentioned, you lose the ability to prioritize. So this is the new Diskplorer. As I mentioned earlier, this is a quick benchmark that runs when you install Scylla.
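One plausible shape for that red line, shown here with made-up numbers rather than real Diskplorer output: normalize the read and write components of a mixed workload by the device's pure-read and pure-write maxima, and require the sum to stay at or below one. The real scheduler's model is calibrated per drive; this sketch only captures the idea that reads and writes share one budget:

```cpp
#include <iostream>

struct DiskLimits {
    double max_read_bw;   // bytes/s achievable with a pure read workload
    double max_write_bw;  // bytes/s achievable with a pure write workload
};

// Safe-zone check: each component consumes a fraction of its own maximum,
// and the fractions must not add up to more than the whole device.
bool inside_safe_zone(const DiskLimits& d, double read_bw, double write_bw) {
    return read_bw / d.max_read_bw + write_bw / d.max_write_bw <= 1.0;
}

int main() {
    DiskLimits nvme{2.0e9, 1.0e9};  // hypothetical: 2 GB/s read, 1 GB/s write
    std::cout << std::boolalpha;
    std::cout << inside_safe_zone(nvme, 1.0e9, 0.4e9) << "\n";  // 0.5 + 0.4 -> true
    std::cout << inside_safe_zone(nvme, 1.5e9, 0.5e9) << "\n";  // 0.75 + 0.5 -> false
}
```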
Or, if you are working on the cloud, which most people do these days, we actually ship pre-measured values for popular instance families like i3 and i4i, and you don't have to run it yourself; these instances give consistent enough storage properties to pre-configure.

And that brings me directly to the latest and greatest i4i instances. This is what the same performance map looks like for i4i; you can already see it's much better than i3. AWS was kind enough to give us early access to these instances, which are now publicly available, and to summarize our results with them: we got more than twice the throughput compared to i3, with lower latency. It's true the price is a little higher, but from a performance-per-cost perspective, we highly recommend i4i for Scylla, and probably for other databases and applications as well, at least if they are storage-intensive.

This is another test we did comparing i4i with i3, and the right bar you see here is a three-node cluster with replication factor three. As you can see, each node in this cluster can serve well over a million requests per second. That's a huge number, and this is just a three-node cluster; imagine how many requests per second you could serve with a 15 or 20-node cluster, or more. And all of this while keeping very low P99 latency.

One last thing I want to mention is a benchmark we did at petabyte scale. Benny, one of our leading engineers, ran it, actually not on the latest hardware; he used i3en machines, which are also very good, not as good as i4i, but with a higher storage-per-CPU ratio than the i4i. We used quite a big cluster, 20 of these big machines, and were able to serve a petabyte of data. As you can see here, while keeping a lot of data in the database, we can still maintain very low latency, and I'm talking about P99 latency for both reads and writes, with very high throughput. I'm not going to deep dive into the numbers here, but if you're interested, there are links.

So, to summarize what we've seen here: Scylla was built with performance in mind from day one, and the main principle that allows us to reach this performance is the shard-per-core architecture, which goes hand in hand with a completely asynchronous design. There are a lot of other factors behind this performance which I didn't mention; actually, most of what we've done in the last eight years, and what we're doing today, is improving performance, and it has many different aspects, like more sophisticated compaction strategies, strategies for reading from and writing to the disk, and many other things I didn't touch on today, because I wanted to focus on the main concepts of shard per core and asynchrony. But if you're interested in building high-performance applications, I highly recommend looking at Scylla, and even at the Scylla code.

Back to the two principles I mentioned at the start: performance can be split into throughput and latency, and they are different. At least in my mind, throughput is something that can very easily be converted into cost reduction. Why? Because if you double or quintuple the throughput per node, you need fewer nodes, and that automatically translates into lower expenses on the cloud, where most applications run these days.
Latency is trickier, because latency you cannot really solve by throwing more money or more machines at it. If you want to achieve low latency, it's much harder, and this is where Scylla shines, especially on P99 latency. And as I mentioned at the start, we have a specific conference coming up on exactly that.

With that, let me switch to a poll, and after that we can start the Q&A. Okay, so let me switch to the Q&A; by the way, you can still write questions in the window.

First question: does Scylla have its own proprietary drivers? The answer is yes and no. Scylla, as I mentioned at the start, is compatible with both AWS DynamoDB and Apache Cassandra, so any application you have that runs against Apache Cassandra can work as-is with Scylla, because Scylla is compatible at the protocol level, the binary level of the protocol; every driver that works with Cassandra will work with Scylla. But there is an exception to that: we implemented performance-specific features in forks of these drivers, and recently implemented a new driver from scratch in Rust. So I do recommend working with a Scylla driver where one exists: for example, for Rust, Go, Python, C++ and Java there are specific Scylla drivers, while in some languages and environments, like JavaScript, there is no dedicated Scylla driver, and the Cassandra driver works just as well. The same goes for DynamoDB; there we didn't develop our own variant of the driver, so every client like Boto3 in Python, and others that work with DynamoDB, will work with Scylla.

Next question: if Scylla were built today, would the design decisions be different? Good question. Probably not, and a lot of it is about the knowledge and experience developers have. I would say we are very excited about Rust these days, which is why, for example, we are building new Scylla drivers in Rust; but I'm not sure the entire database could be built in Rust. Scylla needs a lot of control over memory allocation and the way the database is built, which at least seven years ago was only possible in C++. Maybe it's different these days.

Next question: Scylla relies on async direct I/O; does that imply we must only use file systems that support it? We do recommend running the XFS file system, which works nicely with asynchronous requests. Some people actually do run spinning hard drives, not even old ones, and Scylla can work reasonably fast on them, but in most cases our recommendation is to go with SSDs. There are all kinds of combinations people run, building RAID 1 and RAID 0 across legacy spinning disks and actually getting pretty good results, so I wouldn't rule it out, but most users are running NVMe SSDs. The direct I/O part itself, by the way, looks roughly like the sketch below.
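For the curious, opening a file for direct I/O on Linux is a one-flag change; here is a minimal sketch (hypothetical path, compiled with g++ on Linux). The catch, and the reason Scylla manages its own cache, is that O_DIRECT bypasses the kernel page cache and requires block-aligned buffers:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

// O_DIRECT bypasses the kernel page cache entirely, which is what lets the
// database own its cache and do truly asynchronous disk I/O. Reads and writes
// on this fd must use buffers aligned to the device block size (e.g. 512/4096),
// and the file system must support it (XFS does; tmpfs does not).
int main() {
    int fd = open("/var/tmp/scratch.bin", O_RDWR | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) { std::perror("open"); return 1; }
    std::puts("opened with O_DIRECT: page cache bypassed");
    close(fd);
}
```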
Okay, a very interesting question: Cassandra is getting support for ACID transactions; does Scylla also support them? First of all, Cassandra does not support ACID transactions in production just yet; it's a recent proposal and PR, which we are very excited about, but it's not production-ready. I do want to mention that both Cassandra and Scylla have supported lightweight transactions (LWT) for quite some time. At least in Scylla it's production-ready, but it hasn't been very popular, because it doesn't deliver good enough performance compared to the eventually consistent alternative. But in the last two years we invested a lot of effort in implementing the Raft consensus algorithm in Scylla, and over the next few releases we are starting to graduate it to production. The first feature to graduate on top of Raft is strongly consistent metadata, and by metadata I mean, for example, things like the schema and the topology of the cluster. If you've worked long enough with either Scylla or Cassandra, you've probably hit schema disagreement between nodes, and the fact that you can only add one node at a time; moving metadata to Raft solves these problems and many others, and this is something we're going to roll out in the next few months. Right after that, we'll start implementing strong consistency for the actual data, ACID consistency if you will, on top of Raft. We are very focused right now on implementing and, mostly, testing it.

And the main focus is always Scylla's performance. We don't add features that don't have great performance, and with everything we build, we are very careful not to introduce new latency. For example, in Scylla we have something I'm not aware of in other databases: a built-in sensor, if you will, that measures latency, and if one of the tasks on a specific thread takes more than a few microseconds, it writes an error to the Scylla log, because we treat high latency as an error. Even if a task takes more than 200 microseconds, we treat it as an error, open an issue for it, and try to fix it. This is why adding a new feature sometimes takes a little longer: we invest a lot of effort not just in testing correctness, which is super important, of course, especially for transactions, but also in making sure the new feature doesn't hurt performance. Because that's our pride and joy, I would say: a database with super high throughput and very low latency, and it can be very easy to break those two properties if you start randomly adding features. But definitely, ACID is on the roadmap. And if you check past presentations, for example by Kostja, our lead developer on Raft, he did a webinar a few months back where he deep dives into the Raft algorithm. That's completely out of scope for this session, which was focused specifically on performance; but of course, if you look at the complete distributed database, the consensus algorithms are very important, just not today in this session.

Okay, so thanks, everyone. I don't see more questions; of course, feel free to jump on our Slack or drop me an email afterwards, and I'll do my best to answer more questions if you have them. And with that, let me turn it back over to Candice from the Linux Foundation, and I want to thank the Linux Foundation once again for hosting us.

Thank you so much, Tzach, for your time today. And thank you, everyone, for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars. Have a wonderful day.