Okay, I can hear you and I can see the screen. Okay, we can start now. Let's begin. Hi everyone, welcome to the TiKV community meeting for May. I'm Jay Lee, a maintainer of the TiKV project and one of the SIG leaders, and I work at PingCAP. It's my pleasure to be the moderator for this TiKV community meeting. Before the meeting begins, I would like to remind all of you to mute your microphones when others are talking. Also, please be aware that this meeting is recorded and the video will be uploaded to YouTube after the meeting. First, I would like to invite all of you to give a brief introduction about yourself. Some of our team members are in the Beijing office, so let's begin with them.

I'm the team leader of community and business development, and also the community manager of TiKV. If you have any questions about the TiKV community, feel free to ask. Okay, thank you, Kevin. Welcome.

Hello, everyone. My name is Sean Boca; you can call me Boca. I'm a software engineer on the PingCAP TiKV team, and I'm mainly working on Titan, a key-value separation engine built on RocksDB. Today I'm going to give you some updates about the Engine SIG. Glad to meet you all. Thank you, Boca.

Hello, everyone. I'm Jolley. I'm a committer of the Transaction SIG. Thank you.

Tang Liu? Hi, hello, everyone. My name is Liu Tang. I work on TiKV. I haven't been writing much code for TiKV recently, but I still review PRs and issues nearly every day. Thank you very much. Thanks. And Tang Liu is also one of the maintainers of the TiKV project. Shai?

Hi, I'm Shai. I'm a committer of the Transaction SIG and the Coprocessor SIG, and now I'm working on TiKV's developer ecosystem. If you have any questions, you can ask me. Sorry, I can't hear you.
I can't see the screen and I can't hear you. Hello. Can you hear us? Yes. Okay, it works. Sorry about the problem. Let's continue the meeting. I think I have finished my own introduction, so we can start from Situ Ma.

Hey, people. My name is Situ. I'm from LA, working at a small company, and I recently got interested in TiDB. So just take me as an observer. Thank you. Welcome, and thanks, Situ. Next is Xiao Wei.

Hi, can you hear me? Yes. I'm Xiao Wei. I'm from PingCAP, and I'm also a maintainer of TiKV. I'm mainly focused on the TiKV community, and I'm also a SIG leader.

Okay, next is Wenxuan. Hello. I'm Wenxuan, or Wish, from PingCAP. I'm the tech lead of the Coprocessor SIG, and now I'm mainly working on the coprocessor module and also the TiDB Dashboard project. That's all. Thank you.

And Xiao Guang. Hello, everyone. I'm Xiao Guang, a TiKV maintainer. I work for Zhihu. Right now we are building a table store on top of TiKV and working in open source. Thank you.

Everyone, can you hear me? Okay. There is also a participant from the TiKV community. Can you introduce yourself? I think the next one is Zhou Zhenjing. Zhou Zhenjing, are you here? Can you hear me? Yes. I am Zhou Zhenjing. I'm from PingCAP, working on the TiKV team, and I'm a member of the Transaction SIG. I should be the tech leader of the Transaction SIG, but recently I've been busy with other work, so Shanfeng is helping me with the SIG's tech leader duties.

Okay, who's next? I think it's Zhengchi, and he will also give us a demo later in this meeting. There's a lot of background noise; it's too noisy. The next part is Queenie.
Queenie is not here. Okay. But we'd like to congratulate Queenie on being featured in the CNCF Ambassador spotlight. Is there a web page about her? I remember there is a page with her photo and introduction; maybe we can show it in the meeting. Her introduction is on the CNCF website, right? Yes. So we want to thank Queenie for her contributions to the TiKV community and to the CNCF. Shirley, can you hear us? Can we move on to the news part? Yeah, I can hear you. Let me share the screen. Yes, here we go.

Okay. First, Queenie has been featured in the CNCF spotlight. She has been writing about TiKV for English speakers since 2016. Congratulations to her, and thanks for her contributions to TiKV.

Second, TiKV's speaking sessions at KubeCon EU have been scheduled for August 19 and August 20. There are two sessions. The first one, "TiKV: a cloud-native key-value database", is presented by Ed and Nick. The second session, about serving a table store on TiKV, is presented by Yigu and Xiaoguang. If you are interested in our sessions at KubeCon, mark your calendars and join us.

Third, the TiKV logo was updated, with "Ti" placed before "KV". As you can see, the updated version is on the right side. The CNCF has confirmed this update, and we are updating the logo on GitHub. That's all for the news.

Now let's move on to the SIG updates. In this part, each SIG leader will present their SIG's updates to you. If you have any questions, please click the raise-hand button to let me know, and I will call on you to speak.

So here is the first SIG. Okay, this month there are two main projects in the SIG. One is tracing and the other is chunk-based computing. For tracing, there will be a demo later in this meeting, so I will not introduce it too much here.
For chunk-based computing, it was selected as a Community Bridge project, and the student is Zhang Shi from Shanghai Jiao Tong University. I and our committer, Tianli Zhuang, will be the mentors. That's all. Thank you. Thanks, Zhixi. And next is the Engine SIG.

Okay, thanks, Jay. For the Engine SIG, the most important work recently is the Google Summer of Code. Two projects were accepted by the program: one is the cloud-native KV store, and the other is versioned RawKV. I'm not sure if everyone knows the background, so let me give a brief introduction. The cloud-native KV store project is to put TiKV on the cloud to leverage the durability of cloud storage, such as EBS or S3. And versioned RawKV, usually called versioned KV, sits between RawKV and transactional KV: in short, it's like RawKV with MVCC support, but without transactions. Both projects have already been applied for by students, as you can see in the proposals, and the design documents are linked from them. You can check them later for more details.

Apart from these, there is some other work related to the Engine SIG. First, a contributor from Vivo has sent a PR to make Titan support the compaction filter. Making Titan support the user-defined merge operator, and RawKV manual compaction triggered by deletes, are still under discussion. That's all for the monthly update on the Engine SIG. There are still many issues that haven't been picked up by anyone; you can check them in the Engine SIG project of the TiKV GitHub repo. If anyone is interested in them, please feel free to leave comments and join the Engine SIG Slack channel to work with us. Thank you.

Thank you very much. And next is the performance update. This is a report on performance improvements for TiKV. We found it's valuable to batch a large number of requests together to reduce the time spent on thread context switches. There is one PR that we committed, not last week, maybe a month ago.
We tried to create one RocksDB snapshot and share it across multiple read requests, even if they are from different regions. We update the timestamp first and then check, at each Raft peer, whether this timestamp is still within the leader's lease. If so, we use the RocksDB snapshot created before to serve the request. This PR reduces the number of RocksDB snapshot creations. We hope to do similar things for other requests, such as coprocessor requests. So how does it perform? It has improved the throughput of these read requests. Cool.

And next is the Raft SIG; I will introduce the update. This month we developed a new feature named commit group. In the naive Raft commit algorithm, a log is committed once it is replicated to a single majority. We add some constraints on how logs are committed: we divide the peers in a Raft group into several subgroups, and only logs that are replicated to at least two groups are considered committed. As the description says, it's a way to delay committing and make sure of data safety. We use this feature to implement synchronous replication across two data centers. Without this feature, when you want to ensure data integrity, you would have to use a larger number of replicas. With it, logs are committed only when they are replicated to at least two labels, that is, two data centers.

The other feature is priority election, implemented by a community contributor. It adds a priority to each peer. During election, the old rule doesn't change: only peers whose logs are sufficiently up to date can become leader. But the priority is taken into account when logs are equally up to date, and a high-priority peer will not vote for a low-priority peer. This feature is useful for critical deployments where not all machines or networks are the same.
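The commit-group rule just described can be sketched as a toy model in Rust. This is only an illustration of the idea, not the actual raft-rs implementation, and all the names here are made up:

```rust
use std::collections::HashSet;

/// A Raft peer labeled with the commit group (e.g. data center) it belongs to.
struct Peer {
    id: u64,
    group: &'static str,
    /// Highest log index this peer has acknowledged.
    matched: u64,
}

/// Toy commit rule: a log index is committed only when it is replicated to a
/// majority of peers AND those peers span at least `min_groups` groups.
fn is_committed(peers: &[Peer], index: u64, min_groups: usize) -> bool {
    let acked: Vec<&Peer> = peers.iter().filter(|p| p.matched >= index).collect();
    let groups: HashSet<&str> = acked.iter().map(|p| p.group).collect();
    acked.len() * 2 > peers.len() && groups.len() >= min_groups
}

fn main() {
    let peers = vec![
        Peer { id: 1, group: "dc1", matched: 7 },
        Peer { id: 2, group: "dc1", matched: 7 },
        Peer { id: 3, group: "dc1", matched: 7 },
        Peer { id: 4, group: "dc2", matched: 5 },
        Peer { id: 5, group: "dc2", matched: 5 },
    ];
    // Index 7 reaches a majority (3 of 5) but only one data center: not committed.
    println!("index 7 committed: {}", is_committed(&peers, 7, 2));
    // Index 5 is replicated to both data centers: committed.
    println!("index 5 committed: {}", is_committed(&peers, 5, 2));
}
```

With a plain majority rule, index 7 would already be committed by the three dc1 peers; requiring two groups delays the commit until dc2 catches up, which is what makes two-data-center synchronous replication safe.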
We can set machines with lower configurations to a lower priority, so they are unlikely to become a leader. And that's all for the Raft SIG.

Next is the Transaction SIG. I think it's Charlie? Yes; because Shaifan is busy, I will introduce the progress of the Transaction SIG. This month we focused on stability, and we did some work on the stability of large transactions, which can avoid problems under heavy write workloads. As you can see, the first PR is: before writing to TiKV, TiDB will pre-split regions when there is a lot of data, because the performance of a single Raft group is not very good. The next PR is: TiKV will collapse some duplicated requests, so that it doesn't process the same request repeatedly. Also, this month nine new members joined us, and we will pay more attention to the community. For example, we have posted our weekly report in the Transaction SIG, and we will write a series of articles about the implementation of transactions in TiKV, so that anyone who is interested can join us easily. That's all.

Thanks, Charlie. Very impressive progress in the community. Does anyone have questions about the SIG updates? Okay. Now let's welcome Zhengchi to give a demo of minitrace. But I think Zhengchi's microphone is quite noisy. Can you fix that? If not, I think Wish can give the presentation instead. We couldn't fix this during your previous attempt; we couldn't really hear anything. How about now? Yes, it's a lot better, but your voice is a little low. Can you make it louder? Okay. Is it good now? It's very good now. Okay, I'm going to share my screen. Now you can share your screen. Okay, now we can see it.

Hello, everyone. I'm Zhong Zhengchi, an intern at PingCAP for almost four months. During this internship, I studied tracing systems and implemented a tracing library called minitrace, which is the subject of this demo. Okay, let's get started.
Those who are familiar with TiDB may know that currently we have metrics, logs, and queries to diagnose a TiDB cluster. These tools are all helpful for finding problems. However, they either only observe the overall situation of the TiDB cluster, such as metrics, or make it difficult to associate multiple components together according to a single SQL statement, such as logs. Minitrace is meant to solve these problems: it can trace the time cost across multiple components for a single request. Also, the performance of minitrace is quite good; there is almost no performance loss after introducing it. At the same time, using minitrace does not require any modification to existing function signatures.

You can use the Jaeger UI to visualize the results of minitrace, so we can see the timeline of every TiKV request. All events have a name. From top to bottom, we can see the relationships between events; from left to right, we can see the sequence of events. Clicking on a span shows specific information, such as start time and duration.

So tracing is really useful, but why minitrace? What are its advantages? In the Rust community we can easily find tracing libraries, such as tokio's tracing and rustracing. In particular, tracing, maintained by the Tokio project, has a good ecosystem. So why reinvent the wheel? The reason is straightforward: performance. As demonstrated by the microbenchmark here, minitrace only takes 20 nanoseconds to generate a span, which is 17.5 times faster than rustracing and 100 times faster than tokio's tracing. Cool, right?

Someone may ask: do we really need such a high-performance tracing library? Let's look at the integration benchmark. Here, 100 events are traced during a TiKV point get; that is, 100 spans are collected. It can be seen that without tracing we got 183K QPS. With minitrace, we got 173K. But with rustracing, it dropped directly to 90K. So we do need a high-performance tracing library.
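A span-based tracer of the kind described here, with spans that record a parent, a start time, and a duration into a per-thread buffer, could be sketched roughly as follows. This is a minimal toy with invented names, not the real minitrace API:

```rust
use std::cell::RefCell;
use std::time::Instant;

/// A recorded span: a numeric event id (cheaper than a string name),
/// its parent's id, and start/duration in nanoseconds.
#[derive(Debug, Clone)]
struct SpanRecord {
    event: u32,
    parent: u32,
    start_ns: u128,
    duration_ns: u128,
}

thread_local! {
    // Per-thread buffer: spans are pushed without any cross-thread locking.
    static BUFFER: RefCell<Vec<SpanRecord>> = RefCell::new(Vec::new());
    static EPOCH: Instant = Instant::now();
}

/// RAII guard: records a span into the thread-local buffer when dropped.
struct Span {
    event: u32,
    parent: u32,
    start_ns: u128,
}

impl Span {
    fn enter(event: u32, parent: u32) -> Span {
        let start_ns = EPOCH.with(|e| e.elapsed().as_nanos());
        Span { event, parent, start_ns }
    }
}

impl Drop for Span {
    fn drop(&mut self) {
        let end = EPOCH.with(|e| e.elapsed().as_nanos());
        BUFFER.with(|b| {
            b.borrow_mut().push(SpanRecord {
                event: self.event,
                parent: self.parent,
                start_ns: self.start_ns,
                duration_ns: end - self.start_ns,
            })
        });
    }
}

/// Drain this thread's spans in one batch, e.g. to ship them to a collector.
fn collect() -> Vec<SpanRecord> {
    BUFFER.with(|b| b.borrow_mut().drain(..).collect())
}

const GET: u32 = 1;
const SNAPSHOT: u32 = 2;

fn main() {
    {
        let _root = Span::enter(GET, 0);
        let _child = Span::enter(SNAPSHOT, GET);
        // ... do the traced work here ...
    } // both spans are recorded when the guards drop, innermost first
    for s in collect() {
        println!("event {} (parent {}): {} ns", s.event, s.parent, s.duration_ns);
    }
}
```

The thread-local buffer and the batch `collect` are the interesting parts: the hot path only pushes a small struct onto a local `Vec`, and all synchronization cost is paid once when the batch is drained.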
It's usually unacceptable to halve the QPS after introducing a tracing facility. However, don't worry about the 6% performance regression with minitrace: in real life we will only trace fewer than 10 events for a point get, and the performance impact will be hardly observable.

I believe you are all wondering why it is so fast. We were inspired by NanoLog to maintain per-thread buffers to reduce synchronization between threads. Thanks to this design, we can also use batch collection to reduce the number of copies. We also use the TSC, the CPU's timestamp counter, to record time, which is faster than obtaining the system time from the OS. In addition, we avoid using strings to name events and use numbers instead, we optimize the size of structures to better fit the cache, and so on. The current performance is not the end; we will still have opportunities to optimize it further in the future.

After talking about the performance advantages of minitrace, let's take a look at its situation in the open source community. Minitrace is compatible with existing tracing protocols: as shown previously, we can easily integrate minitrace into tracing systems such as OpenTracing and Jaeger. We also have contributions from the community; we'd like to thank Ren Kai and Grace Rich for their contributions. To summarize, minitrace is a high-performance, general-purpose, and young tracing library, which is still improving. Here's the repo link, and all comments from you are welcome. That's all.

Thanks, Zhengchi. Minitrace is very impressive, and I believe it can help us a lot in analyzing the whole system. Thanks to Zhengchi, Ren Kai, and Grace. The last part is our community time. We want to hear some feedback from the community. Does anyone have feedback for us, or any questions about the SIG updates or the minitrace demo? Okay. If you have anything to tell us, you can also contact us on the Slack channel.
And now we come to the end of our meeting. I would like to say thanks again to today's presenters for their informative talks and to all the attendees for their participation. Thank you for joining us at this meeting today. Can I ask all of you a favor, to help us complete the survey that will be sent out? It helps us with future planning, because your feedback is crucial for making our community meetings better. And don't forget to click the leave-meeting button later; otherwise the recording would be two hours long. Thank you all. I will see you in June. Thanks. Bye.