Hello everyone. I'm honored to have you join this lightning talk today. This talk is from the TiKV community. I'm an infrastructure engineer at PingCAP and a core contributor to the TiKV project. I'm also a maintainer of Titan, which is a RocksDB plugin for key-value separation inspired by WiscKey. I'm going to give a presentation on how we support TTL in TiKV.

First, what is TiKV? TiKV is an open-source, distributed, transactional key-value database. It became a CNCF graduated project last year. So far, we have over 9,000 stars and 300 contributors on GitHub, and there are over 1,500 adopters running it in production across multiple industries worldwide. TiKV is based on the design of Google Spanner and HBase, but it is simpler to manage and has no dependency on any distributed file system.

Here is a picture showing the overall architecture of TiKV. The full data range is split into small ranges called Regions, and each Region has three replicas by default. The replicas are scattered across different TiKV nodes and kept consistent by Raft. The Placement Driver stores the metadata of Regions to provide clients with Region routing information, and it is also responsible for auto-sharding and load balancing. TiKV uses RocksDB as the underlying storage engine; on top of that, it provides horizontal scalability and high availability based on Raft. And unlike other traditional NoSQL systems, TiKV not only provides classical key-value APIs, which we call RawKV, but also both optimistic and pessimistic distributed transactions, namely TxnKV. Besides, it exposes a Coprocessor API, similar to HBase's, to support distributed computing, and it also provides elastic scheduling and geo-replication. The TTL we talk about here is mainly supported in RawKV.

Okay, what is TTL? TTL stands for time to live, which means data will be deleted automatically when it outlives its allotted time. In many use cases, the value of data is highly correlated with time: as time goes by, the value of the data declines. Users may need to delete such data manually and periodically, which causes extra overhead. With TTL, the data can be dropped by the database automatically without any manual burden.

As mentioned before, TiKV is built on top of RocksDB. RocksDB supports TTL natively, but with the limitation that all keys must share the same TTL, whereas users demand a different TTL for each key, with some keys having no TTL at all, that is to say, a mixture of TTL and non-TTL keys. Besides, in RocksDB there is no guarantee that an expired entry won't be returned, and there is no API to query how much TTL is left for a key. To meet the demand, we decided to support TTL in TiKV instead of using RocksDB's TTL feature directly.

Here comes the first question: where do we put the TTL information? There is no metadata for keys, so it is simply appended as 8 bytes to the value. When writing with TTL, TiKV calculates the desired expiry Unix timestamp by adding the TTL to the current Unix timestamp. When reading the key, TiKV checks the expiry timestamp to see if the current timestamp exceeds it. If yes, it returns "not found", just as if the key were deleted; otherwise, it returns the value with the timestamp stripped.
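To make this concrete, here is a minimal sketch of that read and write logic in Rust. It is illustrative rather than TiKV's actual implementation: the helper names, the big-endian layout, and the convention that an expiry of 0 means "no TTL" are all assumptions of the sketch.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Current Unix timestamp in seconds.
fn now_ts() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the Unix epoch")
        .as_secs()
}

/// Write path: append the absolute expiry timestamp (now + TTL) as
/// 8 extra bytes after the user value. In this sketch, 0 means "no TTL".
fn encode_value(value: &[u8], ttl_secs: u64) -> Vec<u8> {
    let expire_ts = if ttl_secs == 0 { 0 } else { now_ts() + ttl_secs };
    let mut buf = Vec::with_capacity(value.len() + 8);
    buf.extend_from_slice(value);
    buf.extend_from_slice(&expire_ts.to_be_bytes());
    buf
}

/// Read path: check the trailing expiry timestamp. If the entry has
/// expired, report it as absent, just as if the key were deleted;
/// otherwise strip the 8 bytes and hand back the user value.
fn decode_value(raw: &[u8]) -> Option<&[u8]> {
    let split = raw.len().checked_sub(8)?; // too short: treat as absent
    let (value, ts_bytes) = raw.split_at(split);
    let expire_ts = u64::from_be_bytes(ts_bytes.try_into().unwrap());
    if expire_ts != 0 && expire_ts <= now_ts() {
        None // expired: "not found"
    } else {
        Some(value)
    }
}

fn main() {
    // A live entry round-trips back to the original value.
    let stored = encode_value(b"hello", 60);
    assert_eq!(decode_value(&stored), Some(&b"hello"[..]));

    // Hand-craft an already-expired entry to exercise the expiry path.
    let mut stale = b"stale".to_vec();
    stale.extend_from_slice(&1u64.to_be_bytes()); // expired back in 1970
    assert_eq!(decode_value(&stale), None);
}
```

The details in TiKV itself differ, but the core idea is the same: the expiry rides along inside the value, so no extra key metadata is needed.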
It seems that's all there is to it, but in a distributed system we should take linearizability into account. The clocks on different TiKV instances may not be synchronized. Consider getting an expired key on the leader. Then the leadership is transferred to another instance with a slower clock for some reason, such as a crash. A second get on the same key may return the value, which breaks linearizability. In this case, we can utilize the globally monotonically increasing timestamp dispatched by the Placement Driver, which is also used for transactions. However, considering the performance overhead, it is not used by default.

Now we have the TTL functionality, but how can the space be reclaimed? For background, data in RocksDB is organized in multiple SST files, and compaction merges old files into new ones. So we leverage the compaction filter of RocksDB to drop the expired entries in the process of compaction. The compaction filter goes through the keys and values and checks the expiry timestamp against the current Unix timestamp. If it has been exceeded, the key-value entry is simply dropped. In this way, we can reclaim space without any extra reads or writes.

But there is still a problem: what if compaction does not run on some key ranges for a long time? The space reclamation may not happen in time. To solve that, the table properties collector of RocksDB can be utilized to record the maximum expiry timestamp in each SST. With that, a worker called the TTL checker inspects the SSTs one by one periodically. If the current timestamp exceeds an SST's maximum expiry timestamp, it triggers compaction manually through the RocksDB CompactFiles API to run the compaction filter logic, so most of the expired entries are guaranteed to be dropped in time.

That's all for this talk. Hope you enjoyed it. If you are interested in the TiKV project or have questions, feel free to contact us through the following channels, including Twitter, GitHub, and our Slack channel. Thank you, everybody.
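As an appendix, here is a minimal sketch of the space-reclamation logic described above: the compaction-filter decision and the TTL checker's file selection. None of this is TiKV's actual code; the `Decision` enum stands in for RocksDB's compaction filter contract, and the per-SST `max_expire_ts` values are assumed to come from a table properties collector at flush or compaction time.

```rust
/// Stand-in for the decision a RocksDB compaction filter returns for
/// each entry: keep it in the output SST or drop it.
#[derive(Debug, PartialEq)]
enum Decision {
    Keep,
    Remove,
}

/// Compaction-filter logic: inspect the trailing 8-byte expiry
/// timestamp and drop the entry if it is already in the past. This
/// reclaims space "for free" during compactions that run anyway.
fn ttl_filter(value: &[u8], now: u64) -> Decision {
    match value.len().checked_sub(8) {
        Some(split) => {
            let ts: [u8; 8] = value[split..].try_into().unwrap();
            let expire_ts = u64::from_be_bytes(ts);
            if expire_ts != 0 && expire_ts <= now {
                Decision::Remove // expired: drop the key-value pair
            } else {
                Decision::Keep
            }
        }
        None => Decision::Keep, // malformed short value: leave it alone
    }
}

/// TTL-checker logic: given each SST's maximum expiry timestamp, pick
/// the files whose entries have all expired, so a manual CompactFiles
/// call can run the filter over them promptly.
fn files_to_compact(sst_max_expire_ts: &[(&str, u64)], now: u64) -> Vec<&str> {
    sst_max_expire_ts
        .iter()
        .filter(|&&(_, max_ts)| max_ts != 0 && max_ts <= now)
        .map(|&(name, _)| name)
        .collect()
}

fn main() {
    let now: u64 = 1_700_000_000;

    // An entry that expired before `now` is removed by the filter.
    let mut stale = b"v".to_vec();
    stale.extend_from_slice(&(now - 1).to_be_bytes());
    assert_eq!(ttl_filter(&stale, now), Decision::Remove);

    // The checker flags only the SST whose max expiry is in the past.
    let ssts = [("1.sst", now - 100), ("2.sst", now + 100)];
    assert_eq!(files_to_compact(&ssts, now), vec!["1.sst"]);
}
```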