Hello, hello. This is PingCAP. Hey, hey, guess what? It's still the first cup of coffee for me. So thank you so much for switching the time for us. Is it pretty good for you? No, honestly, it's horrible of me to even suggest that, because it's 10 a.m. in my time zone. It just happens that I got a really late start to the day, that's all. Oh, 10 a.m. — that's not so bad. Yeah. It's the last beer of my day. First coffee for me, last beer for you. That's great. Everyone is sheltering in place. I've been holed up in my office here for a couple of months now. It looks nice. Well, thanks. It depends on your perspective, because those are vines growing over the outside — I'm in a pool house. On one hand, it's really nice to have something pretty. On the other hand, the vines make it such that squirrels can quickly climb up the side of the house and get onto the roof. You pay for the beauty. Everything we do for beauty. Or, in my case, some things that I don't do. Okay, good. I'm going to refresh this coffee for about a minute, and that'll give folks like Ken and Nikolai and others time to join; I've invited a couple of others. Thank you. Very good. I'm going to go on mute for a second and refresh this coffee.

Okay. All right. Mr. Owens is here. And Ms. — I think it's probably most appropriate to say Ms. — no, just Amy, come on. "I am bouncing between both this meeting and the SIG meeting, but if you need me, please ping me." Sorry, the other meeting — Contributor? Runtime? It's SIG Runtime. Good morning. Morning. Hi. Nice to see you. Okay, very good. We're four minutes after and we've got a few folks, which is fantastic.
I'm going to go ahead and put a link to the meeting minutes in our chat, and then we'll share those and get the agenda kicked off. If you can reach the meeting minutes, go ahead and catalog your attendance. Ken — is "card" capitalized now? It is not currently, but who knows if that's going to change again. We have this huge, beautiful building way out west of St. Louis, and it has the Mastercard logo and name on it, probably a hundred feet, huge, right? They changed the name about a year and a half ago — they're not going to take down that logo and rename it with a small c for any reason. I'm not sure how much they spent on that, but that's a lot of Mastercard dollars. The cost of an uppercase versus a lowercase letter. That's pretty impressive.

Okay, a couple of housekeeping items as we get underway here. One is that some of you have been on this call before and some of you have not. This is a CNCF — Cloud Native Computing Foundation — call; this group is a special interest group focused on things related to networking: traffic, network protocols, network services. As such, some of the existing CNCF projects that fall into this special interest group as their home base — their homeroom, if you will — are Linkerd, Envoy, gRPC, NATS, CoreDNS. There's a list, and I hope I'm not doing a disservice to any of the existing projects, but that's the general area of focus. The meeting is open to the general public, and we're going to have a presentation. I think Ken is talking to someone else. Very good. As such, the meeting is recorded and is publicly posted on YouTube, so don't say anything that your mother wouldn't approve of, I guess.
Good to have Nikolai here today. We've got two agenda items; we'll get to the second one irrespective of whether or not someone from the Contour project is representing — I'm not sure that we have anyone just yet. The first agenda item up is a project presentation for consideration of incorporation into the sandbox: the Chaos Mesh project, stewarded by the good folks at PingCAP. And we have some of those representatives on the call today. I'll go to this Google Doc — okay, I guess I hadn't seen this yet, but that's part of the presentation, actually. So with that, let me introduce a couple of folks who are with the project. We have Calvin Wang, who's here on the call. Calvin, you and Ed will be presenting today, is that right? Yes. Okay, very good. I don't need to further introduce you guys or the project, so let me stop sharing so that you can take it away. Can you introduce and present the project? We'd be very happy to have you.

Yeah, sure. I will share my screen. Okay — can you see the slides? Yes, we can. Okay. Hello, everyone. This is Ed. I'm the co-founder and the CTO of PingCAP. If you haven't heard about us before, we build a fault-tolerant distributed SQL database called TiDB. About three years ago we donated the underlying key-value storage layer, TiKV, to CNCF, and currently TiKV is at the incubating level. We are also the major maintainers of it, and Chaos Mesh grew out of our internal testing framework. My talk will have two parts. In the first half I'll introduce the project itself, including the technical side and some detailed information about the project. In the second half I will hand over to Calvin, our community manager.
So he will talk about community status and open-source governance, things like that. And this is some information about me — if you have any questions, feel free to reach out to me via email or Twitter.

Okay, first thing first: why is chaos engineering so important? I've been working on distributed systems for many years. In the early days, we tended to build monolithic backend services — a single monolithic service — because scale was not a problem, and online businesses were so simple at the time. But in recent years, the idea of microservices has become more popular. And from the perspective of a distributed-systems engineer, writing tests, like unit tests, for a distributed system is really hard, because the runtime state of a distributed system depends on so many things: network status, message order, system topology, things like this. That means rebuilding the state of the distributed system to write a unit test — a deterministic test — is nearly impossible, I think. Another big trend is that Kubernetes is winning — has already won. Kubernetes is eating the world. It is so easy to build a distributed system on top of Kubernetes: sometimes you just write some YAML files, and Kubernetes will handle deployment, failure, orchestration, things like this for you. It really lowers the bar for developers to build complicated systems, and I think that's a good thing.

Before I talk about Chaos Mesh, I want to share a little bit of the history of this project. Before I even created PingCAP, I watched a video from FoundationDB — some of you may have heard of that project. They talk about a technique called simulation testing. We didn't even have the term chaos engineering at that time — this was 2014. It is a great talk, and very insightful.
I suggest, if you're interested in chaos engineering or simulation testing, that every one of you watch that video. The core idea is to use a simulator to intentionally create failures and see if your system can handle them well. Even now, I still think this is the only way to test the robustness of a system. Maybe some of you watched the show Silicon Valley — if so, you know why I'm bringing up chaos engineering.

Around this idea there is actually a lot of open-source software, but all these tools are separate from each other, and their goals are totally different. For example, Jepsen is the most famous one, but Jepsen is focused on checking the consistency or correctness of a transactional system, like a database. By the way, we have an official Jepsen testing report. Another interesting project is named Namazu, but Namazu is more focused on messing up the runtime layer: the filesystem, the Linux kernel, the JVM, things like this. Most of these tools are not very easy to use — you have to be an expert in basically everything just to make them work. It is really painful, believe me. And there is no framework to chain up all these separate tools and expose a human-readable, programmable, user-friendly interface for the developer or the QA engineer to manage the different types of failure and apply them to your test workload or your production workload.

Another big challenge is Kubernetes and the container itself. For example, at PingCAP, our internal testing environment is a super-large Kubernetes deployment. We have different types of tests and different versions of our database — that means multiple clusters, multiple deployments, multiple services in a single, super-huge Kubernetes. And TiDB is a distributed database with many different components, which means many different pods.
So we want to simulate as many kinds of failure as possible in our tests. If we did it all by hand — manually killing pods, changing iptables — that is not good, because it simply does not scale. What's even worse, some tools don't even work well inside a container, like tc, the traffic-control tool, and iptables falls into this category as well. We also have to make sure that pods on the same physical node are not affected by each other when we apply a chaos test.

So here is Chaos Mesh — the background. The predecessor of Chaos Mesh was our internal database-testing framework. It has a pretty cool name: we call it Schrödinger. But that project is tightly coupled to TiDB's codebase. We saw the potential of this platform and decided to extract the chaos-engineering part into a more general, more independent project, which is Chaos Mesh. We started this work in, I think, September last year and open-sourced it by the end of last year. So far Chaos Mesh has been open source for just four months, and we have already migrated all our legacy chaos tests from our old platform to Chaos Mesh. So we are the first adopter of Chaos Mesh, I think.

I would say the best part of Chaos Mesh is that it is a one-stop solution, and it is so easy to use. Let's see how easy it is, using Istio as an example. Istio is a popular service-mesh framework — everyone knows Istio. Let's say we want to inject some chaos into its control plane or data plane to observe the performance, stability, security, or anything else you want to confirm. Let's see how we do this. Say you have an Istio cluster deployed in your Kubernetes. It's just two steps, and all you need is to write a YAML file like this, defining what kind of action you want to inject into your existing deployment.
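For readers following along without the slides: the kind of YAML file being described might look roughly like the sketch below. This is a hedged reconstruction, not the actual demo file — the `apiVersion`, field names, and `scheduler` block follow the Chaos Mesh v1alpha1 CRDs from around the time of this talk and may differ in newer releases, and the namespace and labels are illustrative.

```yaml
# Hypothetical PodChaos experiment: make one matching pod fail,
# repeating every minute (field names per the v1alpha1-era CRD).
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: istio-pod-failure-demo   # illustrative name
  namespace: chaos-testing
spec:
  action: pod-failure            # other actions include pod-kill
  mode: one                      # pick one pod from the matched set
  duration: "30s"                # how long each injected failure lasts
  selector:
    namespaces:
      - istio-system             # illustrative target namespace
    labelSelectors:
      app: istiod                # illustrative label selector
  scheduler:
    cron: "@every 1m"            # repeat the experiment every minute
```

You would start the experiment with `kubectl apply -f pod-failure.yaml` and stop it with `kubectl delete -f pod-failure.yaml`, which is the two-step flow demonstrated next.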
For example, I will use the pod-failure chaos and mess up the Istio cluster. The first step is to use kubectl to apply this YAML file. You can see this is the dashboard of the application, with the metrics provided by Istio — and the metrics are gone during this time. If you want to stop the chaos, you just use kubectl delete and you are set: the service is alive again. So it's really very easy to use. You don't have to write code; you don't have to do anything manually. And we can even control the duration and frequency of the different kinds of chaos — here is an example that randomly kills pods every one minute. In this example we are testing TiDB. TiDB has a self-healing feature: if you randomly kill a storage node, you have high availability, so it will come back automatically. We can randomly kill nodes with Chaos Mesh to make sure our automatic failover works. There are also many other scenarios you can try using Chaos Mesh.

Okay, I will talk about some key features and the technical design of Chaos Mesh. The first thing is that we follow cloud-native design: we define the different kinds of failure — the different kinds of chaos — via CRDs, so a developer or QA engineer can just use a YAML file to define a chaos experiment. And we use the sidecar pattern to inject the chaos into your application pods, so it is totally transparent to the application layer. Besides randomly killing pods, we provide many different kinds of chaos types, like network chaos — we are in the Network SIG, after all. We can simulate network delay, drop network packets, or create a network partition, because we are building a distributed database, so we want to make sure that when we mess up the network, the system still behaves well.
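The network chaos types just mentioned — delay, packet loss, partition — are declared through the same CRD mechanism. As a hedged sketch only (v1alpha1-era field names, illustrative selector and values, none taken from the talk), a network-delay experiment might look like:

```yaml
# Hypothetical NetworkChaos experiment: add latency to all pods
# matching the (illustrative) label below.
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: network-delay-demo
  namespace: chaos-testing
spec:
  action: delay                # other actions include loss and partition
  mode: all                    # affect every matched pod
  selector:
    labelSelectors:
      app: example-service     # illustrative label, not from the demo
  delay:
    latency: "90ms"            # injected delay
    correlation: "25"          # correlation with the previous packet, in %
    jitter: "10ms"
  duration: "60s"
  scheduler:
    cron: "@every 5m"
```

Because the injection is scoped by the pod selector, only the matched pods see the degraded network — which is exactly the per-pod granularity that raw iptables or tc on the host cannot provide.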
Besides network chaos, we can also inject IO errors via the filesystem. And we can simulate clock skew, which is very useful when you are developing a database or a transactional system, because we use timestamps all the time. Another interesting chaos type is kernel chaos: we use eBPF to randomly inject failures into your syscalls. It is very interesting.

This is the architecture of the Chaos Mesh project. Just like I said, it is very cloud native: we use CRDs to define intent, we use the sidecar to inject the specific chaos into the pods, and we create a customized controller manager to manage the whole lifecycle. It is really straightforward.

Currently, I think Chaos Mesh is not doing enough on observability: the effects of a chaos test currently need to be observed through the metrics of the application being tested. So we are considering giving Chaos Mesh its own dashboard to improve observability. In addition, to make it easier to use, we are considering offering a public service on public clouds like GKE or EKS, so that users don't even need to write the YAML file: if you have already deployed your application or microservice on EKS or GKE, you could click some buttons on our service to create some chaos to test your application. The third thing is automatic validation. What is automatic validation? We inject some chaos into an application, and you have to check the output of your application to verify that your system is behaving well under the chaos. We have already implemented automatic validation for our database, but this depends heavily on domain knowledge, right? So I can easily write the validator for my database.
But if you have a different application, I don't know how to check your system's behavior when the chaos is injected. So we plan to create a verifier API layer, so that third-party applications can develop their own validators for testing, and the whole test process can become completely automatic. Okay. Next, I would like to invite our community manager, Calvin, to talk about the community side.

Okay. Thanks, Ed. Now let me talk about something less technical. On the community side, although we've been open source for only about three months, we have gained a lot of attention from the community. As of now we have received 1.5K stars on GitHub, and if you look at the curve on the right side, you will notice that the first thousand stars were achieved during the first two weeks after open-sourcing. We now have 27 contributors from multiple organizations — we will talk more about that later — and we have kept a steady pace of commits: so far we've got 270, I think more than that, commits in total. We do not have a website yet, but we are fully aware of the importance of documentation, so we put together our documentation for the community using the GitHub wiki. And we have a monthly community meeting in the planning phase; I think it should be ready soon, so everybody here is welcome to join when it's ready. Next slide. Another indication of the recognition and attention is that during the first week after open-sourcing, we published our introduction article. It was among the top stories on Hacker News, and it also ranked first in the GitHub trending list for the week. Next. We are really flattered and humbled by this recognition and attention. And with that, we are already seeing increasing adoption from the community.
As Ed already mentioned, PingCAP is the first and perhaps the heaviest user of Chaos Mesh. And we have other adopters from multiple industries. For example, we have Dailymotion, which is like the YouTube of France. We have Celo, a digital payment solution provider. Inspire is a very big cloud-computing service provider in China. And Xiaopeng Motors is an intelligent automobile manufacturer — like the Tesla of China. We are pretty excited about that, and some of our contributors come from these adopters.

So here comes the question: why do we want to donate Chaos Mesh to CNCF? I think it's quite obvious to us. First of all, the values and mission of Chaos Mesh are in very good alignment with CNCF. As Ed already introduced, Chaos Mesh has adopted a very cloud-native design, so it can easily integrate with the CNCF ecosystem. The initial and primary intention for Chaos Mesh is to serve as a universal testing platform for distributed systems on Kubernetes, so that it can enable resilience with observability — and these are also two of the most important qualities of cloud-native projects. Our ultimate goal is for Chaos Mesh to become the chaos-engineering standard on the cloud; of course, that has to be with help from CNCF. As for the benefits to Chaos Mesh itself, speaking from the very good experience of TiKV, which is in preparation for graduation, I think that by joining CNCF we could really use the guidance and assistance in community governance and other aspects of developing the community. And CNCF is widely recognized as a neutral home for collaboration; for a very young project like Chaos Mesh, collaborating with other cloud-native projects is the only way for us to get better together. And, obviously, we really would like to help more and more developers.
With that, we are coming to the end of the presentation. Thank you all. And if you like us, you know, be our sponsor. The Chaos Mesh maintainers are all here, so we can answer any questions about Chaos Mesh.

Great presentation, guys — there were even a few jokes included, which was nice. Kudos on the quick ramp of the project, by the way; your marketing team will have to teach me some tricks, because the numbers are impressive. So I'll start with a couple of questions and invite the community to ask as well. First, some housekeeping: we have the majority of the call's time to talk about Chaos Mesh, and the second item on the agenda can be covered offline if we don't get to it. Ed, at one point you talked about chaos in one container affecting another. At the time, I didn't know whether you were setting up the general challenges of doing chaos things, or talking about one of the things that Chaos Mesh as a project helps solve.

Yeah — in that slide I'm talking about the problem that with some tools, for example, if you manually change the iptables of your host physical node, it may affect the networking of all the pods on that physical node. So if you want to simulate a network partition, or mess up the network for specific pods but not other pods on the same physical node, you cannot use iptables directly.

That makes sense. It's the level of granularity of chaos. In the past, think of Chaos Monkey, which was more focused on IaaS or VM land. A couple of things.
One of the slides I was hoping we might have — I know this is a bit of a difficult one, or can be — would discuss other projects and contrast them, things like Litmus Chaos or others. How do you characterize that?

You mean the competitors, or projects in the same space? From my own perspective, as far as I know, Chaos Mesh provides the most comprehensive set of chaos types — for example, the kernel, the network, the system clock. I think the advantage of Chaos Mesh is the comprehensive types of chaos.

Yeah. Litmus Chaos in particular — their appetite may be larger than the actual set of experiments they have today, but the notion of a "chaos hub" would hint at a healthy set of experiments, and I don't know that that's necessarily the case today. They do have experiments. Speaking with my SIG Network hat on, the experiments you use as examples — the ones you support today — are in large part why we're doing this: they're network-centric, and the service-mesh examples, I'll be very biased and say those are a beautiful thing. So I guess the broader question here is around how people design new experiments, including ones that are network-centric. You talked about a validator framework — the verifier API. How many engineers are working on this? How does one bring in a new piece of chaos?

I think I will hand this question over to Zhou Chao. Can you address it? I think the question is about how hard it is to introduce a new kind of chaos — is that right?

If I can amend the question, Lee — just a quick amendment. Speaking from a networking point of view, do you have any predefined templates or recipes that people can reuse to extend the types of networking chaos you can introduce?
Sorry, I think there's a signal problem — I couldn't hear the question. You didn't hear me? Yeah, I missed most of it. Okay, let me try again. I wanted to amend Lee's question. Are there any existing types or templates that people can use to extend what already exists and introduce new networking chaos types?

Yeah. Actually, I think Lee posted a link about how you create your first chaos experiment for your existing application. If you want to create a new kind of chaos failure — like a new type of network partition or something like that — we don't yet have an API layer to help you create a new kind of chaos easily. But I would say it is very easy to apply the existing chaos to your application. So we don't have a manual or document to help you easily create a new type of chaos, but you can use the existing chaos types to create an experiment with just a YAML file. We have a document about this.

And I'm Rochang, a maintainer. Actually, we now have a document about how to add a new type of chaos to Chaos Mesh, and we already have almost two or three types of chaos introduced by contributors. It's not very hard to add a new type of chaos to Chaos Mesh.

Cool. Good question — it is an important consideration, and part of how you contrast against something like Litmus Chaos in terms of having a more diverse set of chaos: more of an SDK or a framework, an API layer or a model SDK, to create a new kind of chaos. Thinking of a new type of chaos, and understanding that the project is primarily written in Go: is there a requirement that new chaos be written in Go?

Not all of it. I would say Zhou Qiang can answer this question.

Most of the chaos types are created in Go, but not all, I don't think.
We use some eBPF — yes, we can write some eBPF programs and other things. For the time chaos we use ptrace, and in the kernel we use BPF. So I would say it is fifty-fifty, I think, because sometimes when you want to mess up kernel-level things you have to write eBPF, or use ptrace to create the clock skew. Yeah.

Hi, Lee, this is Queeny from PingCAP. I'm also a community manager. I want to chime in regarding how we differentiate ourselves from other chaos-engineering projects. We didn't include this in the slides because we do not want to appear so competitive, but we do have some information in place, and I would like to introduce it if you think that's okay.

I do. I think people will — as you present it, I would say everyone should take it with a grain of salt, as it's just a perspective from your internal point of view.

Yes, that's what I mean: we are speaking from our perspective and might not be so objective. So let me give some illustration of the differences. Basically, Chaos Mesh provides richer and finer-grained injection capabilities and is more optimized for complex applications — providing network partitioning, filesystem IO interference, time interference, kernel injection, et cetera. These are very important for testing complex distributed systems. And Chaos Mesh's injection capability is completely provided by itself and can be controlled independently. Litmus, by contrast, does not itself provide many chaos capabilities so far; some injection capabilities rely on other chaos tools, such as Pumba for network delay. Another point is that Chaos Mesh is more cost-efficient and easier to use: creating a chaos experiment in Chaos Mesh needs only one YAML file, and there are only a few fields that users need to set.
It also provides a flexible application selector. Litmus Chaos requires two configuration files — an rbac.yaml and an experiment.yaml — and many of the configurations are complicated, part of the reason being that their chaos implementation relies on creating Jobs in the corresponding namespace. In addition, Chaos Mesh will also provide front-end management of experiments in the future: a chaos dashboard combining chaos events with application monitoring, which will make it easier to observe the effects of chaos experiments. Currently this is only supported for TiDB, but a universal version will be available soon. The last point is that the Chaos Mesh and Litmus architectures are very different. Chaos Mesh deploys a DaemonSet on each node, and we believe this architecture can do more, such as supporting richer network-injection capabilities, kernel-injection capabilities, etc. Again, that's from our perspective — just take it with a grain of salt.

Thanks for that. That's about as much as we could ask for, I think. A couple of other quick, high-level questions. Since we had the time, we got a chance to ask a bit more about the workings of the project, but just to sum up the project quickly: Kubernetes-native, but Kubernetes-only — would that be an accurate statement? For now, I think it's Kubernetes-only. And no Windows support currently? I don't think so. Okay. And in the future, is there a plan for Windows support? No, not as far as I can see — no plan for Windows for now. Yeah, because we rely on so many features provided by the Linux kernel, it's really hard to apply them on the Windows side. Very good. I think the last one for me: the service — is the vision that it's a free public service, or can you say more? Yeah, I put a question mark there.
So I'm not sure we will provide it for free. We could provide a service using a freemium model: maybe we'll put some advanced chaos types behind a service fee. Maybe that's a business model for Chaos Mesh in the future. Maybe. Got it, very good. Others on the call — does anyone have questions for the Chaos Mesh team?

I do have a quick question slash suggestion. The way the architecture was described, you have a local worker-node daemon. Have you considered being able to test the infrastructure itself? I know this primarily targets the application, but, for example, I would probably like to test my Kubernetes infrastructure before I start deploying my application, so that I know it scales well and reacts properly to a missing node or to latencies. I might want to check whether my CNI is performing well; I might want to compare it with another CNI, et cetera.

Interesting — it's an interesting idea. Maybe we can put some kind of plug-in system into our DaemonSet so that you can do some extra checks there. Yeah, I think it's a good idea. Thank you.

Questions from others on the call? Nikolai, more questions? No, I'm fine, thank you. Ken had noted that chaos engineering and testing, as a topic and as a set of tools, is probably very interesting to the end-user community, as indicated by the stars on the project. Let me at this point maybe harass you guys with the same question again: I mentioned Litmus Chaos as top of mind for me, but there are other projects out there. If you do have characterizations of those, I would be curious to see them. Yeah, sure — how can we send that to you? Maybe we can post more information on the PR. What do you think, Lee?
Putting myself in your shoes, I would tread lightly in the things you say — it's easy to offend, even if you're a native English speaker. Okay, sure, no problem. We can come up with a table with the items we include as the features to compare, and the similar projects in the field. Is that okay? That sounds great. And let me not put all the diligence on you guys: I think Litmus Chaos specifically is also up for proposal, so I'll coordinate with the other SIG chairs there as well. That will help. Sure, thank you.

Very good. Actually, I did have one other question. Calvin, last time we spoke there weren't any TOC sponsors as of that time — is that still the case today? That's right; we still need the sponsors. Yeah, we need three. Very good. So thank you very much for the presentation. Anyone else have any questions or comments on the call?

So what are the next steps? Do we need to get sponsors on our own? Any advice regarding that?

Yeah, a couple of things. One is that I appreciate the presentation that you, Calvin, and others had given earlier, just to get familiar with the project in advance. For my part, I have an interest here, and I'm willing to take up performing a SIG review of the project. The project is only four months into open source, so I do have questions around the diversity of the contributors and the governance that's there today — the fact that all of the maintainers are PingCAP folks. That's a very organic thing; that's how projects generally get started, and that makes sense. So, Calvin, to answer your question: a SIG review — part of that, and I'll use the word diligence lightly, part of that examination, if you will — is something I could help with a bit, this whole process here.
We will be talking about Chaos Mesh as a project that we're reviewing in our SIG chairs meeting, in the presence of other TOC members, so they will learn of the project through that. I will let them know — and Ken will as well — that the project is looking for TOC sponsors. So that'll be one vehicle for soliciting interest, outside of the other ways in which you might be soliciting interest.

Okay. So do you suggest that we solicit sponsors, or interest, on our side?

Yeah. I am also a bit tongue-tied here, because that process has recently evolved, and I don't know that it's entirely solidified. On paper, the process is essentially exactly what you guys are doing: you come to the appropriate SIG with your presentation, and that SIG raises it up and helps solicit that interest. Now that you've presented here, I don't think it would be inappropriate to reach out directly. Generally, what would happen is we would do a review — and I'd say this is a relatively new process. That review — that more concise presentation and perspective coming out of the SIG — is a good artifact for the TOC members to quickly look at and gauge their interest and their willingness to sponsor. So that artifact, I think, will be helpful to you. I don't think it's inappropriate to have those other conversations, though. You might find that some of the TOC members would say, oh, hey, I haven't heard from the SIG yet — let me hear what they have to say, or what did the SIG say? Others of them will be much more willing to engage directly, have those conversations, and offer sponsorship. So I don't have clear guidance, because it goes different ways.

So is there any document that we need to prepare?

On your part, no — we will move forward with this in the SIG.
We'll put together a short document as a review of the project and some recommendations that we might have about it. In there, we may have some additional questions, and that would be the only other artifact to produce. Otherwise, no. Got it, got it. Thank you. Thank you, guys — this has been great. Thank you for your time and your questions and comments. Thank you. All right, very good. We'll talk to you all soon. Sure, talk to you soon. Bye. Bye. Have a great day.