Hi Rahul. Hi Rahul, can you hear me? Yes. You muted yourself. Yeah, right, I was just finding the mute option, I couldn't find it. Okay, thanks. Hi Karthik. Hi Amar. I think the recording has already been started. Yeah. Oh, good morning. We'll get started in a little bit. Hi, good morning. Good morning.

All right, it's 11 a.m. on the east coast of the United States, so we'll get started at 11:02 just to let stragglers do what they do. All right, good morning. It's now two minutes after the hour, and this is the SIG App Delivery meeting for October 23rd. Today we are going to have two presentations, then some discussion around specs we should be thinking about for SIG App Delivery, and then we'll do updates, so that this meeting doesn't go all day. When you're giving the presentations, please try to keep them between 15 and 20 minutes; we will definitely chime in. So first up today is Uma. Are you here? Yes, this is me. All right, Uma, the floor is now yours.

All right, can you see my screen? Yes, I can. All right. Thank you, first of all, Brian and Lee and others, for allowing us to present here. We are happy to be here — and when I say we, I mean a bunch of people from the Litmus project. Our hope today is to introduce Litmus, which is a chaos engineering project for Kubernetes. We were part of the chaos engineering group in CNCF, which was started by Chris A. last year, and it looks like Chris thinks that chaos engineering as an area is now being handled under this charter, so he recommended that we present here. The hope is to get a lot of coaching, feedback, mentoring, and also encouragement — that's our purpose.

Along with me today — I'm Uma Mukkara, co-founder of a company called MayaData. We also have another project in cloud native data management, OpenEBS, and Litmus is the chaos engineering project that we sponsor. We have about eight full-time members working on this project right now, and the community is starting to really grow; I'll talk about that in a little while. I'll try to keep my part of the presentation to about 10 minutes, and Karthik, who is the architect of this project, will chime in with a simple demo.

With that, I just wanted to put up what we think is the alignment with SIG App Delivery. Kubernetes is a big project, people are now using it and getting into delivery mode, and there are a lot of challenges around that — that's where the SIG App Delivery workgroup will help the community. As we move from development to CI pipelines to actual staging and production, chaos engineering becomes more and more relevant. I could be totally wrong about this — this is my first meeting here, to be totally honest, and I want to learn more about what we do as a group — but what I think is that once an application is done with development and is moving towards production, this group becomes more relevant in terms of how to deliver it, how to operate it, all that stuff. So we think chaos engineering as an area fits well here, and that's probably what Chris suggested too. We are very much willing to learn whatever we can pick up here.

Before we go into chaos engineering, I just wanted to share my typical pitch — the one I give at meetups and to other users — where we talk about reliability.
Reliability is very important: outages of services cost not small dollars but big dollars. We have seen very standard platforms still facing outages — GitHub, Slack, even AWS. That does not mean they have not followed reliable testing processes; in fact, they are some of the best. So the key really is finding weaknesses in the deployed system. Failure testing in CI pipelines is generally not good enough. I have a quote here from Ali Basiri, a well-known chaos engineering expert: you can test your apps as much as you want, but in production no one can predict what the environment will be, so there is always a chance the system will fail. So what do we do to find the weaknesses? Break things on purpose in production. The loop really is: find the weaknesses, fix them, and repeat the process.

That brings us to: what is failure testing, and what is chaos testing? The main difference is that failure testing stops at the pipelines — in CI, for example — and CD is then just a matter of pushing the applications onward from CI; most likely the SaaS-based systems will have a CD. But chaos testing never ends. It extends to pre-prod and then production environments.

I also like the way Mark, a CEO and co-founder in this space, explains chaos engineering. He calls it the chaos engineering loop. In usual engineering, you wait for the system to be destabilized and an incident to occur, then you diagnose, resolve, and try to bring it back. In chaos engineering you don't wait for it: you inject the fault, analyze, tune, and then go back to observing. You do the same work in a more planned way — you can choose when to inject, you can cause less disruption, and you can do it in staging, pre-prod, and then prod. So it is a more planned way to get resiliency.

To summarize: in CI, resiliency is achieved by functionality tests as well as failure tests. In production, it's really achieved by — first of all you need a good CI — but then you introduce random chaos, analyze the results, and keep adding more resilience scenarios. So this is really important in terms of chaos engineering.

Given that as an introduction to chaos engineering, we are all talking today because there is a cloud native wave happening: Kubernetes is really the environment of the near future, and the whole world is starting to move towards it. I had a chance to listen to Dan at a recent conference, and I really liked the way he explains it: Kubernetes has really taken off, and together with the big project underneath it — Linux itself — you are looking at something like 35 million lines of code, while the application you write on top is maybe 40,000 lines of code. Compared to the rest of the stack, that's only about 1%; the other 99%, which includes Kubernetes and a lot of the components your application is built on, is not even controlled by you. So how do we really get resilience? How do we make sure things are going to work well? The answer is: extend your testing to chaos engineering, and do it in production. Now let's start talking about the practical side: okay, I understand the need for chaos engineering in a cloud native environment, but how do I do it?
Thankfully, Kubernetes gives you a lot of resources, and for some time now everybody has been moving towards operators and CRDs. But so far these have mostly been used for development. Is there a way to standardize these APIs for doing cloud native chaos engineering? That's where we chimed in and defined some interfaces for doing chaos through CRDs, so that it becomes a bit more generic than something kept within one team, and so that we can offer chaos engineering as a general capability to the Kubernetes community.

To explain that a little more: we all know how to create a pod — the developer gets it done; if I want to create more resources, again I go to the declarative configuration. Once I'm done spinning up my app and I have all the resources, and I now want to do chaos testing, it's as simple as spinning up one more kind, a chaos engine, and pulling in the chaos experiments. We envision chaos engineering as just an extension of natural development, and it should fit into the way we already do things as developers, declaratively, in Kubernetes.

That brings me to introduce LitmusChaos. Whatever I just described as the requirement is exactly what LitmusChaos, or Litmus, fulfills. To put it simply, it's cloud native chaos engineering for Kubernetes developers and SREs. How is Litmus used, typically? I'll talk about the Litmus Chaos Hub in a second, but let's assume you have enough chaos experiments out there in the hub, and you have an app or a pipeline running. All you need to do is install Litmus, which is quite a simple thing to do; that installs the chaos libraries and the chaos operator. Then you pull in the required charts from the hub — this is the community we expect to grow, with everybody submitting their experiments into the hub. Whatever app you are using, there will likely be some charts related to it, so you pull them in and install them, and those charts are nothing but CRs. Then you start injecting the chaos: you annotate your app to say "start this engine", the engine lists the experiments to run, and there you go — the chaos container starts up, runs the chaos, and you can observe the results.
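To make the "one more kind" idea concrete, here is a minimal sketch of what such a chaos engine resource could look like, expressed as a Python dict and dumped to YAML. The field names follow the litmuschaos.io/v1alpha1 CRDs as commonly documented (appinfo, chaosServiceAccount, experiments) and the target Deployment is assumed to carry an opt-in chaos annotation; treat the exact keys and names as illustrative rather than authoritative.

```python
# Minimal sketch of the declarative chaos flow described above: a ChaosEngine
# resource points at an annotated application and lists the experiments to
# run. Field names are taken from the litmuschaos.io/v1alpha1 CRDs as
# commonly documented; they are illustrative, not an exact schema reference.
import yaml

chaos_engine = {
    "apiVersion": "litmuschaos.io/v1alpha1",
    "kind": "ChaosEngine",
    "metadata": {"name": "nginx-chaos", "namespace": "default"},
    "spec": {
        # Which application the chaos is aimed at, selected by label.
        "appinfo": {
            "appns": "default",
            "applabel": "app=nginx",
            "appkind": "deployment",
        },
        # Service account allowed to run the experiment pods.
        "chaosServiceAccount": "pod-delete-sa",
        # Experiments previously pulled in as charts from the Chaos Hub.
        "experiments": [
            {"name": "pod-delete"},
        ],
    },
}

print(yaml.safe_dump(chaos_engine, sort_keys=False))
# The output can be applied with `kubectl apply -f -`; the Litmus operator
# then spins up the chaos runner pod and records the results.
```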
Now, what is this Chaos Hub? You've got a good operator framework, but you really need the experiments. The big challenge we face is that as a team we can do all that is needed, and we can involve the community to develop the infrastructure, but once the infrastructure is done, the major portion that remains is the actual chaos experiments for applications. That's why we created the Chaos Hub. Let me explain how the process works — but before that, this is how the hub looks. It is available at hub.litmuschaos.io. For now we are still in the process of moving some of the experiments over; I think there are about 20 to 22 experiments in total, and we have about eight there so far. The generic chaos experiments are the ones for developers to start with — pod delete, container kill, pod scale, network loss, network latency, disk fill, and so on — and then there is application-specific chaos. Let me explain how that goes.

Typically, you all have CI pipelines in every DevOps environment, and the pipelines include some functionality tests and some failure or HA testing. The idea is that we encourage developers, or the CI pipeline admins, to convert a regular failure test into the Litmus format so that it becomes a chaos experiment you can use in your pipelines. Additionally, you push that chart or experiment to the Chaos Hub so that your application's users can use the experiment in staging or production. The whole idea is: keep doing your functionality testing, but we — and when I say we, I mean the users of the application — want the failure test cases to be handed out to the users of that application so they can use them in production for chaos engineering. That's the whole idea of the hub. SREs will start using those experiments in staging to begin with; when things look good, you typically start doing game days, and then you actually move the chaos testing into production. And as more application charts and experiments are submitted to the Chaos Hub, you have more failure testing you can do in your own pipelines as well, which also increases the quality of your application in the pipeline. So it's a win-win for your users and for your own teams. Just imagine I am converting my legacy application into a cloud native environment — containerizing it and running it on Kubernetes — and now I want to prepare a pipeline. I'm using many databases underneath and I don't know what failure testing I need to do on Kubernetes itself; you can pull those experiments from the Chaos Hub. That's the whole idea.

Given that — how am I doing on time? You're at about seven minutes of the fifteen. Okay. I think we have a decent community to begin with, and we are trying to follow the standard practices that every other Kubernetes community follows. We are trying to keep a release cadence of the 15th of every month, and there's a community meetup that happens twice a month. Contributing new charts is easy: we are building tools where developers can come in, run a small program that creates templates, put their failure logic into those, and boom, they're ready — test it and submit it for your application.

Who's using it? Litmus was actually born out of the e2e testing that we built for OpenEBS, which is a CNCF Sandbox project. What we did was say: look, we have done a lot of e2e testing for OpenEBS users — why don't we open this infrastructure up to the entire Kubernetes ecosystem? So that's how it came about. Litmus is in production in the OpenEBS community; a lot of users are now beginning to use these Litmus test cases for OpenEBS in their production environments, or at least in staging environments. And as you can see, there is a chaos test pipeline next to the functionality tests: for every commit of OpenEBS we now run about 10 different chaos tests that are specific to OpenEBS. Anybody can construct negative testing in their pipelines in a similar way and benefit from it.
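As a hypothetical sketch of how such a chaos test could gate a pipeline stage, the snippet below applies an engine and then polls for a verdict before passing or failing the build. The ChaosResult resource, its naming convention, and the status.experimentStatus.verdict field are assumptions based on the Litmus CRDs and may differ between releases; the kubectl invocations are plain shell calls.

```python
# Hypothetical CI step: run a Litmus experiment and fail the stage if the
# chaos verdict is not "Pass". Resource names and the verdict field path are
# assumptions about the Litmus CRDs, not a guaranteed interface.
import json
import subprocess
import sys
import time

def run(cmd: str) -> str:
    """Run a shell command and return its stdout, raising on failure."""
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout

# Kick off the chaos experiment defined in the engine manifest.
run("kubectl apply -f chaosengine.yaml")

# Wait for the chaos runner to finish and publish a verdict (~5 minutes max).
verdict = "Awaited"
for _ in range(60):
    out = run("kubectl get chaosresult nginx-chaos-pod-delete -o json")
    verdict = (json.loads(out).get("status", {})
               .get("experimentStatus", {})
               .get("verdict", "Awaited"))
    if verdict != "Awaited":
        break
    time.sleep(5)

print(f"chaos verdict: {verdict}")
sys.exit(0 if verdict == "Pass" else 1)   # gate the pipeline on the result
```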
And we are really happy that some users are starting to pitch in — we have created some issues for that, and as part of the recent Hacktoberfest we were able to get in touch with some community members who are in the process of writing these charts. What we really want from this group is honest feedback on where we can improve, as well as help spreading the word. One idea was that, just like we use Litmus in the OpenEBS CI, it could be used as a tool for CNCF CI; I think we are trying to be part of that CI group and recommend that, and maybe I'll do this presentation again there as well. We are also trying to get a blog post scheduled on what I just presented; it will be on the CNCF blog. You are all welcome to join the Slack — it's part of the OpenEBS workspace itself. We want your feedback and coaching, and we'll be around in every meeting here. Given that, I don't know whether we have time for the demo — probably not, so we'll do the demo next time — but I'd love to answer any questions.

All right, any questions? All right. Well, if there aren't any, people can reach you through issues on your GitHub project — would that be a good suggestion? Yes, the GitHub project, and also the Slack channel on OpenEBS. All right, sounds good. Thank you for presenting today. Thank you for giving us the opportunity.

So the next presentation we have is the Keptn proposal for CNCF Sandbox. Dirk, I thought I saw you earlier — are you ready to present? Yes, I am. All right, take it away.

Thanks. So, thanks for the opportunity to share our project with CNCF SIG App Delivery. My name is Dirk, and I will present, together with my colleague Andy, what Keptn is. Today we want to discuss which problem we actually solve with Keptn and how we solve it, then quickly run through the roadmap, the community and ecosystem, and then the relations to other projects that are already part of the CNCF.

So what is Keptn? We tried to narrow it down to one sentence: it's a message-driven control plane for application delivery and automated operations. Let me lead with two examples, one for application delivery and one for automated operations, to give a quick picture of what we're doing here.

Application delivery — I think most of the people in this SIG will be familiar with that topic. Keptn usually starts its delivery workflow after a new container has been built, so it starts after the continuous integration part. A new container — for example, a new version of a service — is pushed to a container or artifact registry, which sends Keptn an event that a new artifact is available. Keptn stores all of its configuration in a Git repository, updates the specific configuration files, and then triggers a deployment into an environment — for example a dark deployment — executes some tests, gathers monitoring feedback, and gets it back to Keptn. Based on that feedback — there are service level indicators and service level objectives assigned to the specific artifact and service — it decides whether the artifact meets the quality criteria to be promoted to the next stage, where it would again update the configuration in the Git repository and continue, for example with a blue-green deployment, and again gather monitoring feedback. In this example it decides that the quality criteria are not met, triggers a rollback, and notifies the user that the deployment was not successful and has been rolled back.
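The promote-or-rollback decision just described boils down to comparing measured service level indicators against the objectives attached to the service. Here is a toy sketch of that evaluation logic; the SLI names, thresholds, and measured values are invented for illustration, and real SLO definitions are richer (scoring, warning ranges, comparison strategies).

```python
# Toy quality-gate evaluation: compare measured SLIs against SLO criteria and
# decide whether the new artifact is promoted or rolled back. Names and
# thresholds are made up for illustration only.
slos = {
    "response_time_p95_ms": {"max": 500},
    "error_rate_percent":   {"max": 2.0},
    "throughput_rps":       {"min": 100},
}

measured_slis = {   # e.g. gathered from the monitoring provider after tests
    "response_time_p95_ms": 420.0,
    "error_rate_percent":   3.5,
    "throughput_rps":       180.0,
}

def evaluate(slis, slos):
    """Return a list of SLO violations; an empty list means the gate passes."""
    violations = []
    for name, criteria in slos.items():
        value = slis.get(name)
        if value is None:
            violations.append(f"{name}: no data")
            continue
        if "max" in criteria and value > criteria["max"]:
            violations.append(f"{name}={value} exceeds max {criteria['max']}")
        if "min" in criteria and value < criteria["min"]:
            violations.append(f"{name}={value} below min {criteria['min']}")
    return violations

violations = evaluate(measured_slis, slos)
if violations:
    print("quality gate failed -> trigger rollback:", violations)
else:
    print("quality gate passed -> promote to next stage")
```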
The second example is how you can use Keptn as a control plane for automated operations. As before, Keptn is in the middle as the orchestrating instance. Initially, we add operations instructions and again store them in the Git repository; they consist of service level indicators, service level objectives, and also information about how a violation of a specific service level objective can be remediated. After that, Keptn can set up and configure monitoring rules — for example in Prometheus — and set up alerting rules based on the service level objectives it was given. Then the services run, and if any of the SLOs is violated, the monitoring provider detects it and alerts Keptn. Keptn checks its configuration for an applicable remediation action and executes it. In this example it is a simple scale-up. Of course, you could argue you could also do that with autoscaling and an HPA with custom metrics — yes, you can — but you could also think of more application-centric use cases like toggling feature flags, which is also a valuable remediation action. And then again, Keptn receives feedback on whether the remediation action actually helped to remediate the problem or not.
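A minimal sketch of that automated-operations loop: an alert names the violated SLO, the control plane looks up a matching remediation action from its Git-stored configuration and executes it. The action names and the configuration shape below are invented for illustration and are not Keptn's actual remediation file schema.

```python
# Sketch of the self-healing loop described above: SLO violation comes in,
# a configured remediation action is looked up and executed. Config shape
# and action names are illustrative only.
from typing import Callable, Dict

def scale_up(service: str) -> None:
    print(f"scaling up {service} by one replica")          # e.g. patch the Deployment

def serve_cached_content(service: str) -> None:
    print(f"enabling static-content feature flag for {service}")

ACTIONS: Dict[str, Callable[[str], None]] = {
    "scale_up": scale_up,
    "serve_cached_content": serve_cached_content,
}

# Remediation instructions as they might be stored alongside SLIs/SLOs in Git.
remediations = {
    "carts": {
        "response_time_p95_ms": "serve_cached_content",
        "cpu_usage_percent": "scale_up",
    },
}

def on_slo_violation(service: str, slo: str) -> None:
    """Handle an alert from the monitoring provider for one violated SLO."""
    action_name = remediations.get(service, {}).get(slo)
    if action_name is None:
        print(f"no remediation configured for {service}/{slo}; notifying operators")
        return
    ACTIONS[action_name](service)
    # In the real flow the control plane would re-evaluate the SLO afterwards
    # to confirm the remediation actually helped.

on_slo_violation("carts", "response_time_p95_ms")
```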
In the delivery and operations workflows we just saw, there are usually at least three personas involved. There are the developers, who usually provide the container and know best what's inside it and what the application or service consists of. They would usually define the remediation actions, because they know which feature flags are available and which can be used to remediate certain situations — for example, switching from delivering dynamic content to delivering cached static content when the number of requests increases. The second group is the DevOps engineers, who are usually responsible for configuring tools — for example, making sure JMeter is running in the right version and is able to execute performance tests — and they also provide the service level indicators, meaning they configure Prometheus in a way that the SLIs can be retrieved for the specific service in question. Last but not least, site reliability engineers are often involved: they usually define the service level objectives on top of the service level indicators and define the stages and processes.

Historically, most of the work these three groups have done together has been done in pipeline files, and that leads us to the problem Keptn tries to solve. I think all of us have at some point written a CI/CD pipeline. Pipelines are a good thing for continuous integration — that's where they started out — then additional tasks were added, like executing tests, and recently also taking care of the delivery part. The example over here is about 350 lines, so it's not that large, and it contains a lot of information: it has information about the target platform, implicit information about the environments that are used (the typical ones are dev, staging, and production), implicit information about the tools that are used — maybe you use Terraform to provision your infrastructure, Helm for deployment, and hey for performance testing — and it also encodes the process, which is usually always the same when you think about continuous delivery: you take an artifact, you put it in an environment, you maybe run some tests, you evaluate whether some quality criteria are met, and then you promote to the next stage if there is one.

And what we found, in discussions with a lot of customers, is that they are under the impression that pipelines are becoming the next unmaintainable legacy code. Let me reflect on why that might be the case. For one service you might need one pipeline — this is a really simple example just to highlight the point. For one project you most likely have several services, which means several pipelines. And usually, once someone writes a good pipeline, it gets copied and adapted a little, because some other project uses a different load testing tool or a different deployment tool. So you already end up with slightly changed copies within a project. If you then scale that up to several teams, you will most likely end up with a large number of snowflake pipelines, and that is hard to maintain. We already learned in software engineering 101 that copying code is a bad idea, and I think the same holds for pipelines, now that we are at the stage where we write pipelines as code.

The challenges we heard are, with that picture of n times x pipelines in mind: if the security team comes up and says we need a hardening stage because there is a new guideline to follow, you would need to touch each and every one of those pipelines to add that stage. The same goes if you want to use a different tool for a specific task — you would most likely need to touch a lot of those pipelines. The same goes if you want to add notifications to all steps, to have, say, a Slack trail of all the activity that goes on during application delivery or operations. And a very good example we got: if you run an e-commerce shop, there is this magic time between Black Friday and Cyber Monday where you don't want your site to go down. Maybe during that time it is not a good idea to automatically promote new versions to production, so it might make sense to carve out a timeframe of two weeks where you say, in this window I want manual approval before anything goes to production. And then again, you would need to go to each and every pipeline and implement this, or ask all of the teams nicely not to promote to production and hope they don't.

This is where I want to come to how Keptn solves these issues. We solve it by defining application delivery and operations processes in a declarative way — more details to come. We rely on predefined cloud events to separate the process of what is happening from the tools — who is actually executing a deployment, a test, or an evaluation — and we want to provide an easy way to integrate and switch between different tools. The declarative delivery flows we call shipyard files. Instead of implicitly encoding that information in your pipeline files, you have a shipyard file where you define the stages you have and the different steps taken per stage, which are usually built out of a deployment, a test, an evaluation, and then a promotion. So this is really the recipe of what to do in which stage.
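A sketch of what such a shipyard file could look like, again expressed as a Python dict dumped to YAML. The keys mirror early Keptn shipyard examples (stage names plus deployment, test, and approval strategies), but they are meant as an illustration of the idea rather than an exact schema.

```python
# Sketch of a declarative "shipyard": stages plus the strategies per stage,
# with no tool names baked in. Key names follow early Keptn examples but are
# illustrative, not an exact schema.
import yaml

shipyard = {
    "stages": [
        {"name": "dev",
         "deployment_strategy": "direct",
         "test_strategy": "functional"},
        {"name": "staging",
         "deployment_strategy": "blue_green_service",
         "test_strategy": "performance"},
        {"name": "production",
         "deployment_strategy": "blue_green_service",
         # Flip this one field to require manual approval, e.g. for the
         # Black Friday / Cyber Monday freeze mentioned above.
         "approval_strategy": "automatic",
         "remediation_strategy": "automated"},
    ],
}

print(yaml.safe_dump(shipyard, sort_keys=False))
```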
The standardized way of communication is based on the CloudEvents project. For each of the steps in each stage, Keptn dispatches a cloud event to a pub/sub instance; currently our pub/sub provider is NATS. The events decouple perfectly what the current task wants to accomplish from who should accomplish it. And since it's a pub/sub implementation, you can also have several listeners on one topic, which is pretty neat, and you get additional context information about the project that is being operated or delivered right now.

We also introduce the concept of a uniform, where you have a declarative way of defining the tools you actually use to accomplish the tasks that are defined. For example, a Slack trail service that listens to all topics in the pub/sub implementation and just writes out the messages; you could include Argo CD, for example, for deployment, and JMeter for performance testing. This is just an excerpt, of course.

Coming back to the challenges from before: Keptn accepts those challenges and actually has a simple answer to all of them. Instead of manipulating all of your pipeline files, you add your stage in your shipyard file and you're basically done, because the delivery flow and the operations flow are decided at runtime — what is done next is looked up as it runs. So once you update your shipyard file and add a stage, the stage is already taken into account in the current delivery run. The same goes for switching out tools in your uniform file, adding an additional tool for an additional event in your uniform file, or changing the approval strategy for the production stage in your shipyard file.

Coming back to the key features of Keptn: as already said, it's a message-driven control plane for delivery and operations. It uses standardized cloud events for communication, as we saw in the example. We have GitOps built in, so all of the configuration is stored in a Git repository, and that also works offline. We once had a GitOps approach that relied on GitHub being online, and then GitHub was down and nothing worked — that is not a situation we can live with, so we decided to keep a local Git repository and just work with upstreams to the various public instances. Keptn is, I think, one of the first open source projects, at least, that enables automated operations — real self-healing for applications based on SLIs, SLOs, and remediation actions. It works very well in multi-stage and multi-cluster scenarios: usually you have dev, staging, and prod environments, and usually these stages are individual Kubernetes clusters. As opposed to operators, for example — if you use operators, you would need an orchestrating entity all over again, because operators across clusters don't work that well out of the box. We also have support for non-Kubernetes applications, so you can implement and integrate basically any tool that has an API: as you saw with the uniform services before, they get the cloud event and can translate it into whatever needs to be accomplished, so it does not have to be a Kubernetes application or a Kubernetes service. And of course we have observability built in: each application delivery flow and operations flow has a UUID — a keptn context, as we call it — that lets us visualize a trace of what has happened in that specific delivery or operations flow. This is the first version of the visualization; it's not that fancy yet, but we already have plans for the next version to level this up again.
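For a concrete picture of what a uniform service receives on its subscribed topic, here is a sketch of such an event. The envelope fields (specversion, type, source, id, time) come from the CloudEvents specification; the event type name, the data payload, and the shkeptncontext correlation attribute are assumptions for illustration, tying back to the keptn context UUID just mentioned.

```python
# Sketch of a cloud event as a uniform service might receive it via the
# pub/sub layer (e.g. NATS). Envelope fields follow the CloudEvents spec;
# the type name and data fields are illustrative assumptions.
import json
import uuid
from datetime import datetime, timezone

event = {
    "specversion": "1.0",
    "type": "sh.keptn.events.deployment-finished",   # illustrative type name
    "source": "helm-service",
    "id": str(uuid.uuid4()),
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "shkeptncontext": str(uuid.uuid4()),              # correlates all events of one flow
    "data": {
        "project": "sockshop",
        "stage": "staging",
        "service": "carts",
        "image": "docker.io/example/carts:0.10.2",
    },
}

# Any tool with an API can subscribe to the topic, translate the event into
# its own calls (run a test, post to Slack, deploy a chart), and emit a
# follow-up event when it is done.
print(json.dumps(event, indent=2))
```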
So, the roadmap — what we want to achieve by participating in the CNCF and in SIG App Delivery is extending and collaborating on the cloud event specification. There are two ways Keptn can integrate and orchestrate tools: either you write a small service that translates the Keptn cloud event into something the tool understands, or the tool understands Keptn cloud events out of the box. The latter would require us to have a lot of conversations with — as I wrote there at first — all of the CD tools in the CNCF landscape, and I would be glad to have those conversations to take this a step further. What Keptn also brings to the table is easy interoperability between CNCF tools: if you want to use several CNCF tools in one application delivery flow, the orchestration part is already taken care of and you can just hook in your tools at the specific steps where you want to use them. We of course want to continue adding cloud native practices like canary releases and also feature-flag-based self-healing. There I see a collaboration possibility, especially in the self-healing space, with the folks from Litmus, who just gave a very good presentation — thanks for that, by the way — because self-healing works very well when there is chaos testing around, so you can actually test whether your self-healing strategies work. We want to continue improving the interface and also implement the W3C Trace Context, so you have the ability to visualize your application delivery and operations flows with any trace-context-conformant tool. We want to build out support for the uniforms and something like a keptn's wardrobe service, which is a service registry where you can look up which integrations with other tools and services have already been written for Keptn. And we want to keep improving self-healing.

So how does Keptn map to the model of application delivery? I think topic 1.5 can be seen as an input to Keptn, where it's about the app configuration and app parameters — I see Helm and Kustomize a little bit in that space — and then Keptn basically acts as an orchestration tool for the topic 2 and topic 3 agendas. How am I doing on time? 20 minutes. Okay, let's see. You've got about five more. Five more, okay.

So the community is growing, and there is a lot of engagement on Slack — a lot of people are becoming aware of us now and starting to engage with us. We have a growing ecosystem: there are already integrations with other tools, some of which we wrote ourselves and some of which were contributed by others. This is a list of companies we are currently working with on Keptn — there's banking, there are GSIs and SIs, performance and load testing with Neotys, for example, and other workflow tools like xMatters are striving for an integration with Keptn and have actually already built one.

Relation and distinction to other CNCF projects — this is a really interesting topic, I think. Keptn uses Helm out of the box for deployment, and we have a batteries-included service that does simple continuous delivery tasks. Why do we have that? Because we want people who want to try out Keptn to have an easy experience, without needing to configure four different projects just to get the Keptn experience.
Envoy is used in conjunction with Istio for traffic routing with blue-green deployments, we use Prometheus as a monitoring provider, and, as we already covered, we base our standard way of communication on CloudEvents. The relation to the CD and observability spaces is a pretty clear one for me: we want to build out integrations and collaborate on the cloud event specification. And of course there are already existing workflow tools like Brigade, Argo Workflows, Tekton, and Jenkins X, and there are certain differences to each of those tools that we can gladly discuss if you want more information. If you have any questions and don't find the time to ask them now, please feel free to reach out on the Keptn Slack channel and join our community — we also have bi-weekly community meetings — or write us an email, or just create an issue in the GitHub project, as Brian pointed out earlier. With that, I want to thank you for your attention, and I hope you will consider Keptn as a CNCF sandbox project.

All right, thank you, Dirk. Thank you. So we're coming up short on time and we still have a couple of items that I want to go over today. Next up on the list is, it says, CNAB, OAM, OMS — any other specs to consider for simplifying app delivery. What this brings up is actually a much larger topic: one thing we need to do in SIG App Delivery is start cataloging these specs, whether it be the three mentioned or others that might show up. We don't have that yet — we're still new, a month or so into this process — but what we need to do next is solicit some volunteers to help catalog these specs. I won't ask for it here because not everyone is online; I'll send it out to the mailing list. Before we think about recommending any of these specs, let's create a catalog of them so we know what we're talking about, and I'll make sure we bring it up in our next meeting to show the status of that.

Next up: are there any other updates anyone wants everyone here to know about? I can start: we did move our meeting calendar. If you follow the CNCF calendar invite, our next meeting is on November 6, so keep that in mind. Two weeks from then, since we're moving to the first and the third, would be the 20th of November, but that will not happen — we're going to cancel that meeting because it falls during KubeCon + CloudNativeCon, so we will not be doing that one. Any other updates? Yes — and as Amy pointed out, it is already canceled; I just wanted to say it out loud. We'll follow up in December on the regular cadence, so there is only one meeting for the month of November. Yes.

I also wanted to highlight something else that was brought up in Slack. SIG App Delivery is tasked with a lot of different functions right now, and we want to make sure we are showcasing the projects in the community while also weighing that against the other tasks we're doing, like the documentation around the delivery definition. So we want everyone to know we won't just be doing demos; we will be doing other things as well. And if you have things you want to talk about, please update our agenda document and we'll get that prioritized and scheduled. That's all I have. Anyone else?
One more thing I want to add: you might have noticed that there are a lot of projects presenting in SIG App Delivery right now. I think it will still take some time before we can give formal feedback, because we are still talking with the TOC to formalize the process — what the next step will be, what the recommendation will look like, and so on. It's not finalized yet, so the projects that have presented in SIG App Delivery may still need to wait a while before all of this is finalized. There are conversations with the TOC going on right now trying to figure out how this whole process works. Okay. Anything else? If not, I will end this right now and we can go about our days. All right, sounds good. Thank you for showing up, everybody. We'll see you again on the 6th of November. Thanks. Bye. Thanks, guys. Thanks. Bye. Thank you.