Hello, this is Lei Zhang, and welcome to the SIG App Delivery community meeting. We cancelled the previous meeting due to KubeCon Europe, because our folks were speaking there, so we postponed several presentations to this time. The main topic of today's meeting is project presentations, and we are very honored to have two projects here. The first one is LitmusChaos, which is already a Sandbox project sponsored by SIG App Delivery, and today we are very happy to invite them to give a project update on LitmusChaos: what's new, what's the progress of their Sandbox onboarding, what are the new features, and what's the next roadmap. That's the first item we have. The second item is that we are very happy to have the Flagger project here, which is an automated progressive application delivery system, and it integrates seamlessly with GitOps systems like Flux or Argo CD. So we are very happy to have its maintainer here to give us the first presentation of Flagger: how it works, how it's designed, what scenarios it can be applied to, and how it fits into the landscape of CNCF SIG App Delivery. I don't want to waste time here because we do have two projects to present, so I will let the LitmusChaos folks take over from here to do their presentation, followed by Flagger. We hope every presentation is around 15 to 20 minutes, but it's not restricted, and we also have a very short Q&A after every presentation. Okay, I'm not sure who will present the LitmusChaos project here.

Hi everyone, this is Karthik. I think Uma is going to present about Litmus; he will just be joining us a few minutes from now. Could we go on with the next topic?

Okay, no problem, so let's just start with the Flagger presentation. I believe Stefan should already be in the meeting, right? Hello. Hi, nice to see you. Cool, okay, you can take over from here. Thank you.

Hello everyone, I am very glad to be here and present Flagger, a project I started roughly two years ago; it's now under Weaveworks and it's a Weaveworks-sponsored project. Let me give you a short overview of Flagger's capabilities. First, I'm going to talk about what progressive delivery is. Progressive delivery is a wider term that encapsulates canary releases, A/B testing, feature flagging, all these techniques that have a single goal, and that goal is to reduce the risk of introducing a new software version into production. There are a couple of ingredients to achieve progressive delivery, and most of them overlap with continuous delivery. First, you have to have a CI pipeline that produces immutable artifacts, which means you shouldn't be using latest image tags in production; everything should be immutable and versioned in some way, either through a Git SHA or a SemVer tag, for example, for Docker images. Another ingredient is that the CD pipeline should be designed in such a way that it does reconciliation. Most CD pipelines right now are just reacting to some event and only then applying the state. This is something that GitOps covers: you define your desired state, and that state is continuously reconciled, not only when, let's say, you do a Git push; something can change in the cluster at some point, and... who is playing around? Who drew that? Okay, I'm going to stop sharing. Can you kick out the person that's... who's that? Okay, let me try again.
Yeah, you can share again. Okay, second try. Okay, so let's say you have your CD pipeline ready for this kind of reconciliation. Then progressive delivery needs a smart routing service, something that can look at the traffic and route it dynamically between upstreams based on some properties of the traffic, like cookies, HTTP headers, weights, and so on. So this routing cannot be done by just, let's say, the layer-four CNI implementation; you need something at layer seven.

Hey, do you know who is... Yeah, you could check the attendees over there. I think we are being Zoom-bombed. That was never expected. Okay, I'll try one more time. Okay, so when we started working on Flagger, we set a couple of goals. Yeah, okay, I'm going to stop sharing. This is the worst. Yeah, I think you should password-protect the channel or kick whoever is doing it. I can remove the participants, actually, but I think there's one person I'm not very familiar with. Sam, can you identify yourself? Okay, so I will end the meeting for now, and please join us again. Okay.

We had a very similar problem during KubeCon as well. Yeah, it just so happened that when we were presenting... in the back of my mind I thought, is it happening again? Is it me that's causing all the stars to align very badly? Yeah, I don't know. I thought our session was a one-off, but it looks like it's continuing. It was very, very bad; they were just playing videos. Let's see. I think if that happens, it's probably best to talk and not to share anything. Okay, I found one thing to start with, so let me try this. It seems that I can set the meeting to stop anybody from annotating the presentation. I should be able to do that. Yes, let me try. So Stefan, are you still there? Yeah. So when you share a screen, you should choose "disable attendee annotation"; there is a setting over there. It says I cannot start screen share while the other participant is sharing, or something like that. I mean, when you share the screen, you can choose the setting named "disable attendee annotation"; you can try that. So I will stop sharing my screen, and you can share your screen and then choose that setting. Okay, let me try. I have "share computer sound" and "optimize screen share", whatever that's about. Maybe under "more": disable attendee annotation. Okay, can you find that? Yeah. Okay. Cool.

Where was I? Okay. On progressive delivery: you need immutable artifacts, a CD pipeline that applies the desired state, some kind of L7 router (it can be an ingress controller or a service mesh, I'll talk about that later), and of course you need something that gives you observability into the network. Not only performance stats, which, let's say, all layer-seven load balancers will give you, like requests per second, error rate or latency, but you could also improve the observability by exposing business metrics, and those metrics could later be used in your delivery pipeline. When we started Flagger, we set some goals, the major one being to give developers confidence in automating production releases. Like, deploying on Fridays is no issue: something will roll it back if it fails, and that something should be a thing that runs on its own, fully automated.
And in order to give this kind of confidence, we decided to expose some fields inside custom resources for developers to be able to control the blast radius, define the validation process, and run their own integration tests or any kind of automated testing as part of the deployment. And for those that require manual approval for production releases, we also added manual gating to the whole process. Another goal is to write as little YAML as possible; as you know, in Kubernetes everything is in YAML, so we want to trim that down when you define your delivery policy. And of course, we want to manage the whole process from a Git repo through GitOps.

So Flagger is a Kubernetes operator. It controls traffic and allows you to decouple the deployment of an application from the actual release process. Flagger implements a couple of deployment strategies that you can choose based on the type of application you want to deploy. One deployment strategy is a canary release using progressive traffic shifting, and this works great for applications that expose HTTP APIs or gRPC APIs. Another deployment strategy is A/B testing, and this is used for user-facing apps that need session affinity. I'm going to explain why this is needed. Let's say you want to do a canary release with traffic shifting for a front-end app. What traffic shifting means is you set a percentage of your users and redirect those to the new version. But if your app has static assets, like, let's say, JavaScript or HTML, and also an HTTP API or gRPC API on the backend, and you don't pin a user to a specific version, then they can get, let's say, the JavaScript asset from version one and the HTML from version two, and you can see how these two cannot work together. So for this type of app, you have to have a way to pin users to a specific version, and that's possible through HTTP headers or cookies, by defining a regex or a value that matches those. Another strategy is blue-green with traffic mirroring. What this does is: users interact with, let's say, version one, and all that traffic is cloned and sent to version two, but the response is not returned to the user. So the user is not aware that all their actions are basically duplicated on the two versions. And this works only for idempotent APIs: if you use traffic mirroring for something that writes to a database or makes a transaction, then you'll get duplicated data and so on. So this works great for things like a machine learning model that you want to test, or something like batch processing. And finally, there is the classical blue-green with a traffic switch, where you spin up version two, run the integration tests, run a load test on it, determine that the version is okay, and then switch the whole traffic at once from v1 to v2. This works great for stateful applications and legacy apps.

How does the canary deployment strategy work? You have, let's say, a deployment running in your cluster at version one. You apply a change to that deployment; let's say you change the image tag to version two. What Flagger does, instead of letting Kubernetes just do the rolling update of that deployment, is spin up version two as a different deployment and slowly start to route traffic towards it. And while it does that, it also measures latency, error rate and other things to determine if the new version is okay and respects your KPIs.
And if that happens, then in the final step it does a Kubernetes rolling update on the old version. Once that is finished, it waits for all the traffic to go back to that deployment, then it scales the canary release to zero. A/B testing works the same, except that instead of using a traffic weight, we use a user segment, and only those users, let's say those that have an insider cookie or header, are redirected to version two. Based on that traffic, the decision is taken to promote version two or roll it back. And blue-green is the same but without any production traffic: you just run your conformance tests and load tests, and if everything goes okay, you do the switch, and then your users will be interacting with the new version.

Okay, so how do you set up one of those strategies? There is a custom resource called Canary. Inside the Canary you can set a bunch of things. You can tell Flagger where the deployment is, what the deployment name is, whether that deployment has a horizontal pod autoscaler or not, and you define what ports your application exposes. Then you can define how the canary analysis should be run, and there are a bunch of things you can specify, like metrics, alerts, webhooks, header matching if you're doing A/B testing, and so on. Based on this definition, Flagger generates a bunch of objects. If you wanted to do a canary setup manually, you would have to duplicate all your definitions: you'd have to have, let's say, two deployment definitions, two horizontal pod autoscalers, two ClusterIP services and so on, and then you'd have to add service mesh objects or ingress objects for each version. If you are using Flagger and the Canary definition, then you specify only your deployment and your horizontal pod autoscaler, and Flagger will generate all these objects for you, including the Kubernetes ClusterIP services, service mesh objects if you are using a service mesh, or ingress objects if you are using an ingress controller. So this is how Flagger simplifies the setup of a canary release a lot. It also allows you to move from one service mesh implementation to another, or from one ingress controller to another, without having to change anything inside your deployments.

In terms of traffic management, Flagger works with a couple of service mesh implementations: Istio, Linkerd and App Mesh. If you are not using a service mesh and you only want to do canary releases for apps that are exposed outside the cluster, then you can use Flagger with an ingress controller like Contour, Gloo or NGINX, and two weeks ago Skipper also got integrated into Flagger.

Okay, how does the validation process work? Flagger comes with two built-in metric checks based on Prometheus. If you install Istio or Linkerd or any kind of service mesh or ingress controller, these proxies will expose two metrics. One is the request success rate: out of all your requests, what was the percentage of errors, let's say 500 errors. The other is request latency: you can determine, let's say, over the last minute, what the average request duration of your users was. Based on these two metrics, you can set up KPIs. You can also define custom metrics, and we'll see an example later on. And you can also specify webhooks that will call into your integration testing platform, run load testing and so on during the analysis. And of course, Flagger looks at the Kubernetes objects and queries the deployment health status, based on liveness and readiness probes.
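For reference, here is a minimal sketch of what such a Canary custom resource can look like, assuming Flagger's v1beta1 API and a hypothetical app called podinfo; the field names follow the upstream conventions, but treat the concrete values as illustrative:

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # the deployment Flagger should take over
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # optional horizontal pod autoscaler reference
  autoscalerRef:
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    name: podinfo
  # ports the application exposes
  service:
    port: 9898
  analysis:
    interval: 1m      # how often the checks run
    threshold: 5      # failed checks before rollback
    maxWeight: 50     # max traffic percentage shifted to the canary
    stepWeight: 10    # traffic increment per interval
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99     # minimum % of successful requests
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500    # maximum latency in milliseconds
        interval: 1m
```

From this single definition, Flagger generates the primary and canary deployments, the ClusterIP services, and the mesh or ingress objects, which is the YAML-trimming goal mentioned earlier.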
So for metrics, you can define custom metrics. Besides these two, the request success rate and request latency, if you want to extend the metrics to something else, let's say your application exposes some custom Prometheus metrics, or you want to use other things that your service mesh exposes, you can create an object called a MetricTemplate. There you can define a Prometheus query, a Datadog query or a CloudWatch one, and Flagger will call into these services, run the query, and based on the result decide to move forward with the canary release or roll it back.

In terms of alerting, Flagger implements four providers: Slack, Microsoft Teams, Discord and Rocket.Chat. You can configure more than one provider for a canary release. Let's say your SRE team is using Slack on a particular channel and your dev team maybe is using Discord or something else; with Flagger you can funnel all these events to the right team no matter what provider they use.

In terms of testing, there is a webhook section where you can configure Flagger to call into your services at different stages of the canary release. For example, you can run Helm tests before you expose the new version to the live traffic of your users: Flagger will call into a Helm test service and run your Helm tests, and if those are successful, it goes to the next stage of the canary release, called rollout. During that stage, for example, you can start a load test. Why is a load test needed? Because you can deploy at any point in time, but maybe you don't have live traffic at that moment, so a load test is there to generate traffic so Flagger can see metrics and decide what to do. For load testing there are a couple of implementations: hey is a load tester for HTTP, and there is also one for gRPC. For conformance testing there is support for Helm tests and bash tests, but you can also implement your own test runner and tell Flagger to call into that.

In terms of manual gating, there are several gates you can set during the analysis. For example, let's say you push a change to your production system, but you don't want that change to be automatically deployed or tested. There is a confirm-rollout webhook, and Flagger will ask you: hey, I've detected a change, do you want me to start the analysis or not? After the analysis is over, it can also ask you: do you want to do the final stage, the promotion? Or, at any point during the analysis, you can tell Flagger to roll back even if there are no errors. All these things happen through webhooks.

In terms of integration, Flagger, being a Kubernetes controller, works great in a GitOps pipeline. Let's say you change something in your deployment spec and commit that to Git; then a GitOps operator like Flux, Argo CD or Jenkins X will reconcile that object on the cluster, Flagger detects the new version, and it starts the canary analysis for you.
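As a rough illustration of the custom metrics and manual gating just described, here is a sketch assuming Flagger's v1beta1 API; the Prometheus address, the query and the gate URL are placeholders, not values from the talk:

```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: error-rate
  namespace: test
spec:
  provider:
    type: prometheus
    address: http://prometheus.monitoring:9090   # placeholder address
  # placeholder query: percentage of 5xx responses over the last minute
  query: |
    100 * sum(rate(http_requests_total{status=~"5.."}[1m]))
        / sum(rate(http_requests_total[1m]))
---
# Referenced from the Canary's analysis section, alongside a manual gate:
#
#   analysis:
#     metrics:
#       - name: error-rate
#         templateRef:
#           name: error-rate
#         thresholdRange:
#           max: 1            # roll back if errors exceed 1%
#         interval: 1m
#     webhooks:
#       - name: ask-for-confirmation
#         type: confirm-rollout   # analysis waits until the gate is open
#         url: http://flagger-loadtester.test/gate/check
```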
Some things for the roadmap. Maybe you've heard that Microsoft created a new service mesh implementation called Open Service Mesh, based on the SMI spec. Flagger implements the SMI spec for Linkerd, so it should be fairly easy to extend support to Open Service Mesh. We are looking at adding more metric providers; for example, maybe you want to look at Stackdriver or InfluxDB for custom metrics. I'm pretty happy about the Kubernetes Ingress v2 API: it has all the things Flagger needs to implement all the strategies. At some point I'm guessing the ingress controllers will be switching from v1 to v2, and I want Flagger to be ready for that switch. Other things on the roadmap are using Kubernetes jobs for performance testing and having a dedicated service for manual gating. There are a couple of workshops for each service mesh implementation, and here are the links to the docs and the repo. Any questions?

Yeah, I have several questions. Very wonderful presentation. I'm very interested in the roadmap of Flagger, because there are several things I want to ask. I know you have already integrated with SMI. Does that mean I can now use, for example, Microsoft's Open Service Mesh directly with Flagger, or do we still need some work to make that happen?

So SMI has different versions. Linkerd is on v1alpha1; Open Service Mesh will be on v1alpha3. There are breaking changes between these APIs, so what's going to happen is Flagger has to implement every version, and then when you install Flagger for Open Service Mesh, you have to tell it to use v1alpha3 as the version.

So that means the first thing we need to do if we want to support OSM is to make Flagger support the latest version of SMI. There's also another question in the chat box about Flagger compared to Argo Rollouts. Can you explain a little bit about that?

Yeah, Argo Rollouts is very different from what Flagger does. Flagger works at the deployment level; Argo Rollouts works at the replica-set level, so it's an implementation on top of the replica set. That's very different in terms of how you define all these things. For example, with Argo Rollouts you cannot use your deployments; you have to change from Deployments to Rollouts, or however it's called. One of the reasons we made Flagger reference a deployment, the same way a horizontal pod autoscaler references a deployment, instead of embedding the whole deployment spec inside of it, is that at the beginning we wanted Flagger to be able to take over applications while they are running in production, and I've seen a lot of users doing exactly that. Having to change all the deployment specs, all the Helm charts, which maybe you don't even control, wasn't an option for us. I think that's one of the main differences. Of course there are lots of differences between them, but the main one is that Flagger, like the horizontal pod autoscaler, references something; it doesn't contain the actual definition of your app, the deployment, the service and so on. Let me look at the other questions.

Okay, I think that is what we have for the Flagger folks. Again, thank you very much for joining the meeting and for the presentation, and we're happy to see what's next for Flagger. We're excited about this roadmap, especially the Ingress v2 you mentioned. I personally think it's really a big change for the community, because right now we have so many ingress controllers, and that makes integrating with them very hard. So I'm also trying to look at what direction Ingress v2 is going, and we're happy that Flagger has it on its roadmap. Okay, so we are pretty much done with the first project presentation, and next we will have the LitmusChaos folks do an update on their project, especially after it has been donated to the CNCF as a Sandbox project. So what's next, what's the current status, and are there any new features we should be aware of? Okay, Uma, please take over from here.

All right, yeah, thank you Harry.
Let me first figure out the annotation setting and disable attendee annotation. Okay, so Zoom-wise we are safe now. Okay, that was a great presentation from Stefan; I'm going to look it up, especially since Litmus really operates in that space. So what I thought is to give a quick presentation on what we achieved in the last three months. I'll show you actual updates; I don't have detailed slides, this is just a ten-minute update.

First of all, we started using the Sandbox logo, so that's great. I think the community is really growing after we got into Sandbox; there are more people trying it out, so that's definitely great news, and a lot of value is being added to the project itself. So thank you to the SIG App Delivery leads, who did detailed due diligence. Our mission statement continues to be finding weaknesses in the implementation of either the Kubernetes platform or the applications deployed on it; resilience and reliability are of foremost importance for both these personas. Our medium to longer-term mission is also to make chaos very easy for developers to use, as an extension of their development process or within their CI pipelines. But right now we're really concentrating our short-term roadmap on helping SREs do chaos end to end and get some validation of their existing operations and setups.

On contributions, MayaData continues to be the prime sponsor, but the great news is that there's a lot of community embrace happening. Intuit has contributed their work on AWS chaos to Litmus, Amazon itself is pitching in (one contributor from Amazon is helping with docs), and RingCentral is helping with Helm chart management. So we actually had the problem of how to deal with so many various types of contributors; we did take some inspiration from SIGs, and I'll talk about that. One of the primary assets of the project is the ChaosHub: we continue to get appreciation for having all these experiments in a central place. In the last three months we grew the project usage by almost 100%: we were at about 50,000 experiment runs when we submitted the Sandbox application, and within a few months the usage has actually doubled. And we of course added more experiments from the community.

So I just put up one slide which I will use as a guide, and then go to blogs and other material to show things. I am not talking much about what our own team did, but rather what the community did and how the project is actually growing inside the community. First of all, we now have a dedicated community manager to help with community questions, run better Slack communication, be available to run more meetups, and go present Litmus at other meetups. He's here as well, so welcome to the project. We added four new experiments, and there are many more in the making. And one of the good things that happened was the Okteto integration. Okteto provides developers with Kubernetes environments at the namespace level, and they have taken Litmus and provided it as an option to introduce chaos. That actually opened up a very good new persona: we always needed CRDs to be installed, so we needed admin privileges to run Litmus, but this integration removed that strong requirement and gave us an opportunity to rethink the personas. I'll talk more about that.
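For context, running a Litmus experiment revolves around a ChaosEngine custom resource that binds an application to experiments from the ChaosHub. A minimal sketch, assuming the litmuschaos.io/v1alpha1 API and a hypothetical nginx deployment; the labels, service account and duration are illustrative:

```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: nginx-chaos
  namespace: default
spec:
  # the application under test
  appinfo:
    appns: default
    applabel: "app=nginx"    # illustrative label selector
    appkind: deployment
  # service account with permissions to run the experiment
  chaosServiceAccount: pod-delete-sa
  experiments:
    - name: pod-delete       # experiment pulled from the ChaosHub
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "30"    # seconds of chaos injection
```

The admin-privilege concern mentioned above comes from the fact that installing the ChaosEngine and experiment CRDs is a cluster-scoped operation, even though a resource like this one lives in a single namespace.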
That Okteto integration was a great update. We also have a new website, which of course follows the guidelines, and along with the new website we got a contribution from an open source contributor: a new mascot for chaos. That was a surprise. It's an SRE chaos bird trying to do experiments. The community was pretty geeked about it; it was a minor thing, but it was pretty awesome that it came from the community.

We also updated the ChaosHub with some more easy-to-use features. Apart from easy search, there was one thing the community wanted: "your experiments are great, but I need to tweak them afterwards, and for that I have to download them; is there a way your hub can provide an integrated editor?" So all these experiments can now be updated in the local cache in the browser: you can just update them, tune them and then apply them. This is a great way to make using the experiments pretty easy. You have copy, paste, replace, a regular YAML editor, but it's available in the hub. Although it's a small technical enhancement, it makes the user experience much easier. This was another thing requested by the community that we then added.

We also, very organically, got engineers from Azure and then someone from the Rancher community who tested all the Litmus experiments on their platforms and sent PRs so we can add these as certified platforms. That was a great addition. And Container Solutions have done a good study on open source chaos engineering tools; this was good information for me as well on where the projects stand. Both of the CNCF projects are covered, Litmus and Chaos Mesh, and they both come out with almost the same feature scoring, but it's great to see chaos engineering being covered as part of the CNCF ecosystem, with both projects scoring pretty high. So that was one good thing for the community.

The other thing: we run monthly community meetings, and we mentor some teams coming in across different areas. We took inspiration from SIGs, as you can see in Kubernetes SIGs, and I think some other projects are doing something similar. So we created the concept of SIGs within Litmus, and we try to have the SIG meetings once a month or on demand; recently the docs SIG as well as the deployment SIG have had some meetings. How we do that: we did not create separate projects, but if you go to the teams in the litmuschaos GitHub organization, we have basically defined teams, and whoever wants to come and participate can be part of them. This is a pretty simple way of telling the community that there are multiple groups: you can choose observability, deployments, integrations or documentation, wherever you are interested. It looks like chaos is applicable to almost all areas across the CNCF landscape, so there's a good amount of interest coming in. This is one way we're able to segregate the broad interest into smaller groups so that more contributors can come in and drive things themselves. The idea is that somebody will establish themselves as a major contributor and driver within that group, and they will add to the roadmap prioritization list of the entire project. So that's another thing we started, and it's been received pretty well by parts of the community. And the other thing: we are getting a lot of new queries.
Air-gap support was requested by the community (I think Shantanu is here), so we added some additional help there. ARM support was another top request; we added that as well.

In terms of roadmap, the team has been very busy developing the Litmus Portal, which I'll show; it's not yet released, but an alpha is going to come out later this month. The idea of the Litmus Portal is that it's not just about experiments against Kubernetes resources or applications: you should be able to create and orchestrate complex chaos workflows in order to find the deep-level weaknesses in your operational deployments. So a good portal is needed, and easy-to-use monitoring of what's happening to my chaos workflow is very, very important, so a lot of work is going on there. And we have embedded the whole workflow concept into the project: what I mean is that an Argo workflow wraps all these experiments, and we call that a chaos workflow, but it all gets done in a declarative way, with UI support in the front end as well.

We've also been getting a lot of inbound interest along the lines of "we need Grafana dashboards": chaos is great, it's very easy to use, but how do I monitor it? And for the many existing experiments we got more updates, as well as more queries and offers to help, around having more granular tunables within the ChaosHub. So that's roughly the update. These are some of the Grafana dashboards that are in the making; they'll go out in this release (we have a monthly release cadence). The whole idea: with Prometheus metrics and Grafana there's a lot of information, and when something happens you can easily know. But with Litmus you are willfully introducing these faults, so you need to know: what was that problem, did it happen during chaos injection or did it occur naturally? We needed to give that perspective of the willful, simulated fault versus the organic one. So this is an example: if you are running node CPU chaos, you can see that the chaos was introduced (a node chaos, most likely) and you can see that node's utilization increase. In this case, once the chaos was introduced, there were some issues with that node: the utilization was not coming down, so you could say we found an issue there. A lot of things can be interpreted as a weakness or, you know, as a strength. And this is an example with the well-known microservices demo application from Weaveworks, Sock Shop. We took that and added chaos metrics and chaos event annotations to those graphs. This is actually in the process of being contributed by the community, and they are also doing a lot of new development in the SIG observability area, so that's really heartening to see.

The other thing, which I think we just started: we probably need to work with the Keptn team. We did not find a lot of bandwidth yet, but we did research Keptn; the Keptn team approached us, and there were some other community members asking: you are able to do chaos, but how do I interpret the results? Monitoring is great, but we need to see whether, when a fault was introduced, some SLOs fell off, right? That's the idea. We're trying to use not all the features of Keptn, just the quality gates feature, which is very nice. So end users can use both Keptn and Litmus together, just like Litmus and Argo.
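As a rough sketch of what wrapping a Litmus experiment in an Argo workflow can look like, assuming Argo's v1alpha1 Workflow API and the hypothetical nginx ChaosEngine from earlier (suitable RBAC for creating resources is assumed):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: chaos-workflow-
  namespace: litmus
spec:
  entrypoint: chaos-steps
  templates:
    - name: chaos-steps
      steps:
        - - name: run-pod-delete
            template: pod-delete
    # each step declaratively creates a ChaosEngine
    - name: pod-delete
      resource:
        action: create
        manifest: |
          apiVersion: litmuschaos.io/v1alpha1
          kind: ChaosEngine
          metadata:
            generateName: nginx-chaos-
            namespace: default
          spec:
            appinfo:
              appns: default
              applabel: "app=nginx"
              appkind: deployment
            chaosServiceAccount: pod-delete-sa
            experiments:
              - name: pod-delete
```

More steps can be chained to build the complex, multi-fault chaos workflows the portal is meant to orchestrate.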
So Argo workflows are used to create chaos workflows from chaos experiments, and Keptn can be used to integrate quality gates as well. We see that a lot of CNCF projects are coming together, becoming broader solutions. We're also trying to focus on this: if we are integrating with something, it had better be a CNCF project, so that people not only adopt it, but there is more community engagement as well.

The other thing we are dealing with right now is multi-tenancy. It looks like Kubernetes has introduced good features with respect to multi-tenancy, and we also did a small survey among various forums (somebody here can give more feedback), but at least what we learned is that many team members sharing a single Kubernetes cluster is not an uncommon scenario. What it really means is that they expect: whatever Kubernetes environment I need for development, I will have it on a given Kubernetes cluster, within a namespace. Now, we did bank heavily on CRDs in the entire architecture, and CRDs need admin privileges. So we have now introduced an admin mode and a namespace mode (there is something called standard mode also). The whole idea is that we are working towards making Litmus usable by developers or SREs who may not have admin privileges, so that it's pretty easy to use. There are some thoughts about developing a ConfigMap-based approach rather than CRDs, but the easy option is: let the admin install the CRDs, and later developers and other people can come and use Litmus within their namespaces. There's still a lot of discussion going on in the community, but I thought that's an interesting update to share.

So overall, I think we are progressing very well and getting a lot of support from different communities within the CNCF. Thank you very much, and we're looking forward to getting a bit more advice from various projects as we get more popular. That's all I have, but I'm happy to take any questions.

Well, thank you very much, Uma. I'm very happy to see the progress that LitmusChaos has made after joining the CNCF, and we're happy to see the positive feedback from the community and the growth in the usage of the project. We don't have a lot of time for questions now, but I'd like to check if anyone has any questions for Uma to answer.

I will post the slides in the meeting notes.

Yes, thank you. Please link the slides in the community meeting notes so other folks who could not join the meeting can still check what went on and what you presented. We really appreciate the presentations from Uma and Stefan about Flagger and LitmusChaos. We will try to engage more people to present their awesome projects, regardless of whether they are part of the CNCF or not, or whether they have any intention to donate them or not. We are trying to make this a more collaborative and open community for people to talk to each other and share knowledge about application delivery and application development. I hope we will have more projects presenting in upcoming meetings. If you have any idea or any project you want to see, I will be very happy to talk with you, and I will try to reach out to the projects as much as I can to bring these awesome ideas to present in the meeting. Okay, so I think we are pretty much done for today's meeting.
And again, really thank you everybody for joining this meeting, and I'm very much looking forward to seeing you folks next time. So bye-bye. Thank you, bye-bye. Thank you very much.