to get off the phone. We're on. Hello. Good morning. Morning. Hello, guys. Morning. Can you hear me? Yes. Morning, all. Sure.

So I have an issue: I cannot record the meeting because I don't have the permission. It's recording already — you're just fine, it auto-records into the cloud. No worries. OK, that's cool. So let's start the meeting. And I didn't. Yes. Also, I was just going to say I didn't believe that I'm on the call this Saturday. OK, I see.

OK. So welcome, everyone, to the SIG Application Delivery meeting of October 9, 2019. I hope you have already opened the meeting notes, because today we have, I think, three projects to present to SIG Application Delivery. So I don't want to waste your time — we can just begin the project presentations. The first project listed on the agenda is the Argo proposal, so I will hand the lead over to the Argo presenters.

OK. OK, thank you. We can hardly hear you, by the way. OK, is this better? Not really. I guess our microphone. I think you need to put the microphone closer. OK, it is already pretty close now. No, not really. Maybe we'll let somebody else go first if you want to figure it out, but it's really hard to hear, and I think it will be hard to have a discussion. Or you can try it once more. Let me get another microphone. Excuse me, I'll mute myself. Bringing in another microphone. How about now? Can you hear me better? That's perfect. That's great. OK, fantastic. I'm glad that worked. OK, can you see my presentation? Yes. Go into presentation mode. Hi, it's Alexis just joining. I'll go on mute. Hi, good morning. Good morning, Ed.

OK, so this is the Argo proposal, essentially for CNCF incubation. Thank you for giving us the opportunity to present. First, what is Argo? Argo is a set of Kubernetes-native tools for running and managing jobs and applications on Kubernetes. Our tagline from the beginning of the project has been basically "get stuff done with Kubernetes." We saw at the time — this was about two years ago — that a lot of people were starting to experiment with Kubernetes, creating Kubernetes clusters. Of course, some people started running scalable microservices on it right away, but a lot of other people were trying to figure out: what do we do with Kubernetes? And we thought that in addition to running long-lived applications, which Kubernetes supported from the beginning, things like batch jobs, workflows, and event-based processing were also very important modes of computing, and that any large application would likely use a combination of these techniques. So we wanted to create a toolset which would make it easy for people to create, orchestrate, and manage these kinds of more complex applications. We focused on workflows first, then we did events, and then finally we did the continuous deployment, or CD, aspect of the project.

So the Argo project consists of three main components. The most mature of those is Argo Workflows — that's what we started with — which provides a container-native workflow engine. Then there's Argo CD, which supports declarative GitOps continuous delivery. A sub-component of Argo CD is Rollouts, which provides additional deployment strategies for applications deployed with Argo CD or other deployment tools.
And then finally, there's Argo Events, which is an event-based dependency manager, so that you can integrate events to trigger workflows or deployments, or to generate messages that can be processed by long-running services.

The Argo community at this point is fairly large, and it's still growing rapidly, particularly right now on the Argo CD front. You can see many recognizable brand names here, like Adobe, BlackRock, Google, NVIDIA — they were all very early adopters of Argo, particularly starting with Workflows. And then more recently SAP, Ticketmaster, Tesla, Volvo, et cetera. And obviously Intuit is also a big user of Argo.

These are some of the community use cases and what some of our users are saying about Argo. A lot of people obviously do things like batch processing or deployments. A lot of people start with one tool, and then as they develop their applications they integrate other tools from the Argo family, so there are quite a few people using multiple Argo tools for their applications today. And these are just some examples of the tweets and so on happening in the community.

At this point, the Argo project has about 5,000 stars and a total of 240 different individual contributors, et cetera. I guess the thing that I'm most proud of is that as the projects mature, community contributions have been increasing more and more. For something like Argo Workflows, which is a bit more mature, about 60% of the contributions are now coming from the community — if you look at the pull requests, about 60% are from the community. And these contributions are not just bug fixes; there are major features, complete with UI and CLI integration, and so forth.

A brief history of the Argo project: the project was launched about two years ago, in August of 2017. Shortly thereafter, Applatix was acquired by Intuit. At Intuit, we started the Argo CD project, initially to meet Intuit's internal needs. And then in May of 2018 — actually earlier — we had always wanted event-based integration with Argo, and we had opened a GitHub issue on Argo to discuss what this type of integration might look like, when BlackRock approached us and said: hey, we use Argo, we also want event-based integration, and we've actually been working on one for the past two months, so let's get together and discuss it. We really liked what they had done, and they decided it would be great to contribute Argo Events to the Argo project. So in May of 2018, Argo Events was contributed by BlackRock as a part of the Argo project.

Then in June 2018 we got our first big user of Workflows for the ML use case, when Kubeflow decided to adopt Argo as the workflow engine — it's basically the workflow engine behind Kubeflow Pipelines. They did implement their own UI on top, but it's Argo underneath. And then in July 2018 we put Argo CD into production use at Intuit, and today it's deploying and managing thousands of applications and namespaces at Intuit. In November, we decided to expand on the Argo CD, or continuous delivery, side by introducing additional deployment strategies which the native Kubernetes Deployment is lacking, such as blue-green and canary, and we launched this feature at KubeCon in May.
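To make that concrete, here is a minimal sketch of an Argo Rollouts resource with a canary strategy — a Rollout is essentially a Deployment with an added strategy section. The names and image below are hypothetical, and field details vary by Rollouts version:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout          # hypothetical name
spec:
  replicas: 5
  selector:
    matchLabels:
      app: example
  template:                      # same pod template a Deployment would use
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: app
        image: example/app:1.0   # hypothetical image
  strategy:
    canary:                      # blueGreen is the other supported strategy
      steps:
      - setWeight: 20            # shift 20% of replicas to the new version
      - pause: {}                # wait for manual promotion before continuing
      - setWeight: 60
      - pause: {duration: 600}   # then wait 10 minutes before full rollout
```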
And Argo Rollouts is actually widely used at Intuit — particularly because we have some legacy applications, the blue-green deployment is very popular. But we're working on a new feature — right now we're calling it Argo Experiments — which allows you to run multiple versions of an application concurrently and do experimental comparisons between them, and also use that data to automatically determine which version goes to production.

And of course, in May 2019, Intuit was selected as a CNCF Top End User, and the Argo project, I think, had a big role in that; without the Argo project, something like that probably would have taken longer. Kubernetes use at Intuit has been growing very rapidly. Right now, Intuit consists of four major business units, along the lines of some of our main products like TurboTax and QuickBooks, plus a central team as well. All four of these major business units have adopted Kubernetes and are using Argo as well. So the adoption of Kubernetes and Argo at Intuit has been very rapid.

OK, now to get into some more of the details. We've been discussing Argo as a collection of three main projects — how are they related, and what makes it a single toolkit? As I mentioned before, Argo makes it very easy to combine workflows, events, and applications to create more complex applications — basically to orchestrate the jobs and services related to these applications. If you look at events, for example: events can be used to trigger workflows. This was the original use case that motivated BlackRock to create Argo Events and contribute it to Argo. Workflows can obviously also generate events. Events can be converted into messages, which are processed by long-running applications, and those applications can also generate events. Similarly, workflows can trigger deployments, and deployments can also trigger workflows. We think that in a large, complex application, you'll really be using multiple modes of computing: you're not just going to have long-running services, it's not just going to be event-based, you're going to have some background async or batch processing components as well. So the goal of the Argo project is really to create a single toolkit which makes it easy to manage these kinds of complex applications running on Kubernetes.

A little bit more detail on Argo Workflows. What is Argo Workflows? All of the Argo components are implemented as declarative Kubernetes resources — that's what makes them Kubernetes-native. It allows us to build layers of abstraction in a very declarative way, starting with containers and pods, then deployments and services, applications, and so on. With Argo Workflows, each step in the workflow is basically a pod, so it maps very well to the Kubernetes-native abstractions. It does mean that a workflow step is fairly coarse-grained — a container takes a little bit of resources and time to start up — so it's not designed for fine-grained, fast-turnaround types of workloads. It also allows you to specify workflows in two different ways: you can specify them in a step-based fashion, with sequential steps and fork-join parallelism, or you can specify arbitrary DAGs. The DAG form is particularly popular with folks using Argo Workflows for machine learning, for example.
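As an illustrative sketch of the DAG form (the names, image, and parameters here are hypothetical, not from the presentation), each task below runs as its own pod, ordered by its dependencies:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: example-dag-     # hypothetical
spec:
  entrypoint: main
  templates:
  - name: main
    dag:
      tasks:
      - name: prepare
        template: echo
        arguments:
          parameters: [{name: msg, value: "prepare"}]
      - name: train
        dependencies: [prepare]  # runs only after prepare completes
        template: echo
        arguments:
          parameters: [{name: msg, value: "train"}]
      - name: report
        dependencies: [train]
        template: echo
        arguments:
          parameters: [{name: msg, value: "report"}]
  - name: echo                   # each invocation of this template becomes a pod
    inputs:
      parameters:
      - name: msg
    container:
      image: alpine:3.7
      command: [echo, "{{inputs.parameters.msg}}"]
```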
Argo CD is declarative GitOps continuous delivery for Kubernetes. In terms of the definition of terms that SIG App Delivery is currently working on — you can see a picture of that on the right side — you have an application, which is declaratively specified, most likely residing in a Git repo. And there are rollout strategies, which take those specs and realize them into running workload instances — services, deployments, rollouts, pods, et cetera — on a target platform, which in our case is Kubernetes. And you're able to do all of this in a very declarative, Kubernetes-native way.

Argo Events — this is the original use case that BlackRock created it for — allows you to respond to various external events, calendar or time-based events, as well as internal events. So, for example, one workflow, when it completes, may create an artifact, and the creation of that artifact may trigger another workflow, which completes another stage. As workflows become more complicated, it becomes unwieldy to maintain just one huge monolithic workflow, so Argo Events allows you to decompose those workflows into smaller workflows, which you stitch together using events to create a completely automated system for processing data — or, in BlackRock's case, for doing financial risk modeling and other types of analysis.

Kubeflow is another good use case for the Argo project. They use both Argo Workflows and Argo CD as part of their pipelines, for GitOps and machine learning. As I mentioned, they built their own UI, so this doesn't look like the Argo UI, but the underlying component is Argo. Seldon likewise uses Argo Workflows and Argo CD to build and deploy machine learning models as part of their product — another machine learning use case.

When we started the project, we saw that there were obviously other existing workflow engines, especially in the CI space. But since the CI space was already well established, we decided to build a more general-purpose workflow engine and target it at other emerging applications at the time — which included Kubernetes, obviously, but also a lot of folks doing ML and AI. We saw that there wasn't a great workflow engine for ML and AI, so we made a conscious effort to target those particular areas. Some people were trying to use Apache Airflow, for example, which a lot of people use for data processing purposes, but the ML folks were really not using it. What we found is that most people actually prefer something more Kubernetes-native and a little more configurable, like Argo Workflows, over Airflow in the ML space. So at this point we have major platforms like Kubeflow, NVIDIA's MagLev project, as well as Seldon, and internally Intuit has decided to standardize on Argo as the toolkit for doing ML and AI.

Some alternatives to Argo: these are the projects that we most often get compared with, and I describe them to provide some context in terms of how the Argo project may fit into the CNCF landscape. In terms of workflows, we most often get compared with something like Apache Airflow, which is originally from Airbnb. They have a wide range of plugins for big-data-processing types of applications, so it's very popular in that space, but it's not Kubernetes-native. And what I've heard recently is that a lot of people are doing ML today on Kubernetes — something like 75% are using Kubernetes for ML if they're using containers.
So we found ML to be a large growth driver for Argo Workflows. Another project that we often get compared with is Flux, from Weaveworks — they coined the GitOps term. The approach to the way we do CD is of course a little bit different: we designed Argo CD more from an enterprise perspective — managing many clusters, multi-tenant types of environments — whereas where Flux really excels is when you want to deploy something very quickly, you have full access to the cluster, you want to bootstrap the cluster. So there are quite different use cases driving the core design, although you could obviously use either project for application deployment as well as cluster bootstrapping. Another project that we sometimes get compared with, particularly on the Argo Events side, is Knative from Google. Obviously Knative is more of a FaaS system rather than a workflow system; we're focusing more on coarse-grained workflows and on having good integration between events, workflows, and long-running deployments. So those are some comparisons. And that's pretty much it. Any questions?

Yeah, I had a few. Okay. First of all, thank you for a fantastic presentation — very impressive technology and a very lucid presentation. I had a few questions. One is: these workflows that are running inside this technology — how aware are they that they're part of it, and how do they interact with the Argo framework? That's one question. And the other one is about the data that flows between these various workflow elements: how much of that is provided by Argo, and how much do you just leave external, in storage or queuing systems or whatever?

Oh, okay. Yeah, so one of the great things about Argo Workflows and the whole suite of Argo tools is their native integration with Kubernetes. What this means is that you can use all of the Kubernetes features — volume mounts, or being able to create other Kubernetes resources — natively from inside Argo Workflows. So you could include a Kubernetes spec, in many cases almost as-is, as a sub-component, and create that resource while running the workflow, or you could create a persistent volume and mount it, and so on. Argo also provides a native feature called artifacts, which automatically packages the output of one step in a workflow, puts it in something like S3, and then automatically imports it as an input to another step in the workflow. For ML types of applications, where you're doing a lot of data processing and have very large data sets, oftentimes folks will store the data in a cloud file system like EFS or NFS or something like that, and then it's just mounted from Argo Workflows. Argo workflows can be very sophisticated: you could have an Argo workflow that actually spins up complete services. We have some workflows, for example, that spin up a MySQL database, and then while running experiments actually update the database, and then at the end issue queries to generate a summary — the output of running the workflow is the summary, but in the process of running it, you've created all these other services which you need to process the workflow. So, does that answer your question? Yes, it does. Thank you.
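Here is a minimal sketch of the artifact passing described in that answer, modeled on the standard Argo Workflows examples. The template and file names are hypothetical, and the artifact store (e.g. S3) is whatever the workflow controller is configured with:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-demo-    # hypothetical
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: generate
        template: generate
    - - name: consume
        template: consume
        arguments:
          artifacts:
          - name: result          # wire one step's output to the next step's input
            from: "{{steps.generate.outputs.artifacts.result}}"
  - name: generate
    container:
      image: alpine:3.7
      command: [sh, -c, "echo hello > /tmp/result.txt"]
    outputs:
      artifacts:
      - name: result              # uploaded to the configured artifact store
        path: /tmp/result.txt
  - name: consume
    inputs:
      artifacts:
      - name: result              # downloaded and placed at this path
        path: /tmp/input.txt
    container:
      image: alpine:3.7
      command: [cat, /tmp/input.txt]
```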
I think Jacob mentioned something in the chat as well. I was wondering how — not the whole workflow engine so much as the task definition — overlaps or doesn't, competes or doesn't, with Tekton. Sorry, your last part was garbled a little; could you repeat that? I was wondering how the task definitions overlap or don't, compete or don't, with the Tekton project.

Okay, yes. Argo Workflows in particular has commonalities with Tekton — obviously, a workflow engine is a workflow engine at the end of the day. Where I find that a lot of these workflow engines differ is that they get defined by the original set of users and applications, the community that they attract, because future features and so on get driven by it. So when we started Argo Workflows, for example, there were other workflow engines that existed. Most of them were not Kubernetes-native, because Kubernetes was just starting at that point. Also, a lot of these workflow engines — Jenkins, for example — were already targeted at a specific use case, like CI, and that's why we didn't want to compete in the CI space. When Argo started, ML was taking off, so a lot of people doing ML use it. Similarly, as Tekton starts up and grows, I think they'll have to decide what the main community is that they want to serve. Usually, once workflow engines become established, it's difficult to displace them from a community that has already accepted them unless something unusual happens. So I feel like there will always be newer workflow engines, and they'll just be targeted towards newer application areas as they emerge.

I guess — my understanding of the Tekton project is that they're actually actively trying not to do that. They're basically trying to be a low-level substrate that people can build these things on top of, with an aim towards interoperability of some of these definitions. And I was wondering if there had been any conversations yet between the Argo and Tekton folks, but it doesn't sound like it. Yeah, I think that's another approach you can take: build a common denominator, and then perhaps other projects may use it. For example, Kubeflow uses Argo. So these tools get adopted into larger frameworks serving a more specific user community.

If I can go ahead — hi, one thing: one of our users has actually implemented an Argo CD-to-Tekton integration, where, as you point out, Tekton is trying to provide the substrate for doing CI/CD, and they integrate Argo CD into the Tekton framework configuration. So they're not mutually exclusive, though there is some overlap.

I can see there's another question raised in the chat. It's asking that workflows seem very generic — should it be more of a delivery-focused workflow engine? So I think the question is: what is the relationship of Argo Workflows to application delivery? Because I think the question is pointing out that workflows actually have a larger scope than application delivery. Yes — I mean, you could use workflows for a lot of different things. There's definitely, I think, an interaction between workflows and application delivery. And it also depends on how you view the term "application." If you view it in a narrow scope — a set of deployments, pods, and so on for running long-running services — then workflows are not strictly a part of that.
But if you view application delivery, or an application, as building a set of services that accomplishes a particular function, then most complex applications would probably use combinations of async back-end processing — which workflows are very well suited for — as well as long-running services and various forms of event management. So in that broader context, I think all of these really are part of application delivery.

Yeah, my question was also how you would relate what you're doing to the CNCF project Brigade. I obviously see the difference in the approach there, but it would be good to hear your opinion — not just on the difference between those; I think we all want to encourage projects to collaborate, so maybe spinning it the other way around: how would you see being part of the CNCF, and what would be your interaction points with the other workflow projects as well? The other workflow projects? Yes.

Well, I think that we should all work on maybe creating a more abstract spec — along the lines of the approach that SIG Application Delivery is already taking in terms of definitions. Kubernetes obviously didn't have a native concept of an application when it started, and what people want to run at the end of the day are applications, not individual pods and that kind of thing. So you're defining what the basic abstractions and terms are: what does it mean, what are the boundaries between these components? I think something similar could be done for workflows as well. We're actually interested in this — we're creating a draft right now of some basic terms and application areas, modeling it on the definitions that SIG App Delivery has already created for applications. And maybe we might also discuss in the future whether we should take a broader view on applications: is it just running pods and so on, or is it something a little bit broader?

So, Ed, it's good that you brought that up. It's almost like we need this SIG to actually help us define what applications look like. So SIG App Delivery is definitely right on time. Right, yeah. It's a great topic. I've really struggled to try and define an application, to think about what it might even be — it's such a slippery concept in Kubernetes.

Yeah, and we'd really like to actively participate in this discussion, by the way; I'm sure lots of other folks would as well. Right now we're seeing massive adoption of Kubernetes at Intuit, and for the past year, to be frank, we've had our heads down supporting Intuit's needs. But as the usage grows, our group has been growing, and we now have much more bandwidth to interact with the community as well. So we're really happy to see SIG App Delivery being formed, because I think there's a really big need here to define these standards and to work together — to at least create a framework that everyone can relate to, even if there are separate projects and products.

Yeah, if I could add just one thing: from the vantage point of being in a large enterprise like Intuit — the company has been around for longer than 35 years and has accumulated varied forms of applications and different technologies — the way an application is defined is definitely much broader than just a Deployment or a Service in Kubernetes.
So we actually use all three of these technologies to drive the transformation of Intuit toward leveraging Kubernetes natively, and cloud-native technologies in particular — the CNCF projects — to deliver value for running the myriad applications that we have at Intuit. Yeah, and particularly as an end user, we'd also like to help bring in other end-user communities, get their feedback and interest, and work with them to decide: what are the actual needs of the end users who are using Kubernetes, in terms of application delivery? Is it only long-running services and pods, or are they also looking at eventing and workflows and so on? And do they want some way to integrate all of this so that they can deploy a complete, complex application?

Yeah, I think the event-based application piece is something I find interesting. And the machine learning part is obviously the one I found the most intriguing, honestly — not that the other parts are not interesting, but this is something that we currently don't cover at all: what a machine learning application would be, what a pipeline for it could look like, what the workflow for these types of applications looks like. I think this adds a very interesting aspect to the entire discussion here.

So there was a question about... go ahead, Sisha. Okay, there was a question about the underlying backbone that Argo is using for this messaging pipeline. I think it probably got lost because there were multiple questions. So, can you help us understand whether Argo is using more of a native Kubernetes storage and communication mechanism, or some other external element for that?

Yes, actually, you can use both. We discussed storage a little bit: you could use persistent volume claims, or even an NFS file system like EFS — all of that Kubernetes already supports, and since Argo is built on top of Kubernetes, you can basically use all of it. That's one of the advantages, I think, of basing it on Kubernetes and making it native, rather than creating a wrapper around Kubernetes and then having to reimplement all of these features that you want to integrate. But more specifically to your question about messaging: we're agnostic about what the underlying messaging backbone should be. That could be just Kubernetes events, at a very simple level, or it could be something like a NATS service you spin up for messaging. What Argo Events does is define things like gateways and sensors, which allow you to create, for example, a gateway for input events into the system, and then define more complex rules in terms of what sets of events can trigger messages or workflows or other types of activity in the system. So that's the basic infrastructure that Events provides; but in terms of the actual messaging backbone, you could use NATS, Kafka, or just Kubernetes events — we're agnostic to that. Thank you. Thank you.

So, Ed or Serati, I was wondering if you might talk a little bit about the relationship of Argo CD to Helm, and the different representations of, quote, an application — a Helm application versus what Argo CD views as an application. Right. So, yeah — Serati can answer that. So we made the conscious choice to be agnostic to the configuration management tool that you define the application in.
So in that sense, the Kustomize support, or Ksonnet support, or Helm support — those are things that you just use for defining your application, whichever system actually fits your needs. What we are also doing in Argo CD — in the next version, on the roadmap — is first-class support for Helm. What that literally means is being able to have a Helm repository that you point Argo CD to, much as you would point Argo CD to a Git repository that has your definitions in it. We will actually understand the Helm chart from the repository and apply the same mechanisms that we would apply as if it were defined in, let's say, Kustomize. So in that sense, we're making the Helm repository a first-class citizen, with the same kind of support you would have from a Git repository perspective.

Right. In general, Helm has multiple components or facets. One is as a way of packaging applications; there is a bit of a deployment component as well. But the Argo tools are agnostic in terms of what configuration management tool you use for the application and how you package the application. We want to support all of the popular ways of packaging an application, whether it's a Helm chart or a Ksonnet app or some other type of thing. And then configuration management is obviously another entire broad area, with a very complicated set of problems, so at the moment we're agnostic to those aspects as well. Yeah — and in fact Jesse, who is the principal engineer on this project, has actually written a fantastic blog post on the various configuration management tools and why we made the conscious choice to be agnostic to the configuration management tool.
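As a sketch of what this looks like from the user's side — the repo URL, chart path, and names below are hypothetical, and the exact fields vary by Argo CD version — an Application points at a Git repository, optionally rendering a Helm chart found there; the roadmap item discussed here would additionally let the source be a Helm chart repository:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app              # hypothetical
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy.git  # Git repo holding the chart
    path: charts/example
    targetRevision: HEAD
    helm:
      valueFiles:
      - values-prod.yaml         # rendered with Helm, then applied declaratively
  destination:
    server: https://kubernetes.default.svc
    namespace: example
  syncPolicy:
    automated: {}                # keep cluster state converged to Git state
```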
Hi guys, actually, I think we have to stop the first topic here, because we still have two projects to present. So let's go to the next project — I'm really sorry that we discussed for so long, and I will try to follow up with you offline. Okay. So I think the next project is the Operator Framework. Thank you. Yes. Thank you. Thank you.

All right. So this is the CNCF proposal for the Operator Framework. My name is Daniel Messer; I work at Red Hat in the Operator Framework community space. What we are looking at here is something that emerged from us noticing, about two years ago already, that we are now approaching the third wave of Kubernetes applications — applications on top of Kubernetes which we call advanced distributed systems. We started with systems that were fairly stateless and fairly easily deployed with the onboard functionality in Kubernetes. We moved on to more stateful applications with things like StatefulSets, eventually arriving at the need to run applications like actual distributed databases, distributed tracing frameworks, and message queues on Kubernetes for an extended period of time. The key aspect here is that these systems actually require active care beyond the onboard functionality of the built-in Kubernetes controllers, and the key themes are day-two automation and lifecycle management. What the Operator Framework is about is giving developers the tools to build, test, and publish Kubernetes operators in an iterative development cycle, and also helping owners and providers of clusters with a simple place to manage available operators on cluster, in a single location.

On a high-level overview, the Operator Framework is divided into three parts. First of all, it's an upstream open source project aiming at the entire Kubernetes community — it's compatible with any Kubernetes and developed in the open. The Operator SDK targets the developers of operators. This is the build stage, where the SDK helps you skip a lot of the regular boilerplate code that you otherwise need to have in place in order to write an operator — interacting with the API server, registering yourself for watches and reconciliation cycles — and lets you just focus on the application code: the code that is specific to managing your application, which is the unique property of the operator, obviously. The Lifecycle Manager targets Kubernetes admins. These are the people who are responsible for the stability of the cluster and for offering additional services on top of it to the users of those clusters; this is the component that deploys and runs operators. And then there's another upstream effort to provide developers, and people interested in operators, a place to search for and publish their creations: this is operatorhub.io.

The history of operators goes back all the way to 2016, when the concept was initially introduced by CoreOS, starting with a few operators like the ones for etcd and Prometheus. This quickly took off in the community, so we had early adoption by very popular open source projects in the storage space and in the database space — primarily workloads that require active care and are otherwise fairly difficult to run on Kubernetes, needing external systems or scripting to try to impose state and configuration on them. What the operator does, it does on cluster, and it does it very specifically to the application.

In 2018, we officially launched the Operator Framework as an upstream open source project — with the SDK, the Lifecycle Manager, and the Metering operator — under the Apache 2.0 license. And we've seen that this really unlocks the potential for running stateful workloads in a very safe and predictable fashion on Kubernetes. So we had a ton of onboarding from very popular workloads on Kubernetes — MongoDB, Redis, MySQL, Postgres. All of those leverage the operator pattern in order to define how such a workload gets deployed on cluster and how it is managed, especially when we focus on day-two operations. If you want to do things like backup of your database, restore, or more complex reconfiguration as part of the orchestration, the operator emerged as the go-to pattern. Is there a question? Okay, no question.

At the same time, we also started to form a discussion forum under the OpenShift Commons umbrella — this is an open source community around OpenShift Origin, our upstream project — and we formed a SIG there that has monthly community meetings with good participation. We also have a mailing list in place that gets a lot of community contribution from the larger Kubernetes space. In 2019, I think we reached an inflection point, where at a much broader level throughout the Kubernetes community there were discussions around how add-ons to the Kubernetes control plane are going to be managed, and one part of the Operator Framework, the Lifecycle Manager, is one of the technologies under consideration in that SIG Cluster Lifecycle group. We also launched — could you mute, please? I think you're unmuted. Thank you.
Microsoft, Google, Red Hat, and AWS launched OperatorHub.io, which is part of the Operator Framework in the sense that it is the place where these operators are published and stored, using the packaging format that is compatible with the framework. It's targeting Kubernetes users who are looking for an easy way to discover high-quality, well-maintained, and frequently updated operators in a central location, and it also provides a very straightforward experience for actually deploying them on cluster. Today we have widespread adoption of the operator pattern, as well as the tooling around it: the Operator SDK is a project that is leveraged a lot by ISVs and authors of stateful applications in order to create operators. There's a whole ISV side to this, where commercial vendors are using the framework to package their operators and deliver them to their customers in order to provide a very good service. And the session attendance at KubeCon in Seattle and Barcelona really reflects that broad level of interest.

Now, a little bit more detail on some of the components. The SDK is a tool that developers use on their workstations to scaffold code and provide a structure for how to write operators. It heavily leverages controller-runtime in order to let the author focus on the reconciliation logic of the operator. That's primarily for Go-based operators, but the SDK also addresses other audiences in the Kubernetes space that are more on the operations side of things and might not be as proficient in Go — for instance, declaratively defining an operator with Ansible playbooks. There's also a no-code approach using Helm charts: the SDK supports an almost zero-intervention conversion from a Helm chart to an operator, which is immediately ready to run on cluster. It gives your operator just enough permissions to do exactly what the chart requires, and it also gives your chart a proper interface in the form of a custom resource definition. This is accompanied by a testing framework. We believe that operators are important and impactful workloads on the cluster and need to be tested very, very well, because users trust these services to manage their production applications eventually. So the SDK also comes with verification and scoring tools that help the developer judge the maturity of the operator. It also integrates with the rest of the framework by generating the bundle packaging format, which is used by the Lifecycle Manager.

The Lifecycle Manager is an on-cluster component — itself implemented as two operators — providing a central place to discover, deploy, and use operators at scale on the cluster. And I think "at scale" is the keyword here: in a world where you only have a single operator in a single version on a single cluster, it might all be fine being manually deployed, but as you buy into the whole idea of running operator-backed services at scale on clusters, there should be a central place that controls how these things get installed and deployed, and how users can discover them on cluster. This is exactly what the Lifecycle Manager does. It defines a packaging format for operators, and catalogs of those can be hosted in standard container image registries. Against those, you can state the intent that you want to install an operator and keep it updated.
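As a sketch of stating that intent: OLM models it as a Subscription resource. The package, catalog, and channel names below follow the public OperatorHub.io examples but should be treated as illustrative:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-etcd                    # illustrative
  namespace: operators
spec:
  name: etcd                       # package name within the catalog
  source: operatorhubio-catalog    # CatalogSource holding the operator bundles
  sourceNamespace: olm
  channel: singlenamespace-alpha   # update channel; OLM keeps the operator current from it
```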
So the Lifecycle Manager introduces a concept where the author has direct control over the update graph of the operator. Again, operators, being long-running, important, impactful workloads on the cluster, should be updated fairly frequently, and we provide the means to do that with the Lifecycle Manager. Operators are also always workloads that have a global impact on the cluster, due to the global nature of CRDs, so the Lifecycle Manager also provides a lot of guardrails in order to do this safely on the cluster. There are checks against ownership of existing CRDs and against existing operators, and there are measures in place that prevent privilege escalation from the usually highly privileged service accounts that operators are using, since these are all on-cluster workloads.

There's also, as part of the catalog concept, the ability to implement separation of concerns with operators. When an application stack consists of multiple stateful services, there's a concept of dependencies between operators, which allows one operator to specify a dependency on another operator. Once you install such an operator, OLM resolves the dependency at install time and makes sure that operator is available and running as part of the larger stack. And you can use this well-defined update model to control the updates to the managed application as well. What's displayed here is one variant of this — it's not the only model that we support — but as you can see, if the author of an application is also the developer of the operator managing that application, you can tie the lifecycle of the operator very closely to the lifecycle of the managed application.

The third part of the Operator Framework is the webpage at the address I mentioned, OperatorHub.io. Red Hat, together with AWS, Microsoft, and Google, launched this in February this year. We now have over 75 operators published there, with a lot of versions getting frequent updates. This is a community-curated place where people can just publish their operators. It comes with automated testing and a PR-based review process, and it's agnostic to the type of operator and the way it got created — so this is not dependent on an operator being created with the SDK; SDK-created operators obviously have the advantage that a lot of the packaging is already created for you. It also gives you straightforward instructions on how to install an operator, and it provides a public catalog of operators that people can use in their clusters.

A couple of statistics from the community: we have two and a half thousand GitHub stars; we see a lot of clones, especially on the SDK side, very predictably each release cycle; over 160 individual contributors, with 38 organizations contributing — this is also including the community operators that we've just seen. And we also have a very active mailing list in the Operator Framework special interest group, with over 180 subscribers. All of the feedback has been very good. You see here a couple of mentions of the framework, the SDK, and so on, on Twitter — from Brandon Philips, for instance — and also from some of the companies that provide application packaging. Banzai Cloud is one such company, which for instance provides an enterprise-supported version of Vault.
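Going back to the update graph and dependency model described a moment ago, here is a sketch of the relevant parts of a ClusterServiceVersion, the packaging manifest OLM consumes; all names are hypothetical and non-essential fields are omitted:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v1.1.0       # hypothetical
spec:
  displayName: Example Operator
  version: 1.1.0
  replaces: example-operator.v1.0.0   # edge in the author-controlled update graph
  customresourcedefinitions:
    owned:                            # the API this operator provides
    - name: examples.app.example.com
      kind: Example
      version: v1alpha1
      displayName: Example
      description: An example application managed by this operator.
    required:                         # dependency OLM resolves at install time
    - name: etcdclusters.etcd.database.coreos.com
      kind: EtcdCluster
      version: v1beta2
      displayName: etcd Cluster
      description: Storage backend provided by the etcd operator.
  install:
    strategy: deployment
    spec:
      deployments: []                 # the operator's Deployment, omitted in this sketch
```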
And I think, in general, the community has reached a point where it has become clear that if you want to run that type of stateful application of reasonable complexity, you should use the operator pattern in order to automate it sufficiently — with the automation actually running on cluster, not outside trying to impose itself on the cluster. This is a quote from one of the Google software engineers, from the last KubeCon in Barcelona, actually. Behind operators there's a huge momentum in the software vendor space, obviously, because it's a very nice way to provide a very good user experience on cluster, very well integrated into Kubernetes itself — which is also why we call these Kubernetes-native applications. A couple of public examples here are Dynatrace, Portworx, Cystec, Redis, and Jaeger — all these companies are providing operators as part of the way they ship their software. But we also have a lot of open source contributions from companies that don't primarily ship Kubernetes products or software products, like Volkswagen or Zalando or Colts.

In terms of alignment with the community, I think the SIG App Delivery charter lines up very well with the capabilities that the framework brings to the table. Application definition guidance and best practices on application design are something the SDK makes very easy to tap into for operator developers — it gives you the basic composition of an operator's structure and metadata right out of the box. There's a bundling concept in OLM that takes care of the packaging and dependency management of operators, on cluster as well as off cluster. And, as part of the operator paradigm, it's very well integrated into Kubernetes itself, based on CRDs: every operator ships CRDs, the Lifecycle Manager is itself built from operators, installation is driven by CRDs, and the lifecycle model imposed by the Operator Lifecycle Manager, with its defined update graph, taps very nicely into the release management part of this group's charter.

We also align with other CNCF efforts. On the cluster lifecycle side I mentioned in the beginning, there's an add-ons subproject that discusses how to manage cluster add-ons like kube-proxy or DNS and how those come to life, and OLM is discussed as a potential alternative there. We work very closely with Kubebuilder these days to join efforts on the Go-based operator side — Kubebuilder is also a project that focuses on helping Go developers write operators — and we look to leverage some of their work as well as contribute jointly to the controller-runtime space to move this forward. And a lot of CNCF projects that are already incubating or graduated use operators as the primary way of being distributed and run on clusters: Rook is one example, as well as Envoy and Vitess.

That's it from my side. Any questions? Daniel, there are a couple in the chat from Jane. Sure. First question: what about stateful versus stateless services? In general, the built-in functionality in Kubernetes covers the stateless part fairly well; however, there are use cases where an operator also benefits stateless workloads. The primary advantage really comes in with stateful workloads, because that's where it's usually not as straightforward to model these scenarios with the built-in Kubernetes controllers, especially when it comes to being able to execute application-specific logic. There's also a question around the service broker project.
So service brokers in general are something where I personally don't see a lot of momentum anymore, from the bigger players behind them or from the overall community, around the service broker concept. It's largely been displaced by the operator pattern, I would say. There are some commonalities between OLM and service brokers, but due to the nature of operators and some of the specific aspects of running them on cluster, I think it's only at a very high level. Mostly I was just referring to the Open Service Broker concept of a plan, which is like a bundle of provided services, I guess, and it seemed similar to the bundle-slash-package thing that the one slide about OLM was talking about; that's the reason I brought it up. Yeah, I think that makes sense. There's also an interesting concept in the broker space around binding — how would you actually consume a service that you just got from a broker — and you have similar discussions in the community around how you consume and inject connectivity information from an operator-managed service into your application, because that's eventually what the developer usually wants to do. So we are looking at adopting an application binding concept in the operator space as well; that's basically on the roadmap, and something I'm also supportive of.

Sorry, guys, we may want to stop the meeting because we've just run out of time. If there are more questions about the Operator Framework or the Operator SDK — especially the questions about how to choose between different operator implementations — I think we can follow up; at least in SIG Application Delivery we are very happy to discuss all of these projects more. Okay? Sure. Cool. We'll have to postpone the third proposal to the next meeting, and it can be the first project to present next time. So thank you guys for this meeting, and hopefully we can see you again next week. Okay, bye. Thank you. Thank you very much. Bye. Thanks. Thank you.