Welcome, everyone who's joining us today, to today's CNCF webinar on OAM, the Open Application Model: a team-centric app model for developers and operators. I'm Karen, Community Program Manager at Microsoft and Cloud Native Ambassador, and I'll be moderating today's webinar. We'd like to welcome our presenter today, Mackenzie Olson, Program Manager at Microsoft. Just a few housekeeping items before we start. During the webinar, attendees are muted, but there is a Q&A box at the bottom of your screen. Please feel free to drop your questions in there and we'll get through as many as we can. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. In particular, please be respectful of all your fellow participants. And with that, I will hand it over to Mackenzie to kick off today's presentation.

Great, thank you, Karen. Just as a heads up, we have a couple of moderators from the Open Application Model in the chat, so if you ask any questions during the presentation, they'll be fielding those, and if there are any I want to bring up, I'll speak to those as well.

All right. As we're all well aware, Kubernetes has provided a really useful set of APIs to help us orchestrate our container primitives, but there are a lot of resources to keep track of. You have your basic objects, your pods and your services, and you can separate them with namespaces. On top of that, you have different layers of abstraction to choose from depending on your requirements: Deployments, StatefulSets, ReplicaSets, and so on. With all of these different resources, it becomes difficult to see what the original application topology and runtime requirements were. For us, that raised the question: how can we stitch all of these discrete resources into an easily operable application?

At this point, you might be asking yourself: hasn't Helm solved this problem for us? Helm is a fantastic package manager specifically for Kubernetes resources. It's great for bundling your files together, but it doesn't provide any guidance on how you'd want to model the application itself. It's still entirely up to you how you fit all of those individual Lego pieces together into your application.

So leading up to the Open Application Model, we talked to a lot of different folks and asked them questions about their DevOps practices, specifically in relation to their orchestrators, and we noticed a couple of interesting themes among larger companies and enterprises. We would see really large ratios of application developers to infrastructure operators or application operators, which makes sense, because the way clusters work today, you can host many, many applications inside the same cluster. A lot of companies even had different teams working within the same pods, each producing their own containerized application and having it live there. That created a lot of difficulty in communication between the development teams and the operations teams. It might work well while the company was smaller, but as it continued to grow and scale, the communication problem became much harder. We saw a couple of themes in how companies tried to solve this problem.
Lots of homemade PaaS and FaaS layers were built on top of the orchestrator to try to abstract it away from the developers, while still leaving enough room for the developers to specify the runtime requirements for their workloads. That would work fairly well, but there would be a lot of leaks of orchestration-level concepts up to the developers. Another solution we saw companies coming up with was really complex CI/CD pipelines, meant to stop the developers' concerns at containerization. But that was not enough for a lot of development teams to specify the dependencies for their services, and they were still having to communicate with the operations teams. Not to mention that if something went wrong with these really complex CI/CD pipelines, it became very costly to fix.

So with all of these concepts in mind, we came up with the Open Application Model. It's an open source specification to help us define cloud native applications. We're really trying to solve how distributed applications are composed and then handed off to those who are responsible for operating them. We're currently in a pre-release v1alpha1 stage, but we're working on a second release that should be coming out soon.

There are three main guiding principles. First, we really want this to be an application-focused model, bringing the attention back to the developers and their applications and removing the focus from the container infrastructure. Second, we wanted to provide a model that works well for separation of concerns. I'll be diving into the personas in a little bit, but essentially we wanted this model to work well for larger companies that may have a clear separation between development and operations, while not being so complex that it would overwhelm smaller companies: if you have one person doing everything from app development to cluster ops, they could still use this model, gain benefits from it, and set themselves up for success if the company were to scale and grow. Lastly, there's a growing ask for applications to be brought to different environments, whether that's cloud and edge, or multi-cloud and hybrid deployments. So we wanted to write a model that would help folks have a very consistent application that they could plug into all of these different environments.

So who are we catering towards? We have three main personas. They don't necessarily need to be separate people; as I mentioned earlier, it could be one person doing it all, or three separate roles. First and foremost is the application developer, whose main job is to focus on delivering code in a platform-neutral setting. Then there's the application operator, who is essentially adding runtime characteristics to the code that the application developer is producing. This could be things like autoscaling, applying traffic management, identity, and so on. Lastly, we have our infrastructure operators. Those are the folks who are configuring the environments to satisfy any unique operating requirements of the application.

So OAM has four main constructs that you need to get familiar with before you can get up and running. We'll start with components. Their purpose is to encapsulate application code; here we're defining runtime requirements, like what the workload type should be.
We've also provided a place for developers to specify parameters that might be overridden by application operators. You'll see resource requirements and health and liveness probes, fairly similar to what you would see in a pod specification.

Once you have your components, we want to add additional runtime functionality to them, and we do that with something called traits. These are discretionary runtime overlays that help us apply operational functionality. Say we have a component that's a web server and we want to allow traffic to flow into it; we might add something like an ingress trait to that component. That way you get the traffic coming in, and it's fairly pluggable, so you could use a very different ingress if that's what you needed to do.

Now that we have our components, you might be collecting a lot of these, and you may need some way to group them together. That's where we came up with the idea of application scopes. They're discretionary application boundaries where you can group components based on their behavior. An example might be a set of components that you want to have live in the same subnet; we have a specific network application scope that you could place them all in. Another behavior you might want to group components by could be health. If you have a set of components that are very dependent on each other and you want to make sure that they are all running and healthy, you would place them within the same health scope.

And lastly, we have the application configuration, which defines the application deployment. It's important to understand the distinction between application scopes and application configurations: scopes are all about grouping components together based on a common characteristic, while the application configuration is how we take the three concepts we've learned about, components, traits, and scopes, and instantiate them. It's totally reusable. When you create an application configuration, your components go live, the traits are applied, and they're living inside the scopes, and you can use this application configuration essentially as a template to stamp out as many times as you want, or make modifications as need be.

Putting together these four main constructs, components, traits, scopes, and application configurations, with the personas we described earlier: the application developers author the component schematics. A schematic is essentially a container pod spec along with a workload type, which I'll dive into in a little bit, plus any parameters that they might need to have overridden by the application operator. Once those are written, the application operator will pick the components they want to deploy, add the necessary traits, such as ingress or perhaps autoscaling, apply the necessary scopes, such as a network scope, and then deploy it onto the environment that has been configured by the infrastructure operator. These arrows don't necessarily imply temporal ordering; most likely you have the environment already configured before you execute those two steps.

Here's a simple example of what an application might look like. We have two components: a front-end component with a singleton server workload type, and a database component. From here I've added an ingress trait, saying I want traffic to flow into that front-end component.
And then there's a manual scaler trait applied to the second component. A manual scaler trait is essentially a way of declaring how many replicas you want a component to run at a given time. Both of these components are living inside the same network scope.

So that's the general architecture. What does the YAML look like that supports these constructs? Here we have an example of a component schematic for both of the pieces I described earlier. For the web server, the web UI, we specified the workload type, which is singleton server. This means we want at most one instance of it running. We can see there is a database connection string that the developer doesn't have, but they are going to parameterize it so it can be overridden later by the application operator. They've also provided the necessary container information: the image, required resources, ports, and that environment variable that's being overridden as a parameter. For the back-end piece, we have a containerized version of MongoDB. There are no parameters included for this one; they're just specifying the resources and necessary ports. You can see this is a slightly different workload type: this is a server. The main difference between a singleton server and a server is that we'd expect multiple instances of a server to be running, which is why, as you saw earlier, we've added a manual scaler trait. The application developer doesn't necessarily care how many replicas of this specific component will be running; they just want to surface up to the application operator: hey, I would like more than one instance of this to be running.

Diving a little more into workload types: this is a concept that's specific to components. Right now we have six supported workload types in OAM: server, singleton server, worker, singleton worker, task, and singleton task. Each one conveys specific information about how that component should be run. For servers, we expect there to be a service endpoint, and we expect the workload to be daemonized; the main difference between those first two rows, again, is the number of replicas running. The same pattern holds true for workers as well as tasks. The main difference going down the list is that there are no endpoints for the workers or the tasks, and workers will essentially restart, whereas tasks will not restart once they have completed whatever they're trying to execute. All of these are containerized as well, so they will all be represented with that set of parameters asking for an image and resources. That's essentially all the information that needs to be provided by the application developer about what the component is and how it will run.

Now, as for traits and scopes. For components, we have to fill out that component schematic where the developer surfaces up the necessary information. For core traits and scopes, we don't fill out that same kind of schematic; instead, these are defined in the OAM specification itself, because they're core to the Open Application Model. So in order to know what parameters you need to fill out, you visit the OAM specification and see what traits and scopes are supported. We have a manual scaler trait, and it's fairly simple: you only need one parameter, which is replicaCount. You can see it doesn't apply to all of the various workload types, those being specific kinds of components, which makes sense.
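For reference, here's a minimal sketch of what those two component schematics might look like in v1alpha1 YAML. This is a reconstruction rather than the exact slide content: names like web-ui, db-secret, and the images are illustrative, and the field layout follows my reading of the v1alpha1 spec and the published Rudr samples.

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: web-ui                        # illustrative name
spec:
  workloadType: core.oam.dev/v1alpha1.SingletonServer
  parameters:
    - name: db-secret                 # the connection string the developer doesn't have;
      type: string                    # surfaced so the application operator can supply it
      required: true
  containers:
    - name: web-ui
      image: example/web-ui:1.0.0     # placeholder image
      resources:
        cpu:
          required: 0.5
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      env:
        - name: DB_SECRET
          fromParam: db-secret        # wired to the parameter above
---
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: mongodb
spec:
  workloadType: core.oam.dev/v1alpha1.Server   # multiple replicas expected
  containers:
    - name: mongodb
      image: mongo:4.2                # placeholder image and tag
      resources:
        cpu:
          required: 0.5
      ports:
        - name: mongo
          containerPort: 27017
          protocol: TCP
```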
Coming back to the manual scaler: you wouldn't want to add a manual scaler trait to a singleton server, because by definition you should only have one instance of a singleton server running at any time. So we're setting up some guardrails to set the relationship between our application developer and application operator up for success. The same concept holds true for application scopes. Here's an example of our network scope. You can see the required information being a network ID, a subnet ID, and then an internet gateway type. We list out the complete set of parameters; some are required, some are not.

So going back to our sample application, we had our web front end and our database back end, modeled here in the same file since they're YAML, with the application developer having provided this information. The application operator picks out the desired components: we pull out that web UI and that MongoDB. The parameters have already been surfaced there, and we're just applying the necessary traits and scopes. For the MongoDB, we give it an instance name. We've also given it a manual scaler trait and set the replica count to three, and we want it living inside the network scope, which we have instantiated above. For the web UI, we've filled out that parameter: our DB secret, which we had left just as a parameter, is where we're actually providing the string. We've also added an ingress trait to allow traffic to flow into that front end, and made sure it was applied in the same network scope. Another interesting thing about application configurations is that you only need either an application scope or a component to deploy a configuration. So if I wanted to set up all my scopes ahead of time, because I have a good idea of what my grouping constructs will look like, I could just instantiate a network scope and then add the components in later by referencing its name.

So now that we have a better idea of what the main constructs in the model are, how can you use the specification in practice? We have a couple of different existing implementations. There's Rudr, which we'll talk about in a little more depth soon, and then Alibaba's Enterprise Distributed Application Service (EDAS) and Alibaba's Resource Orchestration Service (ROS). Rudr is our Kubernetes reference implementation. It's an open source project that works on any Kubernetes cluster, whether that's a managed Kubernetes such as EKS or AKS, or a full-fledged Kubernetes cluster running on VMs that you manage yourself. Rudr supports all core OAM constructs, so those core workload types I was talking about earlier should all work within Rudr. The high-level flow is the same: the application developer specifies their components, and the application operator pulls them together inside an application configuration by applying the necessary traits and scopes. Bringing back Helm, you can pull all of these YAML files together in a Helm chart and deploy them. The way you interface with Rudr is through kubectl, so you can use the same tools that you're familiar with today. If you are an infrastructure or application operator who's used to working with Kubernetes, kubectl and all of the Kubernetes-friendly tools will still be at your disposal.
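And for reference again, here's a sketch of what that application configuration might look like, reconstructed from the walkthrough above rather than quoted from the slide. The trait blocks follow Rudr's published examples; the scope wiring (spec.scopes and applicationScopes) is my approximation of the v1alpha1 spec.

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: my-app                        # illustrative name
spec:
  scopes:
    - name: my-network                # instantiated once, referenced by both components
      type: core.oam.dev/v1alpha1.Network
      properties:
        network-id: my-vpc            # placeholder values
        subnet-id: my-subnet
  components:
    - componentName: mongodb
      instanceName: my-app-db
      applicationScopes:
        - my-network
      traits:
        - name: manual-scaler
          properties:
            replicaCount: 3           # the operator's choice, not the developer's
    - componentName: web-ui
      instanceName: my-app-frontend
      parameterValues:
        - name: db-secret
          value: "mongodb://my-app-db:27017"   # operator supplies the real string
      applicationScopes:
        - my-network
      traits:
        - name: ingress               # extension trait implemented in Rudr
          properties:
            hostname: example.com
            path: /
            servicePort: 8080
```

From there, deployment is a plain kubectl apply -f against these files, like any other Kubernetes resource.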
And then you have this great layer of abstraction that you can surface up to your application developers. So how does the implementation architecture work? We have the application model itself, which is the interface where the application developers and operators specify the necessary components, traits, and scopes. From there, the implementation translates: if you have an autoscale trait, the implementation decides how it's going to be executed. And then the infrastructure essentially determines what platform features are available and how it will be executed on the hardware, the orchestrator, and so on. Here, the orchestrator would be Kubernetes, and the platform features are all the features supported in Kubernetes today.

I have a question, or someone submitted a question: "We use Weaveworks Flux and have written a lot of our own operators and CRDs. Is there a way to adopt OAM as a standard way of defining operational and security semantics for our components and workloads without adopting something like Rudr?" Without adopting something like Rudr, at this point you would have to write your own implementation; Rudr is essentially how we expect folks to interface with the Open Application Model at this time. Another way you could go about this is using Rudr for the core traits, but if you have existing operators and CRDs, we're working on, and this is actually a good segue, making the model more extensible to existing resources that you have today. So instead of working only within that set of core components, traits, and scopes, we'd hope that you could model those existing CRDs somehow with extension points. Which lends itself really nicely to the next talking point: OAM and extensibility.

You heard me talking a lot about core constructs earlier on; these must be supported by OAM-compliant implementations. And then there are extended workload types, or extension traits and scopes; these are optionally supported by OAM-compliant implementations. This could be a case where you have your existing CRDs you want to plug in: how can we set up the application model to consume these in a way that's pluggable, so you can still model them with the Open Application Model using the same set of YAML with components, traits, and scopes?

To recap our set of core constructs: as I mentioned earlier, you have those six component workload types; you have two scope types at this point, our network scope and our health scope; and we have one existing core trait type, which is the manual scaler. A good example of an extended trait might be an autoscaler. At first glance, you might think this seems fairly basic and would belong in the set of core trait types. But trying to model all the different ways people might want to autoscale is very difficult, and it's hard to capture it all within one long list of parameters. Thus we've left that open as an extension trait, to let you define how you want to autoscale specifically, and choose which parameters you want to surface up to your application operator.
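Purely as an illustration of that idea, a component entry in an application configuration using such an extension autoscaler might look something like this. The trait name and every parameter below are hypothetical, since the core spec defines none of them:

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: my-scaled-app            # hypothetical example
spec:
  components:
    - componentName: web-ui
      instanceName: scaled-frontend
      traits:
        - name: autoscaler       # extension trait: name and parameters are
          properties:            # implementation-defined, not part of the core spec
            minReplicas: 2
            maxReplicas: 10
            cpuUtilization: 70   # one implementation might scale on CPU;
                                 # another on queue depth or requests per second
```

The shape stays the same, components, traits, and properties; only the trait itself is implementation-specific.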
So here's an example of a mix of core constructs and extension constructs. We have our component, which has a workload type of server, and we're having traffic flow in via our ingress trait. I described an ingress trait a little earlier; it's actually not a core trait, but an extension trait that's implemented in Rudr. That means if you went to use, let's say, Alibaba's EDAS implementation, they could perhaps have an ingress trait too, but it might require a different set of parameters or have slightly different behavior. Another example of an extended workload type might be a database component. Instead of having a containerized Mongo database, say I want an actual SQL database running in Azure, non-containerized: how can I model this as a component and have it be part of my overall application? These are two examples of extension points. These resources aren't necessarily implemented by all the different OAM implementations, but they're still useful, and I want my developers or my application operators to be describing them in the same manner that they're describing all the other aspects of their application, with those component schematics, putting them together in an application configuration.

So, up and coming: as I mentioned earlier, OAM is not yet well positioned for infrastructure operators who are interested in implementing extended workload types or extension traits and scopes. In our second draft, we're focusing on making OAM more flexible for infrastructure operators by including existing resources in the OAM runtime. We want to make this easier for folks, and find a way to share these extended workload types, so that if someone comes up with a great idea, the community can grab onto it and use it as well if they're interested in that same implementation.

I have more questions for you. "Do you consider OAM as a new and quite opposite direction versus Helm? Don't you think that the path from a Helm description to Kubernetes is more direct than Helm to OAM?" I guess, what does "more direct" mean? I would need a little bit of elaboration there, if any of my moderators feel like they have an answer to that. Okay, we can come back to that. Another question was: "Is there a source-to-image app contract wrapped into the model now, or in the future? Or would this be an extension that would be implemented by others?" So there is no source-to-image construct right now, but we're looking at ways of incorporating technologies like Knative Build, which I guess is now known as Tekton Pipelines, and describing those with OAM so that you could leverage existing solutions that would help you take source to image. But right now it's not a first-class citizen in the Open Application Model.

Cool. Yeah, whoever asked that first question, if you want to flesh that question out a bit, we can come back to it; otherwise you're free to continue. Sounds good.

Lastly, community. If you're interested in getting involved, there are a couple of different ways. We have a Gitter channel, which is linked above here. We have bi-weekly community calls; the next one is coming up on February 25th at 10:30 AM PST, where we'll be talking about these upcoming topics I mentioned earlier. We have a community calendar that you can subscribe to. We will be at KubeCon at the end of March, beginning of April.
The talks are not on there yet, but we have a couple, so stay tuned if you want to hear more from us. And lastly, contributions to our repos: we have the specification repo itself, as well as Rudr, which is our open source Kubernetes implementation. I didn't link it here, but we also have a samples repo. So if you are enjoying Rudr, or have tried it out and have a cool application that you've modeled with OAM, we'd love to have more contributions there so we can have different reference architectures to point to. Those are all great ways to get involved if you're interested. And with that, I'll open it up to any additional questions anyone might have.

Just a reminder, if you have questions, you can drop them in the Q&A box at the bottom of the Zoom window, and we'll see if we can get to a few. Someone asked: "You mentioned some changes coming up in the next release. Can you elaborate?" Yeah, so in the next release, again, we're going to be focusing on making OAM more flexible for infrastructure operators, allowing them to include existing resources. We're trying to make that extended workload type and extension traits and scopes story stronger. If you want more details beyond that, I'd really suggest coming to our community meeting on the 25th; we'll be talking about that in more specifics.

Let's give it one more minute for questions. All right, well, great. Thanks, Mackenzie, for a wonderful presentation. That's all the time we have for questions today. Thanks, everyone, for joining us. The webinar recording and slides will be online later today, and we look forward to seeing you all at a future CNCF webinar. Have a good day.