My name is Daniel Higuero, and I will be the one giving the tutorial. To give you a little bit more information about me: I'm the CTO of a company called Napptive, where we provide a self-service developer platform. We use KubeVela internally as a building block for that platform, and that is how I became interested in the project and how we started learning about OAM and KubeVela. I am also one of the KubeVela maintainers. My background, at the very beginning of my career, is in big data technologies. Then I moved into streaming and machine-learning model-building platforms. Then I moved into edge computing, and after that into the developer platform area.

This is going to be, as I said before, an intro to KubeVela. If you have further questions, or more detailed, specific questions that are outside the topic of this tutorial, feel free to come by the booth. There is a CNCF booth; I will be there later after the talk, and tomorrow also, so feel free to come by and just ask any question.

The agenda for today: first of all, we will talk a little bit about what the Open Application Model is and what KubeVela is, identifying the two different elements. Then we will move into creating a small cluster with kind to install KubeVela and start deploying our own applications. We will go through basic operations with the Vela CLI and VelaUX. We will go a little bit into the workflows that are available in applications. We will give you a brief description of how you can enable multi-cluster deployments and how you can integrate within a GitOps environment. And finally, some extra content. After that, I will be happy to answer any questions that come up.

For this tutorial, this is the link with the tutorial content. We will go through the elements of this repository.
This is where the step-by-step instructions for how to get the clusters are located. For this tutorial we'll be switching between the code editor, the terminals, and things like that, so just bear with me and we will continue through it.

Okay, let's start by explaining a little bit what the Open Application Model is, because it is quite related to KubeVela, but sometimes the two get easily confused. The Open Application Model is basically the specification of how you are going to express your own application. The idea here is to provide a high-level abstraction called an application that enables you to describe your application in a way that is agnostic to the elements you have as infrastructure and as a deployment target. All information related to the specification itself is available at oam.dev, which will lead you to the GitHub repo where the actual spec lives. The community around OAM is the same community as around KubeVela, so you can ask questions in both directions: more on the specification side, or more on the implementation and how you make it run on top of KubeVela.

The Open Application Model was born in 2019. It was initially an effort by Alibaba and Microsoft; they published, let's say, the first version of OAM. The idea, as I said before, was to get an abstraction that is easier to work with and is abstracted away from the infrastructure, meaning we would like to have an application description that you can deploy on any type of infrastructure and any cloud provider. If you switch between cloud providers, there shouldn't ideally be a need to change your YAML files to adapt to the small details, nuances, annotations, and things like that that are required when you switch between providers.
The focus here is on reusability of components and on making it easier for people to understand what is going on in the system; we will talk about understanding the application status and things like that. This is highly related to the spec itself, and it is highly extensible by design. We will see some examples of how you can extend the specification to tailor it to your needs and build the things you actually want within your company or your use case.

Let's start with the problem we are trying to address. We are here because we love Kubernetes. We believe this is the technology we need to use to run our applications. But sometimes we tend to forget how long it took us to become proficient enough in Kubernetes to run those workloads. The main difficulty we see right now is that Kubernetes is difficult to learn for a lot of people. There is a huge effort in the Kubernetes community to make more tutorials, to make more content, to make it more approachable. But maybe there is also a complementary way: using another type of abstraction at a higher level, so that not everybody is required to go through all those learning steps. The analogy is the same as if you needed to become an engine designer to drive a car; in Kubernetes, sometimes that happens. So this is a way of bypassing that and enabling a lot of new users coming into our infrastructures — users who maybe are not developers, but who would like to use the infrastructure without the requirement of learning a technology like Kubernetes.

We all know that Kubernetes works with a collection of low-level entities, and I created a small map trying to relate the major ones.
The main issue with these entities is that at the end of the day you are the one creating them individually, and you need to build the map in your head of all of them being connected and working together coherently. This is where the errors appear, and this is one of the places where approaches such as the OAM one fit in: they provide a way to automatically create all of those low-level entities without you needing to make sure that you have the proper annotations and have properly labeled everything in between so it all works together.

One of the classic questions we usually ask is: okay, is my app actually running? If we understand an application as a collection of services running in a cluster, we usually need to go through each of the different components, remember to monitor them properly, label them, and find out what is going on. The change here is that with something like OAM, instead of a bottom-up approach — the classic one, where you deploy the low-level instances and either figure out in your head what is going on at the top level or put other tools in place to do that — we go top-down. We have an application that generates a set of components on the cluster, and we can reason about the status of that application. That's the main objective of all of this.

As I mentioned, classically what we do is label everything and try to remember to label everything. Problems tend to appear when we start integrating with other elements. You may have a chart already there that applies a different type of annotation than the one you want to standardize on for your own cluster, things like that. This is something we need to keep in the back of our heads.
So, as I said, OAM takes a top-down approach and makes the application the central entity we reason about in the system. If we want to check whether the application is running, we go to the application CRD. That is OAM, the specification, and what it aims to do. Let's see now what KubeVela is and how it fits into this picture.

KubeVela is basically the Open Application Model runtime for Kubernetes. The specification itself may in the future run on other types of infrastructure or runtimes, anything you can think of, but right now the only runtime is the KubeVela operator, which deploys to Kubernetes. It's a CNCF incubating project; it reached this state just a couple of months ago, which is quite an achievement for a young project.

It provides three main features. It supports multi-tenancy, so you can deploy KubeVela in a multi-tenant environment and it will ensure that your role-based access control works as expected. It is capable of deploying applications to multiple clusters, so you can create a kind of control plane that manages a set of clusters, and applications get deployed into different clusters with different parameters; we will see an example of this in this tutorial. And it is extensible: we want people to write their own components and trait extensions, so you can tailor the use of KubeVela and OAM to a particular use case, which also enables us to integrate with other projects.

With KubeVela in the center, you can see this is the classical operator workflow: you get a CRD, which is called Application; you process it; you go through the different components; you render the low-level entities — the classic deployments, pods, services, et cetera — and you orchestrate the deployment. That is something that is also included.
We will see an example of how you can also create workflows to manage the way you want to deploy your application. And, as with every other operator, it will continue to reconcile the state of the cluster, making sure that whatever is written at your application level gets translated into low-level entities. You have an add-on catalog where the community has provided integrations with different technologies of the Kubernetes ecosystem, and we will see one with the GitOps example.

Okay. To give you a brief idea of how long this has been going on, this is the timeline for both OAM and KubeVela. You can see it started in 2019, it was accepted into the CNCF in mid-2021, and it reached the incubating stage, as I mentioned, a couple of months ago. There is steady adoption of the project by many enterprise users around the world, especially in Asia, and it is getting bigger and more mature with each release.

With all of that said, let's start going into the details of the tutorial, and let's see what an application is. The application, as I said, is going to be our top-level entity; this is the one we work with. Basically, an application is composed of a set of components, each with its own type — we will go into those details — and you can do things like applying traits to modify the behavior of the components. That also lets us start separating the different responsibilities: who is doing what, and who defines what in general. You can have a development team producing a component or an application, and maybe, if you are debugging an issue with that application, you may want to deploy it with a trait that gives you a higher log level, or things like that. We will see that later on.
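To make that structure concrete, here is a minimal sketch of such an application; the image name is hypothetical, and the `env` trait raising the log level stands in for the debugging example just mentioned:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-app
spec:
  components:
    - name: backend                       # each component has a name and a type
      type: webservice                    # one of the predefined component types
      properties:
        image: ghcr.io/example/backend:v1 # hypothetical image
      traits:                             # traits modify the component's behavior
        - type: env                       # raise the log level while debugging
          properties:
            env:
              LOG_LEVEL: debug
```

The development team would own the component definition, while whoever deploys it can attach or remove the trait without touching the component itself.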
From the point of view of components, the important thing is that each component has a type, and you are able to define your own types. By default there is a set of predefined component types that resemble the classic Kubernetes ones, so it is familiar to get used to. `webservice`, for example, you can see as the equivalent of a deployment plus a service: if you set a port with expose set to true, that will automatically generate the service for you.

One of the main features of all of this is that components define properties. This is a good thing, because when you define your own component, you can define the properties according to your use case, or to the semantics within your company, for example. It is a good way to limit the API you expose to the user deploying the application, to limit what they can do with it, and to standardize and make it reusable. As an example, imagine you would like to create a database component, and for that purpose you only want to provide three parameters: one could be the log level of the database; one could be whether you want to trace queries or not; and another could be whether you are working in developer mode or not. Maybe developer mode set to true means I only deploy a single replica with low resources, because we are running integration tests; if I set it to false, I deploy the full database as I would want it in production.

So components allow you to limit what you want to expose, and you define which low-level entities are actually generated — both in components and in traits. Traits, as I said, are a way of modifying or augmenting the functionality of a component.
Classic examples of traits are things like creating secrets, mounting secrets into the containers, creating ingresses, registering your component with observability tools, creating sidecars, exporting logs — anything you can think of that modifies the component, you can do with a trait. And once you have it defined, you can reapply it and reuse it across different components.

Next, we have a concept called a policy, which you can think of as the same thing you would do with traits, but at the application level. If you want to make a configuration change that needs to be applied to all components, instead of copying the same trait into every component, you do it as a policy. Policies enable things such as defining general parameters, and configuration related to multi-cluster deployments also appears as policies.

Finally, the last element of an application is a workflow. Here you can think of a simple workflow engine that lets you define how you want to orchestrate the deployment of an application. As an example, if we are working with a database component: the first step of the workflow might be to deploy the database; the second step could be to preload the schema into the database; next you deploy the final component; and next you send a notification through Slack. All of that can be achieved by means of workflows and workflow steps.

Everything — workflow steps, policies, traits, and components — is configurable. You can make your own and adapt them to your use cases. For that, you write this type of entity: a trait definition, a component definition. The schematic — the template that will be applied — is written in the CUE language.
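A sketch of how policies and workflow steps sit inside an application, using the built-in `apply-once` policy and the `apply-component` and `notification` step types; the component name, image, and Slack webhook URL are assumptions:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: db-app
spec:
  components:
    - name: database
      type: webservice
      properties:
        image: mysql:8.0            # hypothetical database component
  policies:
    - name: keep-manual-changes     # a policy applies to the whole application
      type: apply-once
      properties:
        enable: true
  workflow:
    steps:
      - name: deploy-database       # orchestrate the order of deployment
        type: apply-component
        properties:
          component: database
      - name: notify-team           # send a Slack message when done
        type: notification
        properties:
          slack:
            url:
              value: https://hooks.slack.com/services/EXAMPLE  # hypothetical
            message:
              text: database deployed
```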
In the definition, you specify which parameters you accept — with some context and some help on them — plus the outputs: the entities you are going to generate. Combined with the fact that you can do some coding in that template — you have loops, you have ifs, you have the ability to create simple structures — it is easy to manage all of this and define what you want to generate for your particular trait, component, et cetera.

One nice feature is that sometimes this can replace the need to write your own operator. If you are only writing an operator to limit the exposed properties — for example, to make it easier for a user to use a component or deploy an element into your system — you can use this approach and avoid writing the operator. KubeVela takes care of acting as this meta-operator, if you wish, and applies the trait or the component for you.

With that said, let's go into the hands-on part of the tutorial, and let's start at the very beginning, which is installing KubeVela. For that we will do two things: create a kind cluster, and then install KubeVela on it. All of these steps are in the GitHub repo I pasted before, and we will follow the first document, which is about installing KubeVela. Bear in mind that for the tutorial as a whole we will end up using two terminals, so get one terminal — I'll call it terminal one for now — and that will be the one in which you deploy all of this. I will give you a couple of minutes to get started and deploy the cluster, and then we will continue.

So we are following this particular document on installing KubeVela. You will see that there are actually two alternatives to install it. One is just using a kind cluster, which you are probably already familiar with.
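As a sketch of the definition mechanism described above, this is roughly the shape of a custom component definition with a CUE template, using the database example from before; the definition name, image, and parameters are all hypothetical:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: tiny-db                    # hypothetical custom component type
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    cue:
      template: |
        // parameters are the only API exposed to the user
        parameter: {
          logLevel: *"info" | string
          devMode:  *true | bool
        }
        // output is the low-level entity KubeVela renders
        output: {
          apiVersion: "apps/v1"
          kind:       "Deployment"
          spec: {
            // devMode collapses the database to a single small replica
            if parameter.devMode {replicas: 1}
            if !parameter.devMode {replicas: 3}
            selector: matchLabels: app: context.name
            template: {
              metadata: labels: app: context.name
              spec: containers: [{
                name:  context.name
                image: "postgres:15"
                env: [{name: "LOG_LEVEL", value: parameter.logLevel}]
              }]
            }
          }
        }
```

The `context` struct (component name, namespace, and so on) is injected by KubeVela at render time, so the template never hard-codes names.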
The other one is with VelaD, which is a tool created by the community. We are not using it today, basically because the second scenario — the multi-cluster approach — becomes significantly more difficult due to the way networking is done with VelaD. But it is a tool you can use if you want to spawn, let's say, a starting cluster and use that to then create other clusters; VelaD is good for that, and it also works well in air-gapped environments. Or maybe you want to test KubeVela without installing the operator in an existing cluster: that could be a good approach — have this VelaD or kind cluster, deploy KubeVela there, and point that deployment at the other cluster, so applications get deployed in the other one without interfering with the operators you already have running.

Once everything is ready and the CLI is installed, you should be able to execute `vela comp`, and that will give you the list of component definitions in the system.

What happened? I mean, I did it yesterday; it took a minute or two, but it ended well, so I was hoping connectivity would help us. Let me know if it keeps crashing or is not able to download. If everyone can give me some feedback — a thumbs up — I can continue, but I would like to get this ready. Is everybody having issues downloading the images? Well, it should be doable. I'm also running on ARM, so you should not have any issues; all of this — kind create and so on — should be available for Mac as well. Okay, let's give it one minute, but I will continue, so at least this gets recorded and you can reproduce it later. So while this gets installed, let's continue a little bit.
Basically, you have the Vela CLI, which is a tool kind of equivalent to kubectl. It is also available as a krew plugin, I think, so you may want to check that out. It is a way to interact with the cluster through Vela's abstractions. If you want to see what is going on in the system through standard kubectl — for example, to get the component definitions — the equivalent of `vela comp` is kubectl against the `vela-system` namespace, which is where the definitions are stored initially, getting the component definitions CRD. It's important to mention that you can create your own definitions and deploy them in your own namespace. Definitions are not cluster-level entities; they are namespaced. So if you want to test something out, you can test it in a particular namespace and it won't affect the rest of the cluster.

Moving on to the basic operations: here we start by deploying the hello world, let's say — a basic OAM application. The first thing you do is execute this `vela env init` command, which creates an environment. In fact, this is just a namespace, but if you check its labels, you will see it has been labeled so you can differentiate the environments you want to use from the Vela point of view from the others — but it's just a namespace. Once you do that, you can deploy applications with `vela up`, and the operator takes care of the definition of the application. If we take a look at the application, this is just the simplest application I was able to think of.
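It is probably something like the following sketch — a `webservice` running nginx on port 80 with a `gateway` trait, matching the description that follows; the exact image tag and domain are assumptions:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: first-app
spec:
  components:
    - name: frontend
      type: webservice           # roughly a deployment plus a service
      properties:
        image: nginx:1.23        # assumed tag
        ports:
          - port: 80
            expose: true         # generates the service automatically
      traits:
        - type: gateway          # creates an ingress for the service
          properties:
            domain: localhost
            http:
              "/": 80
```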
You can see that, first of all, we have the CRD for the application. We have components; we are using the `webservice` component, which, as I said before, resembles a deployment. We are going to deploy nginx exposing port 80, and we are going to add a gateway — we will actually create an ingress for that service. Once that is deployed and available, you can reach nginx through this command. Just in case somebody has issues with ports that are already occupied: the first kind cluster has redirections to ports 80 and 443, so if you have something running there already, be aware that you may need to change the port later; for the second cluster we will use 8080 and 8443.

So that got deployed with Vela. If we now go through the standard Kubernetes API and check what is going on, you will see — okay, the same thing you were mentioning before is happening; you will see this once it is able to pull the image. Okay, we will continue whenever the internet wants to collaborate with us; it will get deployed, we will trust Kubernetes. As I said, you can access the same elements through kubectl as usual; it's just that we are using the application. If we get the app, you will see here that it's not healthy, because the image has not been pulled. This healthy status is calculated from the status of the different components linked to the application, so if one of the components fails, the status of the whole application changes immediately, and it is easy to detect whether something is working or not. So yeah, that's the expected one, as you may think.

Now, to delete an application you can use `vela delete` with the name of the application. Every time you deploy an application with Vela, it tells you how to get the logs, a shell into the container, and the endpoints. That won't work right now because the
container is not launched, but anyway, I hope it gets deployed soon. Oh yeah, okay, you are correct — okay, cool. I was not expecting the image to move to another registry, probably. So let's imagine that the image is actually available... yeah, that's true, let's connect it; at least you will be able to see it. Let's try to bypass the backoff. Okay, we'll use the local hotspot, thanks.

So let's continue. If you want to delete the application, you can just execute this `vela delete`, as I was saying before. If you want to check the logs, you can just go here, copy this, and you will get the logs. If you want to check what was created, you can do it by labels. And there is a dry run — this is something that is being improved a lot in the latest revisions by the community. There is even an option for the dry run to analyze how the different traits apply across the components, so if you want to check, or pre-check, let's say, what is going to be generated, you can do it with this dry run.

Oh yeah — because there are some timeouts involved in the application, in the workflow engine, it's telling you basically that it was unable to complete the deployment step in the required time. That's what it is trying to tell you about the app: it said that it is ready but not healthy. Probably on the next run it will recheck itself; if not, we need to take a look at this from the community side, thanks.

So yeah, as I was saying, you can delete the application and that will delete everything. Let's try the dry run example. If you issue the dry run with the same application definition, you can see here what is intended to be created in the cluster, and you will see that everything gets annotated with the application name and the parent component. There is one limitation with component names: the name you give cannot be duplicated; it must
be unique within the same namespace. But everything gets annotated, so if you want to get all resources from Kubernetes with a specific label, you can do it by application name, which is typically the easiest way to get them. And just in case somebody missed it: this is just the dry run — the same command, changed to dry run.

Okay, let's continue. We have gone through checking the status and so on; let's move now to an example with at least two components. That's the second example, the complex app, in the tutorial. Here I actually put the sample that you asked for: getting all the components in Kubernetes with a particular label. This is the spec output from the next exercise. Let's move on — this is getting deployed and seems to be working. This is another view of what's going on in the cluster, by component and by application. This is also a nice way of looking at applications; they will also be divided by clusters and things like that, so you can check them out, and we can go into the details of what is going on at the workflow level. There's a typo — okay, there may be another typo I missed.

So let's take a look at the application we are trying to deploy. This is just the classic hello world: WordPress plus a MySQL component. Here you can see that, apart from exposing ports and things like that, you can set up environment variables. You can use a lot more traits that are predefined in the system, such as `service-binding`, for example, to mount secrets or config maps as environment variables. You have storage-related traits to set up the storage for you. In particular, take this one as an example — I'm not telling everyone to store passwords in plaintext on GitHub; it's a way of creating example data for config maps and secrets directly through the trait.

So let's continue with the next one, which is dependencies. Dependencies are a way of saying
how you want to orchestrate the deployment of the application, basically to try to mitigate the container restarts that happen because other dependent services downstream are not yet available when we first deploy the application. The first option is embedded within the component specification: you can use `dependsOn`, which sits at the same level as the component name, and in that way you can link dependencies between components. It's probably not the best way to do it, but it's a way to start. As you can see here, it just says that WordPress depends on MySQL, and if we deploy this application and check its status, you will at some point see the workflow stopped because it's waiting for the other component to be available. Right now everything is preloaded.

We'll see a bit more information about this with VelaUX, which is coming right now. VelaUX is a community add-on that gives you a nice graphical interface to work with KubeVela; you may want to offer it if you are building your own developer platform. To install VelaUX, you just go to the next item on the list. Related to add-ons: there is a catalog of add-ons, which you can think of as a marketplace kind of thing, where you can submit your own add-ons, and you can also use your own catalogs. They enable different integrations from KubeVela to other places. VelaUX is one add-on, but there are add-ons for things like Flux, Grafana, Backstage — things that are already integrated as components or traits within the KubeVela ecosystem.

Let's give it some time to deploy everything. For the second one, the default password is here and the default username is there also. In this way you can go through the environments — the ones we just created for this scenario — and the pipelines, and
applications will be deployed here. So let's deploy something with a workflow, so we can see it on VelaUX once it's up. Application workflows, as I mentioned before, are just an easy way to orchestrate the deployment of applications. They are composed of workflow step definitions, and there are many of them available from the community. To do that, we deploy this workflows application. You can see here the application that is getting deployed, and this is the way the information is shown: you have information about the workflow that was created and how it is being orchestrated. In this case, what we are trying to simulate is an application being registered with a discoverability service. The workflow itself, if we take a look at it: we have WordPress as before, we have MySQL as before, and we have another component that just echoes back what it receives as an HTTP request. The workflow associated with this instance of WordPress is: first of all, deploy this fake discoverability service; then deploy MySQL; then deploy WordPress; after that, send a notification to a webhook, which is going to be our discoverability service; and finally, print a message in the status.

Statuses in applications are also configurable, so whenever you write your own trait or component, you can specify what you want to see in the status. For example, if you expose something as an ingress, the status of that application can point you to the URL you need to access to reach that deployment. You can also get the Kubernetes-view equivalent — something that was asked before — where you can see graphically what is being generated and the different elements associated with the application, and you can also get the logs from the application.
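A sketch of what that application could look like, combining the `dependsOn` mechanism from the previous example with explicit `apply-component` steps and a webhook call; the component names, images, webhook URL, and the `webhook` step type are assumptions based on the community's built-in steps:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: wordpress-workflow
spec:
  components:
    - name: discoverability        # fake registry that echoes HTTP requests
      type: webservice
      properties:
        image: ealen/echo-server   # hypothetical echo image
        ports:
          - port: 80
            expose: true
    - name: mysql
      type: webservice
      properties:
        image: mysql:8.0
    - name: wordpress
      type: webservice
      dependsOn:
        - mysql                    # component-level dependency, as before
      properties:
        image: wordpress:6
  workflow:
    steps:
      - name: deploy-registry
        type: apply-component
        properties:
          component: discoverability
      - name: deploy-mysql
        type: apply-component
        properties:
          component: mysql
      - name: deploy-wordpress
        type: apply-component
        properties:
          component: wordpress
      - name: announce             # POST the application to the registry
        type: webhook
        properties:
          url:
            value: http://discoverability.default.svc/register  # hypothetical
```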
Here, if you take a look at this, this is the discoverability service. What it shows is that, by default, if you don't specify anything in the notification, the whole application definition is sent to it, so you can see here: hey, this is what was received when it was deployed. So take a look at workflows; as I said before, they are quite useful. They are not intended to become a full-blown workflow engine, but for smaller things they are a good way to start.

Next — we are done with application workflows — let's enter the multi-cluster area, which is also quite interesting. With the approach we have taken so far, we have KubeVela deployed in a cluster, and applications create elements in that same cluster; we are just using one cluster. This method instead moves you into the position of having this cluster, cluster one, as a control plane for other clusters, and there are many different ways in which you can join clusters to KubeVela. From the point of view of the applications, what you will find are policies that allow you to define the target cluster for a particular application, component, et cetera.
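In the application, that targeting is expressed with the `topology` policy; a minimal sketch, where the component image is an assumption and the cluster name matches the one we register later in this tutorial:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: multi-cluster-app
spec:
  components:
    - name: frontend
      type: webservice
      properties:
        image: nginx:1.23          # hypothetical image
  policies:
    - name: deploy-to-managed
      type: topology               # target cluster(s) for this application
      properties:
        clusters: ["managed"]      # cluster name as registered with vela
```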
If you want more specific details, check out the topology policy; you will find a lot of samples there for things such as: I want to deploy this application in this cluster with this configuration applied to it, but in that other cluster with another configuration. You can do that by means of the topology policy. As for how it works, there are two ways. There is a connector with Open Cluster Management, so you can use that interface as a pull mechanism: you set up the resources in one place, and the target cluster pulls and executes them. Or you can do it the other way around, which is the simplest way and the one we are going to showcase today: the control plane actively communicates with the other cluster and pushes the applications there.

To do that, let's go to the next document, the next step, which is about multi-clusters, and I will switch to it because we will need to set some things up. Before we go through all of it, let's start with the installation steps. As you can see here, it says "on terminal 2", so be aware that you are going to be working with two terminals, and there will be two kubeconfig files active in the different terminals. I labeled this terminal 1, and this will become terminal 2. While this is happening: as I said before, what we are going to do is create a cluster that we will register with the name "managed". What we intend is to have a basic application that gets deployed into the other cluster. What will happen is that the application entities, the application CRDs, will appear on the control plane, while the pods and deployments will appear in the other cluster.

It's taking some time, so to continue while this is working, let me remind you where we are and what we are talking about. The major change is that we are going to start using applications, following a top-down approach instead of the classical bottom-up one, which enables us to bring more users onto our infrastructure. This is sometimes an exercise in taking off our "we are comfortable working with Kubernetes" hat and thinking of other people, for whom learning Kubernetes should not need to be a requirement to do their jobs efficiently. For example, we are seeing more and more data science teams with infrastructure needs: they would like to run Jupyter plus TensorFlow plus some other machine learning tooling on top of the infrastructure we manage and provide. With this type of approach we enable them to go to a catalog and deploy a Jupyter-plus-TensorFlow application; we take care of how it runs in Kubernetes and can monitor it with the standard tools, but the final user, the one deploying the application, doesn't need to go through all of that.

So that's a good question, and it is one of the classical questions when we start talking about this. The issue with Helm charts is twofold: first, managing charts and creating charts becomes quite complex quite easily once they start getting bigger; and second, with charts we still go through the need of understanding the whole Kubernetes environment to create the Helm chart and deploy it. The difference here is that we are one layer on top. You can argue that you could create a Helm chart that deploys this application, because at the end of the day it is just another CRD, but the way this is intended to be used is as an abstraction at a higher level, instead of working with the classical low-level entities from Kubernetes. It's the same as when people ask about Kustomize: Kustomize is a nice tool to create your YAML files, but from an outsider's point of view
it is quite difficult to get into, because it is intended to be used, or has classically been learned, as you learn Kubernetes. So I think the entry barrier here is whether you assume the people deploying applications are proficient in Kubernetes or not. That is basically where I think this type of developer platform actually provides a benefit: somebody doesn't need to learn the whole stack to be able to deploy applications. I don't know if that makes sense for you.

Yeah, sure, I understand your point. In the case of the ingress, the difference between having a classical Ingress and having, let's say, a trait is the moment you start abstracting infrastructure details away from the user. For example, an ingress trait could be as simple as saying: this is the ingress, this is the port, this is the path, this is the rewrite path, and the domain. The operator that takes care of that ingress could be the one actually saying, hey, I'm deploying in this particular cloud provider, and that Ingress needs these annotations; whereas with the standard approach, you as a developer sometimes need to provide "this is the Ingress for AWS, this is the Ingress for Google". That complexity, coming from the cloud provider, gets removed from the developer's plate. If the platform team is the one tweaking or configuring that ingress trait, they can be the ones saying: hey, that ingress trait in this cluster gets translated into this. That's where I see an advantage of this type of abstraction: you are not putting cloud particularities into the application, so it remains cloud agnostic, unless you deliberately use components that make it not cloud agnostic.

Sure, I think it varies company by company, right, but you're right, this goes into the movement of developer platforms and developer-experience groups, or whatever a given company calls them. The thing is, it depends on the team at the end of the day. There are teams that are quite proficient with Kubernetes, work well with Helm charts and low-level entities, have all the knowledge, and work perfectly day to day. The problem comes when you want to add a new member to that team and need to get them to the same level as everyone else. Giving templates to developers is something we all do, but at the end of the day this is a method to make it simpler for them, to maybe just say: hey, I create a container for my application and I'm able to deploy it, without getting into all the particularities. That's where I see the advantage. We will continue a little bit; I'm happy to answer any question now, after the talk, or tomorrow.

Okay, so we were working with the multi-cluster approach. What we are doing here in terminal 2 is creating and extracting the kubeconfig file for this managed cluster, because we need to modify it a little bit so that networking works correctly from the other kind cluster. So let's get the IP address associated with this particular container; you will need to modify the configuration file to point to it. Moving forward: since the default context for the kubeconfig has changed because we have installed a new cluster, we need to go to terminal 1 and make sure we are still using the kubeconfig file that gives us access to the previous cluster. To do that, you can just get the kind kubeconfig and export it; in my particular configuration you can see here, on the right, the active context and the default namespace. Once you have verified that, the only thing that you need to do is
to join the cluster. Joining means that you provide the kubeconfig file to this control plane, and you will then be able to manage the cluster from there. You should see something similar to this output, with the IP that you get, and if you issue the cluster list command you will see that you are now able to access another cluster through a certificate, the one associated with the kubeconfig file. After that, let's deploy the simple nginx application and check what is happening. I think I forgot one step; let's remove the other application, just to avoid noise. So this is the remote deployment. If we take a look at the other cluster, you can see there is now a KubeVela namespace that was not there before. If you list the pods in the second cluster, the target cluster, you will see the nginx pod running, and you will see a deployment, but you will not see an application here, because the application actually lives in the other place. So if you change or modify the application on the control-plane cluster, it will modify the deployment on the target cluster. If you have any questions about it, you can also check through the UI if you want. It's correct; I was on the other cluster; this is on the control plane, and as you can see here, it is just a deployment with a topology policy applied, targeting a cluster that you have registered under a name. You can have as many clusters as you want; as I said, if you want to work on this seriously, check out OCM, because you will probably like to go that way.

Having reached that step, the next thing is GitOps and how you can enable KubeVela to work in that type of environment. For that, in terminal 2, we will use the Flux CD addon. Flux CD provides us with two things; I will go through the slides, because otherwise I will forget. Basically, it gives us the ability to create these Kustomization entities, which are the standard CRDs that Flux uses to pull things
from Git, plus it enables us to deploy Helm charts as components of our application. So through the Flux components you can do things such as the example here: imagine your application relies on Redis or any other component that you consume from Helm; you can do that by integrating Flux first and then creating a Helm component. Just be aware that this type of integration is still a work in progress, so you may not be able to translate or apply all the labels that you would like from the application down to the final elements deployed from Helm. But it is also a nice way to start with your application: if you already have your components as Helm charts, you may just want to take one, see what the effort of migrating one application would be, and rely on the existing components, without being required to migrate everything at first.

Okay, so in the example I will show you today, we will create an application that has a Kustomize component, we will specify the target repo, and that will create an entity; that CRD will get processed by Flux, Flux will go to the repo, find another application there, and that application will get deployed. This is just a hello world of Flux and OAM. So once this is ready, let's see if it's finished, you will be able to deploy this GitOps application. When it gets deployed, you will actually see this hello-gitops application deploy a hello-app application, which is the one that lives in the same repo. You go here to the same GitHub repo: this is the hello-gitops application that we have deployed in the control-plane cluster. As I told you before, it created a Kustomize component in this target namespace; it is pulling the very same repo; it listens for changes every 30 seconds, just for the sake of the example; and you can configure the branches and everything you are used to configuring in Flux. In this case, what it will deploy is this hello-app application, which is just another hello world example. So this ties everything together and allows you to integrate with the GitOps environment.

Okay, so just to finish up, let me mention again that there is an addon catalog: plenty of addons are already available from the community, and you can use them, or integrate your own component there. It is also important to mention that you can deploy KubeVela in multi-tenant environments. What happens if you start KubeVela with these parameters is that every time an application is submitted to the cluster, it gets labeled with the identity of the account submitting it, and when the operator starts the deployment phase of the components that need to be created in Kubernetes, it impersonates that identity, so that every role you have configured for that account is enforced. This allows you to work with multi-tenant environments: the roles and bindings you have configured will be enforced by default. And finally, there has been a huge effort in the community on reviewing how well KubeVela scales, to which level and with which resources you need to run it at scale, and I would just like to leave this as a reference: if you are having issues deploying this and want to tweak and configure the operator in detail, there is also a troubleshooting guide on how to do it and how to configure your cluster for that.

For further information, check out the OAM specification and the kubevela.io web page. There is a weekly meeting, both in English and Chinese, on alternate weeks, and feel free to attend and contribute as you want. There is a KubeVela channel on the official CNCF Slack, and you can also check the KubeVela GitHub repo. And if you wish to learn more about Natif, go to our web page. With that, I will leave some time for questions. I hope that you learned something and that it was
not too boring, hopefully. So thanks, everyone, for attending.

Yes, I mean, I showed what you can achieve with all of this, but if your starting point is "I only have Flux at the very beginning and I want to manage everything, all the installations, with it", you are correct: you can have Flux and say, hey, these are the operators that I need in my cluster, have them installed as any other prerequisite, and then these are the applications, and it just happens that those applications require this operator. You are perfectly right; I showed a perhaps convoluted sample, but just to showcase what you can do.

Okay, cool. So there are two questions involved, I think, so let me see if I got both. One is about the naming of components. That was a community decision, because from the point of view of the spec you would like to be platform agnostic. Even so, imagine that tomorrow there is a version of Kubernetes that has replaced deployments with something else; the web service component would enable you to continue working without entering into that detail. It's just a way of abstracting; there could be better names, I agree with you, but anyway, those are the ones that exist. In case you want to deploy your own CRD, there are several approaches. One, which is not the recommended way, let's say, is just to embed the Kubernetes entities in a component; you can do that. Apart from referencing component types, there is a component type that lets you embed a raw Kubernetes resource directly, if you want to do it fast and that's it. If you want to offer this to users, the usual way would be to wrap it in a component, and that component will let you expose the different parameters that you want to give to your users. So that would be the way: either a component or a trait, depending on what the CRD does in your application, how it relates to it, and how you think it should be approached. Sometimes people can argue, and you can argue both ways, that something could be a trait or a component, so it depends. But basically you have those two approaches: either embed it directly, which is, let's say, not the fancy, pretty, OAM way, but works, or create your own components that wrap the CRD.

So there was a discussion on the community side of whether you define the spec first and make the implementation afterwards, or work on the implementation and then port it back to the spec. The decision was to be flexible and to be cautious about adding something to the spec without knowing how well it fits actual use cases. If you take a look at the specification, workflows do not appear in the spec for now, but they will, because it has been checked, let's say, or proven that people actually like to use them; they have been validated, so they are going to be ported back to the spec. The idea is that OAM as a specification defines the minimum: implementers of the specification should at least provide those constructs. Runtimes, such as the one we are using right now, KubeVela, can then test ideas without compromising the whole discussion on the spec side, because sometimes it's easy to say "hey, this is a standard now" and then a couple of months later discover it was not such a good idea. Given that, the spec should move at a different pace; the decision taken in the community was: consider the spec the minimum that you need to implement, but allow the runtime to provide more, because we want to test out ideas. Workflows are an example of that: there was some discussion around where they fit, whether they are embedded in the application or whether applications are actually able to manage other applications. There is a growing effort in that direction, and you will probably see, apart from applications, workflows as a top-level entity also coming into the spec in a later version. But the whole approach from the community side was: let's build both at the same time, but let's not make the error of writing something in stone in the spec if we have not tested it.

I think versioning works the same as with deployments and the like. From the point of view of Kubernetes, the application is just another CRD, so it gets revisions on modification; you can roll back between revisions, and you can configure KubeVela to control how far back you can roll and how much history you would like to store. So it's the same management you would have otherwise. If you have any further questions, just come by here, or later to the booth, and I will be happy to answer and talk about anything you want. The booth is K27; it's on the farther side from here. Just keep walking, past all the sponsor booths, and if you go into the next building you will find, right in the middle, where the CNCF booths are, with all the projects that are part of the CNCF. Some of them are staffed every day, some only at certain times, so you need to check the schedule, but there are a lot of people there. Thanks again.
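As an appendix to the Q&A above: the quick-and-dirty embedding approach for a raw Kubernetes resource could look roughly like the sketch below, assuming KubeVela's `k8s-objects` component type. The ConfigMap is just an illustrative stand-in for your own CRD instance:

```yaml
# Hypothetical sketch: embedding a raw Kubernetes object directly,
# the "not the fancy OAM way" approach discussed in the Q&A.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: raw-objects-demo
spec:
  components:
    - name: my-raw-resource
      type: k8s-objects
      properties:
        objects:
          # Any Kubernetes manifest can go here, including a CRD instance;
          # a ConfigMap is used purely as a placeholder.
          - apiVersion: v1
            kind: ConfigMap
            metadata:
              name: demo-config
            data:
              greeting: hello
```

The recommended alternative, wrapping the CRD in a custom ComponentDefinition or TraitDefinition that exposes only the parameters your users need, is what the speaker describes as the proper OAM way.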