Hello everyone, and welcome, first of all, to my home office here, second to KubeCon Virtual North America, and last but not least to our session today about navigating the app delivery landscape while solving everyday problems, which I'll do together with Lei (Harry) from Alibaba. My name is Alois. I work at Dynatrace, where I'm responsible for open source.

Okay, so let's get started. This session is done by the App Delivery SIG. As you might know, there are a number of SIGs in the CNCF, and the App Delivery SIG deals with pretty much everything that's required to deliver applications in a cloud native way. You can engage with us in various ways: you can go to the GitHub repository, join our bi-weekly meetings, engage with us on Slack or the mailing list, and read more in our charter about what we're doing and where you might want to contribute.

When we look at the CNCF landscape, we see that there is a lot on it. What started out pretty small has grown really big. Some people find other words for it, but there is a lot on this landscape right now, and it might be, or most likely is, pretty overwhelming when you're looking at it for the very first time. From an application delivery perspective, you want to figure out which tools to pick and how to use them, and it almost feels like a puzzle, which led to people actually creating a puzzle out of the CNCF landscape: a 1,000-piece puzzle you can assemble. Especially for a newcomer, it can really feel like that.

From an application delivery perspective, we can scale it down a bit, and if we restrict it to CNCF projects only, there is still a number of projects: some in the application definition and image build area, some in continuous integration, and some in scheduling and orchestration. There's Helm, obviously Kubernetes, the Operator Framework, Argo, and quite a number of sandbox projects like Keptn, Flux, Porter, KUDO, and a number of others.

There's also the CNCF tech radar. In June this year, the CNCF end user community did a tech radar on continuous delivery, which is a subset of application delivery, and categorized tools into three categories: Adopt, meaning people are using it; Trial, meaning they're in the stage of rolling it out and experimenting; and Assess, meaning they are looking into it. So I also encourage you to look at the CNCF tech radar.

Within the App Delivery SIG, we also created a reference model to categorize tools by what they do and to break this app delivery space down into smaller chunks. On the top left you see application definition: pretty much everything you need to define your application and your application model. Then there's application packaging: once you've defined the application, you need to package up all of the components you need and work with them. Then we have to roll them out and orchestrate them; this is the whole area of lifecycle management, rollout strategies, traffic management, and so forth. And then we have the workload instance: healing, scale-out, sharding, lifecycle management, and so on. So this is just another way to look at it.

What we decided in one of the recent SIG meetings is that we should showcase how you can use different tools for the different areas of the application delivery space. This is how we ended up with Project PodTato Head.
PodTato Head is a very simple cloud-native project, probably the simplest cloud-native project you can build. It consists of just one single service, but it allows you to explore different delivery tools and how you can use them, and, last but not least, it's kind of a funny name for a project.

So let's start super, super simple, with just two manifests. The easiest way to write an application for Kubernetes is to write a manifest file that contains a service and a deployment, and this is exactly what we will look at right now. In the PodTato Head project, you can find a folder called manifest, which contains a manifest with a namespace, the service, and the deployment over here. The only thing we need to do right now is apply the manifest. I'm exposing it as a load balancer, so it might take a while; I'm just trying to get the IP while this is being created. It's taking a bit long. Here we go. Now we can switch over, and we have version 0.1.0 running. Now we go back into the file, modify it, apply it again, it updates, and we see version 0.1.1. I think this doesn't come as a surprise to anybody. Let's just get rid of this one here.

While this is easy and straightforward, it doesn't really provide a lot of flexibility, right? We always have to go into the manifest files and mess around with all the details, and this is not what we want to do. So the next step is to use a Helm template. The PodTato Head project comes with a very elaborate example of a Helm deployment, just to showcase what we're trying to do here, and thanks to Matt Farina for providing it. What we're doing here is basically replacing the namespace by picking one, and the images we also replace through the values file; in the values file we specify our namespace and the image tag. If we now get back to our example and move over to the chart, we can see that for our hello-server we have pretty much everything in there: the templates, the chart file, and the values file, which we'll need in a second because this is where we set the version, and again we will be deploying version 0.1.1. From the chart directory we deploy our Helm release, called kubecon. We just need to expose the service, and in the browser we see version 0.1.0. Then we upgrade to version 0.1.1 with a helm upgrade, expose the service again, and now we see that it's running version 0.1.1. And as we no longer need it, we use helm uninstall to remove our release again.

Not a lot of rocket science here, and it's already great that we now only need to change the values file. But what if we want our changes to be applied automatically? Right now we always have to run kubectl apply or helm upgrade. Is there an easier way to do this? There is: we can use a tool called Flux. Flux is a GitOps operator, which means it takes whatever is in a Git repository and applies it to the cluster for us. Flux can do even more: it can do the same thing with Helm, and it can also point at a container registry and check for updated container images that we want to use. So what we're doing right now is configuring Flux to do for us what we just did by hand. All right, let's set up Flux.
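To recap the manual steps before we hand them over to Flux, here is a minimal sketch of the commands behind this part of the demo; the paths, namespace, release name, and values key are illustrative rather than the exact ones in the podtato-head repository:

# Plain manifests: apply, look up the LoadBalancer IP, edit the image tag, apply again
kubectl apply -f delivery/manifest/manifest.yaml
kubectl get service hello-server -n podtato          # grab the external IP

# Helm: install a release, upgrade it to a new image tag, remove it again
# (the image.tag values key is assumed here; check the chart's values.yaml)
helm install kubecon ./delivery/chart/hello-server --set image.tag=v0.1.0
helm upgrade kubecon ./delivery/chart/hello-server --set image.tag=v0.1.1
helm uninstall kubecon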
In order to make this work, we have to fetch the Flux deployment, because there are a couple of things we want to modify in it. Let's get rid of this one and go into the flux directory. In the Flux deployment we have to tell it the branch and the GitHub user and repository (we just use the plain manifests here; we could have used the Helm version as well, but it doesn't really matter), the Git label to be used, and the polling interval, which we're turning down to one minute; usually you'd want to keep it at the default. With that, we have configured Flux, and the only thing left to do is, again, wait for the IP, and now also for this one-minute polling interval. Give it a bit more time. Interestingly, for Flux to work you also have to provide a deploy key, which I have already done here; this is what the GitHub repository uses, and you can see it's using my local fork of the repository. Wonderful.

So in this case we point it at our hello-server and see its version. Now I'm doing something that I should not be doing, but for demo purposes I'm allowed to: editing my deployment file right in GitHub, directly. Obviously, what you should be doing is not editing this file directly; you should have a PR workflow behind it, point Flux at the result, and handle it that way. Here is the updated container image to be used, and we have set our polling interval to one minute, so pretty soon we should see the updated version of our deployment here. Just double-checking: we set it to 0.1.2. Give it a bit more time. You see, this is really convenient, because now we only have to commit things to the GitHub repository, and that's also why GitOps is becoming so popular: you don't really have to know a lot about how the application gets rolled out, it just happens, it's just there, and everything is fine. Here we go, here we have the update. Just let me quickly fetch my Flux deployment again and set the interval back to 20 minutes, because I don't want to run into any rate limiting, and also remove this namespace here, because we don't need it anymore. Okay, so this worked great. Now let's move on.

Flux is really great because it's this lean tool that just does what it's supposed to do, and it works great. But what if you want a more interactive way of looking at things? This is where Argo CD comes in. As you can see, Argo CD has a very nice UI, and you can also do GitOps with Argo CD. So this is what we are going to do right now. We create a new application and call it hello-server-argo. In this very nice UI we tell it to auto-create the namespace, pick the revision we want, and specify the path, in our case again delivery/manifest; we could do the same with the Helm chart, and we will in another example, but for the time being this is totally fine. The target namespace is demo-space-argo. Okay, now let's sync it. We see that all of these resources are currently out of sync, and now we have this interactive view of how the application gets created. The service is still being created, but we see all the resources getting created individually, in an honestly very appealing way, as I feel it; I hope you like it too. Let's give it some time to create the service, and then open it in the browser.
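For reference, before we continue with Argo CD: the Flux configuration described above comes down to a handful of arguments on the Flux daemon's Deployment plus an SSH deploy key on the repository. A minimal sketch with Flux v1-style flags (the repository URL, branch, and intervals are illustrative):

# container args on the flux Deployment
- --git-url=git@github.com:<your-fork>/podtato-head.git
- --git-branch=main
- --git-path=delivery/manifest        # plain manifests; a Helm chart path would work too
- --git-label=flux-sync               # label Flux uses to mark its sync point
- --git-poll-interval=1m              # demo value; the default of 5m is usually fine
- --sync-interval=1m

# Flux generates its own SSH key; print it and add it as a deploy key on the repo
fluxctl identity --k8s-fwd-ns flux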
And we have version 0.1.2, which we set before, because we're using the same file as before. Now let's switch it back to 0.1.0 and commit the changes. Perfect. What we will now see in Argo CD, if we refresh it (obviously we could also set Argo CD to refresh automatically), is that some resources are out of sync. It allows us to re-sync, and we see that it created a new pod for the new version; when we refresh the browser, we see it's back to version 0.1.0. A very nice thing here is that we can simply go in and delete the application and all of its dependent resources, which is pretty neat.

So that's great; what's next? So far we have really worked from a manifest, parameterizing it and using different GitOps approaches, from Flux, which is really lean, to Argo CD, which has this visually appealing UI around it. But what we did was more or less apply everything to the cluster, and we didn't really care whether that was a good idea or not. We didn't do any canary releases, blue-green releases or anything. This is where Argo Rollouts comes into play. With Argo Rollouts, we replace our Deployment with a special CRD called a Rollout, which looks pretty much like a Deployment except that it has this strategy section that defines how the application should be rolled out. It again has this visually appealing view, and it comes with a nice command line tool as well.

So let's switch over and create a new application in Argo CD. This time we pick the delivery/rollout path, the local cluster, and the demo-space-argo namespace, and because this is a Helm chart, it finds the values file, our version tags and everything in there. Now we click create and sync. We're creating five replicas this time, simply because that's a bit more helpful when we want to show canary releases. Okay, here we have our service; it's not serving yet, not a big surprise. The app is working again. What we do now is go into the Rollout (again, something you should usually not do by editing it inline) and switch it to version 0.1.2. Argo Rollouts comes with a very handy command line tool: kubectl argo rollouts get rollout, where we specify the namespace demo-space-argo and watch it interactively. Here we go; now we see interactively how this rollout is progressing. We go back to Argo CD, refresh and sync, and we see that a new pod gets created and the status has changed to paused, which we can also see over here. The reason why it is paused is easy to find in the Rollout: the way this Rollout is defined, the first pause step doesn't have a fixed duration, so instead of waiting for a fixed time it waits for us to manually promote it. This is what we do now with kubectl argo rollouts promote, again in the demo-space-argo namespace. Perfect. Now we can see the release switching over, and we can see the same over here as well; the release is progressing really nicely. Good. We're not going to wait for this to finish, but rather delete it.

So now there's one more thing to do. We have automated quite a lot: we have used rollout strategies, we have shipped different versions, and we have switched versions through GitHub.
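To make the pause behaviour concrete, here is a minimal sketch of the strategy section of such a Rollout; the step weights are illustrative, and only the namespace and the idea of a duration-less pause come from the demo:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: hello-server
spec:
  replicas: 5
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause: {}              # no duration: the rollout waits for a manual promote
        - setWeight: 60
        - pause: {duration: 30s}
  # selector and pod template look just like in a regular Deployment

The kubectl plugin then lets you watch the rollout and promote it past the manual pause:

kubectl argo rollouts get rollout hello-server -n demo-space-argo --watch
kubectl argo rollouts promote hello-server -n demo-space-argo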
So what if you want to do this in a multi-stage environment? This is where Keptn comes into play. Keptn provides a control plane on top of what you have already seen so far: it supports GitOps, it can create stages, and it can build your entire environment for you. Keptn uses a file called a shipyard file, which is what you see over here. The shipyard file specifies which stages you want, the deployment strategy, and how releases should be promoted between the stages. Most of this is set to automatic here, with one exception for production.

So we go into the keptn directory. Here I can just create the project; I typed the wrong command first, but what we actually want is keptn create project. What happened now is that when we switch over to the Keptn UI and refresh, we have this project created with exactly these three stages in it. As of right now this isn't really doing a lot; we have to do one more thing, which is linking it to an upstream Git repository. Keptn fully supports GitOps, and what I handed over here (there should be one more, here we go, that's the right one) is an empty GitHub repo, in which we see that Keptn has started to create a branch for each stage and is storing its own files, like the shipyard file, in there as well. So far we have created the project and also created all the GitOps content out of the box.

The next thing we need to do is onboard a service into this project. For onboarding the service, you just pass in your values, which in this case don't need to contain a lot, and your templates, again for the deployment and the service. That's pretty much everything you need; a lot of the chart creation happens inside Keptn, and this is obviously still ongoing. Now we already see our hello service here, which has been successfully created. If we switch back to Git and reload, we also see that there have been pushes to the individual stages, because now all the Helm charts were created and started to be executed. We also see that there is automatic Istio configuration in here, because we're using blue-green deployments.

Now it is time to actually deploy the service, triggering the first deployment, and we are deploying version 0.1.1. We see that there is a configuration change, so Keptn received a new request to deploy something, and the environment will get updated in a minute; it just takes a while to rewrite all the Helm charts with the right values files and start deploying things. Here we already see that the deployment finished, and we can jump right there. You see it has everything configured out of the box: it's running version 0.1.1, it even has the stages configured, and as part of the environment view we also see where that service currently is as it propagates through the stages. Keptn also comes with automated test execution and an evaluation of SLIs and SLOs; as we haven't specified any for our demo purposes, it automatically passes them because there are no tests. We now see this being propagated to our staging environment, and in a second it should be deployed in staging as well.
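For reference, a shipyard file and the CLI calls behind this part of the demo look roughly like the following; this is sketched against the Keptn 0.7-era CLI, and the project, service, stage strategies, and image names are illustrative:

# shipyard.yaml: three stages, automatic promotion except into production
stages:
  - name: "dev"
    deployment_strategy: "direct"
    test_strategy: "functional"
  - name: "staging"
    deployment_strategy: "blue_green_service"
    test_strategy: "performance"
  - name: "production"
    deployment_strategy: "blue_green_service"
    approval_strategy:
      pass: "manual"

# create the project, onboard the service with its Helm templates, deploy a version
keptn create project podtato-head --shipyard=shipyard.yaml
keptn onboard service helloservice --project=podtato-head --chart=./helm-chart
keptn send event new-artifact --project=podtato-head --service=helloservice \
  --image=docker.io/example/hello-server --tag=0.1.1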
The whole idea is that you can also onboard SLI and SLO files, as I mentioned, and test files, so you can for example just give it a JMeter test and it will automatically execute that JMeter test as part of the deployment. Still waiting for this one here to deploy. Now we're done; here we go, same thing here. And now what we can see is that an approval has been triggered. If you remember, we specified in the shipyard file that for a production deployment we want a manual approval, and that's why we get this approval, which you also see over here. Think of it like having GitOps in place: these are obviously two branches, and this branch is one commit behind the other one; as we accept the approval, we push it through. And the last thing we do is trigger another deployment of the service, this time with version 0.1.2; we see the new configuration change come in (I'd like this to be a bit quicker) and see that the new version has already been deployed in dev. Here we can see how it propagates through the individual stages. Well, that's it for Keptn, and now I'd like to hand over to Harry.

Thank you, Alois. It's a really awesome demonstration of the application delivery ecosystem using this kind of unified demo application. I will continue from Alois's talk and fill in more interesting use cases from the ecosystem.

In many cases you actually want to deliver and manage an application while hiding all of its details. The first tool you can use for this is an operator: this time we use an operator to package the application rather than just to manage it. The reason you can do this is that the Operator SDK gives you a way to convert an existing Helm chart into a CRD and an operator. You can then define a custom resource (CR) to run the application, and the spec of this custom resource is generated exactly from the values of your Helm chart. This is basically how an operator enables you to package any software into a software distribution. To do this, you just need to let the Operator SDK generate the CRD and the operator for you; if you run it, for example, like this, the Operator SDK handles everything.

Then we can deliver the application by using a custom resource. Let's see the example here. This is the custom resource we have now, and it is exactly a distribution of our software, because the user can only configure the given parameters here; the user cannot change the Helm templates, because it's now a CRD. So we can apply this CR to our cluster, just like delivering the application. As soon as we do this, the hello-server operator will create the application for us. If we check the cluster, for example with kubectl get deploy, we see the hello-server is running. If we check the services, the hello-server service is already there; it just has to be exposed. If we use minikube to expose the hello-server service, we now have the running application right here. So this is basically how you package your software into an operator and distribute it to a cluster as a CRD; a minimal sketch of this flow follows below.
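As a reference for the operator flow just described, here is a minimal sketch assuming a Helm-based operator scaffolded with the Operator SDK's helm plugin; the domain, group, kind, and spec fields below are illustrative and simply mirror whatever is in the chart's values.yaml:

# scaffold a Helm-based operator (CRD + controller) from an existing chart
operator-sdk init --plugins=helm --domain=example.com --helm-chart=./helm-chart/hello-server

# a custom resource for it: the spec is just the chart's values
apiVersion: charts.example.com/v1alpha1
kind: HelloServer
metadata:
  name: hello-server-sample
spec:
  replicaCount: 1
  image:
    repository: docker.io/example/hello-server
    tag: "0.1.1"

# deliver the app by applying the CR, then check and expose it
kubectl apply -f hello-server-cr.yaml
kubectl get deploy
minikube service hello-server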
So far we have introduced a bunch of independent tools to solve your problems. But what if what I want is a full platform, something like Heroku, on top of Kubernetes, so that I can have a really nice user experience, but on Kubernetes? In that case, let me introduce the KubeVela project, which is very interesting because it's an extensible application platform built on Kubernetes. Essentially, if you are using KubeVela, what you need to define is a very simple YAML file like this, called an Appfile, something like Docker Compose: we define the application here and the operational configurations like these. The system then translates this YAML file into the Open Application Model and generates the Kubernetes resources for you, including the Deployment, Service, ingress, controller revision, certificate, and the scaler object, because you defined an autoscaler here and have a route configured here. That's all; that's KubeVela. You can see it's essentially an upper layer, an application abstraction on top of Kubernetes.

But what's more important is that KubeVela is highly extensible. It's not just a simple abstraction; it's a highly extensible abstraction. For example, if I want to add a new feature like canary rollout to my system, which can be provided by Flagger, the only thing I need to do is install a trait definition that references the Canary CRD from Flagger as a capability. That's all you need to do. After you have installed the trait definition, you can define the rollout section directly in the Appfile. That's how you add a new capability, even one as complex as rollout, to your KubeVela system. So this is the KubeVela project.

So what's next? For the podtato-head application we really want to extend the same application to more complex use cases, including multiple services in one application, stateful workloads, databases, and the like, and also to add more tools and use cases from the ecosystem for operating the application, like chaos engineering and other kinds of application operations automation. That's the goal for this demo application. The next steps for application delivery are also exciting, because we will go in the direction of showing you code instead of white papers. We're trying to add more real-world demonstrations that answer the question of what each project can be used for, as well as a series of technical blogs deep-diving into the projects and how different companies are using them. And of course we are working on the CNCF radar for application delivery, which is quite similar to the CNCF radar we have today but focused on application management.

Okay, so if you're interested in any of these topics, in this talk or in this roadmap, feel free to join the meetings and join the community; we are looking forward to it. We hope you have a great journey. Thank you very much.
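Finally, as a reference for the KubeVela part of the talk, here is a rough sketch of what an Appfile along the lines described above could look like; the exact schema depends on the KubeVela version, so treat the trait names and fields as illustrative:

# vela.yaml: the application plus its operational traits in one place
name: podtato-head
services:
  hello-server:
    type: webservice
    image: docker.io/example/hello-server:0.1.1
    port: 8080
    route:                 # generates the ingress and certificate for the service
      domain: hello.example.com
    autoscale:             # generates the scaler object
      min: 1
      max: 4
      cpuPercent: 60

# render and deploy it
vela up -f vela.yaml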