Hello and welcome everyone to the KubeCon Europe session of SIG App Delivery. This session will be co-presented by Lei Zhang, or Harry, and myself. So let's get started. When people look at the CNCF landscape, they easily get overwhelmed by what they see. Some people even made a joke and created a 1000-piece puzzle out of it. There is a large number of tools in the landscape. Some seem to be doing the same thing, and it can be hard to figure out what the individual tools are doing and how they help. Especially when you want to get started with something like app delivery, you might not know where to start, because there are just so many tools to start from. On the other hand, there is a great variety of tools, so for your specific purpose there might be just the right tool out there, and you can also try different tools and see which one fits your needs best. And there is constantly growing functionality: new tools are added to the landscape all the time, which might help you with your very specific use case. Still, we understand that it's hard to keep track of all of this. That's why, for the last KubeCon, KubeCon North America, we started to build a little project called Podtato Head, which you can find at the URL on this slide. It gives you an idea of what cloud native app delivery looks like in the CNCF landscape. It provides a very simple, small application, but its uniqueness is that you can apply app delivery concepts to it with a variety of tools from the CNCF landscape. You can obviously start with plain manifests, or you can use Helm, Flux v2, Argo CD and Argo Rollouts, Keptn, or KubeVela. Recently new tools were added, like Gimlet and Litmus Chaos, and there is also support for very specific deployment approaches like CNAB using Porter. So you have this simple project where you can try a variety of tools easily, and the project is also constantly growing.
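To give a feel for the starting point, the plain-manifest path boils down to a perfectly ordinary Deployment. This sketch is illustrative only; the image reference and port are assumptions, and the Podtato Head repository contains the actual manifests for each tool.

```yaml
# Illustrative sketch of the plain-manifest starting point in the
# style of the Podtato Head examples. The image reference and port
# are assumed; see the repository for the real manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podtato-head
spec:
  replicas: 1
  selector:
    matchLabels:
      app: podtato-head
  template:
    metadata:
      labels:
        app: podtato-head
    spec:
      containers:
        - name: server
          image: ghcr.io/podtato-head/podtato-server:latest  # assumed image
          ports:
            - containerPort: 8080  # assumed port
```

Every other tool in the project then layers its own delivery concepts on top of essentially this same small application.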
If you're interested to learn more about Podtato Head and how it works, I highly recommend you either go to the Podtato Head GitHub repository or watch the talk from KubeCon North America where we walk through those examples. Now, for this KubeCon, we took it a step further. We talked to the individual projects about the trends and developments they are seeing, and put that into this session to give you an update on how the landscape for app delivery is really evolving. Here we also want to thank those projects who shared trends and stats on how they are evolving and what they are seeing in the market, which we condensed into this presentation. It's a very straightforward way to get an insight into what's going on in that space. So let's look at the trends now. We categorize them into three areas. The first one is application definition: how do we define applications in Kubernetes? What we can clearly see is that templating is becoming more and more prominent and front and center, mainly for reuse and simplification. As more people start to use Kubernetes and Kubernetes-related tools, they quickly get overwhelmed with all the configuration they need to write, especially for production-ready workloads. That's why teams start to build templates that contain the major parts needed to set up a secure and well-functioning workload in Kubernetes, so you as a developer only need to bring your development-specific artifacts, like your containers and so forth. Also, Kubernetes is starting to be used as a management plane for more than just Kubernetes resources. There are tools like Crossplane that allow you to manage workloads that are not Kubernetes-specific and also to configure cloud resources, so Kubernetes becomes a management plane beyond just containers. Application definition is also moving beyond infrastructure.
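To make the Crossplane point concrete, here is a hedged sketch of what declaring a cloud resource as a plain Kubernetes object can look like. The kind, API group, and field names follow the style of Crossplane's AWS provider but should be treated as illustrative; the provider documentation has the authoritative schema.

```yaml
# Illustrative sketch: a cloud database declared as a Kubernetes
# object, in the style of Crossplane's AWS provider. Kind, group,
# and fields are assumptions; consult the provider docs.
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: app-db
spec:
  forProvider:
    region: eu-west-1            # placeholder region
    dbInstanceClass: db.t3.small # placeholder instance size
    engine: postgres
    allocatedStorage: 20
    masterUsername: admin
  writeConnectionSecretToRef:
    name: app-db-conn            # connection details land in this Secret
    namespace: default
```

The point is that the same reconciliation loop that manages Deployments now also manages the database, which is exactly what "Kubernetes as a management plane beyond containers" means in practice.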
As we move more towards a developer experience, we also need different constructs. We want to model an application, an API, a service, and so forth, and there are numerous projects that do exactly this for you and help you do it in a better way. On the app delivery side, we see that GitOps adoption is constantly growing and the use cases are getting more mature. It's no longer just about having a simple Git repo synced into your Kubernetes cluster and the proper files being deployed. Use cases with multi-stage delivery and other scenarios are becoming much more front and center, allowing you to handle far more complex, enterprise-scale delivery scenarios. Progressive delivery is also becoming more front and center: people started with simple deployments on Kubernetes, then moved to blue-green deployments, and are now using more and more data to decide whether a deployment should stay in production and how traffic should be shifted to that workload. Operational concerns are also taking center stage. As more people run production workloads on Kubernetes, we're obviously seeing operational concerns that need to be handled, which go beyond what Kubernetes does as a platform and also beyond deployment, simply the traditional operational things you have to take care of. On the tooling side, Kubernetes is really becoming more of a platform than just a way to run Kubernetes-based workloads. It's really becoming this platform to build platforms, introducing higher-level abstractions by adding additional tools on top. Also, chaos engineering is becoming more front and center and is being integrated more tightly into Kubernetes application delivery processes. Last but not least, service catalogs help to really scale the adoption of Kubernetes: as we move more into the templating space, you need to store those templates somewhere, easily fetch them, and apply them to your project.
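The "simple Git repo synced into your cluster" baseline mentioned above can be pictured with a pair of objects in the style of Flux v2. Repository URL, path, and intervals are made up for illustration; the Flux documentation covers the real options.

```yaml
# Illustrative: the basic GitOps loop of syncing a Git repo into a
# cluster, in the style of Flux v2. URL, path, and intervals are
# placeholders.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: app-repo
  namespace: flux-system
spec:
  interval: 1m                                 # how often to poll Git
  url: https://github.com/example/app-config   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 5m
  path: ./deploy/staging   # stage-specific directory
  prune: true              # remove resources deleted from Git
  sourceRef:
    kind: GitRepository
    name: app-repo
```

The multi-stage scenarios discussed in this talk essentially grow out of this loop: multiple clusters, multiple paths or repos, and explicit promotion between them.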
Now I'm passing over to Harry, who will walk you through what is being done around application definition and the developer-centric experience, which projects are emerging there, and which trends we're seeing in this space. Thank you. Hello everyone, it's really great for me to continue the discussion about the trends of application delivery and management, in 2021 of course. I will start with application definition, which is very close to the developer and operator experience. When talking about this, I think we already know that Kubernetes is a great platform if we want to deliver or manage applications on top of it, but we also know that Kubernetes is not that easy to learn. It's actually kind of complicated, especially considering that it exposes a lot of infrastructure details, like networking, security, and storage, to its users. But Kubernetes is not the end game; that's why a lot of people are trying to build higher-level abstractions on top.
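One concrete flavor of such a higher-level abstraction, sketched in the style of a KubeVela/OAM Application. The component type, trait type, and exposed fields are illustrative assumptions; the idea is that the end user sees only the handful of parameters the platform team chose to expose.

```yaml
# Illustrative: the end-user view of a templated "web service"
# abstraction, in the style of a KubeVela/OAM Application. Only the
# exposed parameters are visible; the template behind the component
# type fills in everything else (labels, probes, security, ...).
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-app
spec:
  components:
    - name: frontend
      type: webservice          # template maintained by the platform team
      properties:
        image: nginx:1.21       # image and tag
        port: 80                # exposed port
      traits:
        - type: scaler          # operational behavior attached as a trait
          properties:
            replicas: 3         # replica count
```

Everything else (the dozens of lines of Deployment, Service, and policy boilerplate) lives in the template, which the platform team can evolve independently.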
Speaking of building abstractions, templating is actually one of the most widely used technologies, because it allows you to define a bunch of best practices as templates and then expose only the needed parameters to your end users. For example, you can define a component named "web service" which exposes the image field, the image tag, the replica number, and the exposed ports to your end user. Your end user can then just fill in those few parameters to define a web service which gets deployed to Kubernetes and exposed to the external world. This is a very simplified experience, but what's more important is that the way you use templates is also the way you let end users customize this abstraction. For example, if one day your end users say, "OK, I want to be able to define a security policy for this web service component," you can say, "Yes, go ahead, I can give you a new version of the template and you can do that." So you can see this is actually a very interesting way to build platforms on top of Kubernetes, because you never want to introduce a restrictive abstraction to end users, like the traditional platform services; many application engines are too opinionated to serve all the use cases of your developers, but with the power of templating you can avoid that. There are actually a lot of very interesting projects doing this right now. For example, KubeVela allows you to declare your best practices as templates by using the CUE language, which is a DSL, and of course Helm charts, so your end users have a highly extensible way to define application abstractions based on their own needs. Another interesting project is Gimlet. It has a similar experience based on Helm charts, allowing you to define abstractions with a Helm chart and then deliver applications in a GitOps manner, so it is more like a complementary tool to an existing GitOps workflow.

In order for you to define templates which can be assembled into an application deployment, which in turn gives you or your end users full extensibility, there are several other projects. For example, the Open Application Model gives you a specification, or I would say a principle, to follow to categorize your templates. You can say this bunch of templates is for workloads, for example a web service, a backend worker, a Knative Serving workload, a FaaS workload, or even a virtual machine workload; they are all templates that describe how you run the workload. And there are other categories of templates, for example traits, which are all operational behaviors for your workload: how to do a blue-green deployment, how to split traffic, how to declare ingress rules based on your use cases. This is what the Open Application Model brings to you. In that case there will never be abstraction lock-in in your system, as long as you have the model to declare any category of templates and allow your end users to use them. Another project I want to mention regarding application deployment is of course Crossplane, because it allows you to define, or declare, the cloud resources an application needs in Kubernetes. Just think about the application I mentioned previously: the application is actually composed of multiple components, including the web service I already mentioned, but it can also have a component which is a cloud database, maybe provided by AWS. In that case, how can I define all of these things together in a single source-of-truth YAML file and allow people to do that with Kubernetes manifests? This is where Crossplane comes in. And the more interesting part is that it gives you a way to implement cloud providers, so you don't need to implement everything yourself if you want to declare cloud services from some other cloud provider; you just write a very simple provider based on the Crossplane runtime and you're all set.

Then you can use the Open Application Model, or KubeVela, to define the application which is composed of your web service workload plus the cloud database provided by Crossplane. This is the most convenient experience if you want to bring a developer-focused experience to your users and make everything work smoothly with high extensibility. Speaking of the developer experience, I think the most widely pursued approach in the community is of course application-centric. Kubernetes itself is complex not because it's wrong, just because it cares about too many things. Developers, however, only care about the application; they only want to deploy the application, that's all. That's why projects like the Open Application Model and KubeVela try to bring the application context back to Kubernetes, so as a platform builder I can actually build an application-centric platform by leveraging KubeVela very easily. You can think of it this way: KubeVela leverages the templating mechanism to let you pre-define a bunch of Kubernetes manifest templates with CUE and Helm, installs all of the pre-defined templates in your system, and your end users assemble them into an application. The reason they can assemble them into an application deployment is because the Open Application Model is there: it provides your end users with a bunch of higher-level abstractions, including, for example, components. What are the components that compose the application? You may say it includes the web service, a backend worker, and a cloud database provided by Crossplane of course. The second concept from the model is traits, which are operational behaviors. In that case an application is composed of components plus traits, a.k.a. the operational behaviors; for example, you can declare that your web service will be rolled out with a blue-green deployment strategy, which is a trait. That is how you allow your end users to define an application deployment by fully leveraging developer-focused primitives, which are enforced by templates designed by you.

The last concept in this system is the environment, because your end users can also declare that their application should be applied to a staging cluster or a production cluster; that is what the environment abstraction is trying to solve. So you can see that this project, which is pretty new but is getting widely adopted, is trying to provide your end users with a bunch of abstractions, but these abstractions are enforced by templates, and in that case your end users have a simplified way to declare their application deployment in a single YAML file named Application. That's why we say it's a totally application-centric experience. OK, so I have talked about application definition and how all of these technologies simplify the experience for your developers and operators. But there is also another approach which needs to cooperate with these application abstractions to make the deployment of your application much easier, and that is GitOps. It allows your developers to declare the deployment of their application by following a Git-based process which is quite familiar to them. So I will pass over to Alois, who will continue to explain how GitOps works, what benefits it brings, and how developers can leverage this workflow to deliver applications at scale. OK, Alois, here we go. Welcome back, and thank you Harry for this deep dive. Now let's look into GitOps-related trends. GitOps is becoming more and more front and center when we talk about deploying cloud native workloads. This also led to the establishment of the GitOps working group within SIG App Delivery. The GitOps working group is working on a common
definition, some standardization around concepts, training material, and so forth, and there are lots of people already actively working there. If you're interested in GitOps, or are actively using it, I highly recommend you engage with the GitOps working group or look at what they're working on. This is also massively driven by the fact that we see more complex GitOps scenarios emerging. We see GitOps being used in multi-cluster scenarios: people usually separate their pre-production and production environments by running them in different clusters, and GitOps tool chains obviously need to support this. We also need to model stage-to-stage promotion, which is not just copying artifacts from one stage to another; you need to capture stage specifics, and you need ways to model exactly this and how you propagate changes across your Git repositories as well. What we also see emerging, although it doesn't exist that much today, is a push against proprietary definitions. Different tools obviously use their own definitions of how they deploy workloads, and when we talk about canary and blue-green releases, GitOps and progressive delivery tools are conceptually doing this in a very similar way, yet each uses its individual definitions, which makes it a bit harder to get started. So we hope that as the space matures, more standards and more agreement across different tools will emerge. Which leads us directly to the next topic: progressive delivery. Progressive delivery adoption is also growing. We see people using more sophisticated approaches as they move more and more workloads to Kubernetes, and there's this trend of not just using blue-green deployments but really using progressive delivery for rolling out workloads. One key driver here is that workloads themselves are becoming more suitable for a progressive delivery scenario: two versions can really run productively alongside each other, and the application services are built in a way that they can deal with this rather well. Testing and quality gating is also becoming more tightly integrated into this process. When a new workload gets deployed in a stage, tests are automatically triggered, which might be load tests or security tests, and then SLOs are very often used as a quality gate to validate whether this workload should stay in the current stage or be promoted to the next one. So here we see the concept of SLOs, service level objectives, not just being used in production but already being used during deployment to validate whether all the key criteria of a healthy application are satisfied. This moves us definitely beyond just checking a health endpoint, towards looking at the service much more holistically. There are more complex scenarios, as already mentioned: support for multiple environments. It becomes much easier to configure different environments and move workloads from one environment to another in progressive delivery pipelines, the individual environments become easier to manage, and there is a more agreed-upon approach for modeling workloads for specific environments in a way that other people and other tools can understand as well. As we start to integrate more tools, it becomes more and more obvious that adding a new tool usually means working with a proprietary API. That's why the CDF, the Continuous Delivery Foundation, started a SIG on events. What the events SIG is doing is standardizing application delivery events across the entire application life cycle, whether it's testing, deployment, validation, or new artifacts becoming available; these are events that will eventually be understood by a wide variety of tools out there, which allows you to easily plug a tool into an existing pipeline, or makes it much easier to exchange tools for different scenarios. And as we move more to a progressive delivery approach, where we use more sophisticated ways of rolling out applications, not just from a traffic-switching perspective but also deciding which user traffic we send to which version, we see a much tighter integration between what progressive delivery does and how a service mesh is configured. There will be much closer collaboration between the two, and they will share many more of those definitions going forward. Now moving to the next topic: operational concerns. So far we talked a lot about day-one operations, but obviously, as we run more workloads in production, day-two operations become much more of a concern as well. Interestingly, here again we see SLOs being used, this time for the validation of remediation actions. Instead of just executing actions to get applications back to a healthy state, we use SLOs to really validate whether the action had the desired effect and did not have additional side effects that we might not be checking for. This allows us to build massively improved remediation by validating against SLOs. As you can see, SLOs really take a front and center stage in how we talk about the health of applications. As we talk more about operational concerns, we also have to talk about runbooks, and runbooks are also starting to get automated. We want to automate this entire process for our workloads, so runbooks start to get modeled in code instead of just being verbally described, which means they're shipped along with the artifact: when a certain service is available, it comes with its runbook, which tools can then directly use to mitigate certain situations. These actions also move beyond just being scale-up or rollback actions. One prominent example is automatically using feature flags that developers have added to mitigate certain
scenarios. So there is a feature flag within the application for what to do when load gets too high or when certain other situations arise; these are then described in machine-readable runbooks and integrated into this process. Actions are also no longer single-step actions. There seems to be a growing trend of people modeling more complex operational workflows, just the way they used to do manually: you pick an action, you validate whether it actually worked, and if it did not, you look up the next action to take and automate this process, obviously eventually escalating to a human operator if none of the automation really helps. There are also other operational concerns to take care of, which tie into the progressive delivery and GitOps scenarios: we have to manage how to run an application separately from the core application definition. The KubeVela project, using OAM, takes this approach by modeling these concerns as traits. Traits more or less define how an application, a workload, a service should be run in a certain environment and how it should be properly configured. Having this modeled explicitly, and not as part of the core manifests, on the one hand increases flexibility, but it also creates a separation of concerns between defining what a workload is and how a certain environment wants this workload to be run from an operational perspective. Chaos engineering is also one of those increasing trends, as already mentioned. Just to have some numbers here: we talked to the Litmus Chaos project, which is also a CNCF project, and they are seeing a four-times increase in the number of experiments being run. There's also a trend of moving left into pre-production. Instead of running chaos experiments solely in production, they are tightly integrated into continuous delivery pipelines and GitOps scenarios, where chaos experiments are run against workloads when they get deployed in a progressive delivery fashion, to see how those workloads behave in those scenarios and whether they are still able to cope with them. Most experiments today are still run at a pod level. That might be a bit surprising, because you would assume that Kubernetes is already taking care of most pod-level chaos, like killing a pod and things like that. Still, it is very widely used, mostly to identify configuration issues where workloads are not properly configured or no high-availability configuration is in place, so this can easily be surfaced early on; these chaos experiments help to validate exactly these scenarios. Hypothesis checking is front and center in a chaos experiment: you always have a hypothesis you want to check against, and here SLO modeling also becomes central. We really start to see SLOs being established as the central language spoken across the application delivery life cycle, and these observability metrics become a key component of chaos engineering. Beyond the pod-level experiments, there is an increase in network and service-availability experiments. Again, we are talking about distributed applications here, and testing whether all services of a distributed application are working properly is obviously key, so integrating this into chaos testing makes sense. And last but definitely not least, there's an emerging trend of security chaos: chaos experiments that focus specifically on security-related issues. As we start to talk about DevSecOps and integrating security closely into the life cycle, it obviously also makes sense to run security-related experiments as we release our work. So, as we start to have more high-level and prepackaged components available, and we move from pure infrastructure definition to high-level service definition, we need to store those definitions, and we need to make them available for people to easily consume. That's why service catalogs are
becoming more front and center. Obviously, in a corporate scenario you don't want anybody running kubectl apply -f on some file they found on the internet about how to run Cassandra or something like that. Also, most services, even if they have an operator available, don't come ready to be run in production. They still require operational configuration, configuration specific to your environment, for how you want to run and operate those workloads, and this needs to be applied; not everybody immediately knows how to do this, or it might not be their core job. So there are usually configurations added on top, even on these prepackaged components provided by operators, for example. By the way, this makes life easier for both sides: for developers, because they get a service that actually works the way they want it to work, and for operations, because they still have control and can define the environment the way they want. This also helps change the mindset from an infrastructure-related mindset to a service-related mindset. Users start to really consume higher-level services as a service: as a developer, I want to get a SQL database; I don't necessarily know how to configure it for all the environments it's going to be deployed to, nor do I need to know all of these details. I just go to a service catalog and get the database I want as a service that I can then use in my application. So that's it from the SIG App Delivery trends and what we're starting to see in projects. I hope there was something in there for all of you, some interesting trends or some validation of things you're already doing or planning to do. Also feel free to engage with SIG App Delivery directly if you have questions, or if you have real-world examples you'd like to share. You can find us on our GitHub page, or on the CNCF Slack. There are also meetings every two weeks, every first and third Wednesday at 8 a.m. Pacific time. So feel free to reach out to us, ask any questions, or engage with the wider app delivery community within the CNCF. That's it for today, and thanks for your time.