Hi, everybody. Welcome to my talk about closing the developer experience gap of your container platforms. My name is Timo Sam, and I'm a pre-sales specialist for developer experience at VMware, with a main focus on our internal developer platforms running on Kubernetes and the commercial Spring products. Before we start talking about technologies, let's first have a look at why we need them. The traditional software development process was really like the illustration below: the software was handed over to operations teams that were responsible for deploying the application, supporting it, and also handling day-two operations. Long release cycles led to increased project risk and also increased costs. Because of that, modern software development moved away from a waterfall process towards an agile approach, and huge, complex applications were split into smaller, loosely coupled microservices implemented by smaller teams. Because operating such applications is a lot more complex than operating monolithic applications, there's a real need for collaboration between developers and operations teams, which is called a DevOps culture and is supported by automation and self-service capabilities. Regarding deployments, it's also important to have rapid application deployment and provisioning capabilities, so you're able to release your application fast and early. For that, we have, for example, technologies like CI/CD, containers, and Kubernetes. Because developers or application teams are now responsible for the full life cycle of their applications, they need a solid observability solution. In the end, that doesn't mean they have to be responsible for deploying the application themselves; that can still be the responsibility of the operators. For sure, there are approaches like "you build it, you run it," but at most organizations that doesn't work because of missing expertise.
If we talk about Kubernetes as a container orchestration solution, it not only provides the benefit of shorter software development cycles; it also provides additional benefits like improved resource utilization compared to virtual machines and, for example, reduced costs. The fact that Kubernetes is now the infrastructure abstraction standard comes from its availability on more or less any modern infrastructure, from on-premises to the public cloud. Because of that, Kubernetes is the de facto infrastructure abstraction standard. Kubernetes also provides a really huge ecosystem of tools, and you may have seen some of them already, here or in the CNCF landscape. But for software development, Kubernetes provides a too-low abstraction, especially if your organization is building applications at scale, with a lot of teams working on applications. You should consolidate that and provide a higher abstraction. Kubernetes, with the whole ecosystem around it and with capabilities you actually need to run applications at scale still missing, is more a tool to build a platform than a platform itself. If we have a look at the developer experience of Kubernetes developers in most organizations today, they are responsible for defining the container image. With a Dockerfile, as you can see here, for a Spring Boot application it's just about four lines of code. Then you have to package the container, push it to a registry, and then you're able to run your application via interactive commands or via YAML, so it's really easy to get your application running. But to run it in a secure way, and really in production, there's a lot more to do: you have to define the ingress, you have to define TLS, you have to define the scaling, and a lot more.
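As a rough sketch of that "easy to get running" step, a minimal Deployment might look like the following (the image name and port are hypothetical); note everything that is still missing for production: Ingress, TLS, autoscaling, resource limits, a security context.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: app
          image: registry.example.com/hello-world:1.0  # hypothetical image
          ports:
            - containerPort: 8080
```

This gets the pod running, but none of the production concerns mentioned above are covered yet.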
Most developers don't have enough expertise to do that in a really secure way. And especially, as already mentioned, if you're developing software at scale, a lot of time will be consumed defining this stuff. Every second developers spend working on those infrastructure specifics, they are not able to provide any business value, because the only business value they can provide comes from adding new functionality to the software or, for example, fixing bugs. Kelsey Hightower, one of the well-known figures of Kubernetes, said just in March, talking about the future of Kubernetes, that the experience should be that developers just provide their source code and define some minimal configuration, like, for example, which data services they need. The platform should then configure everything for them so that the application just runs. Container building and all this other stuff should be abstracted away from them, because those are infrastructure specifics. Coming back to those containers and Dockerfiles: would any operator in the past ever have thought of a developer defining the virtual machine image their application runs on, including which web server version it should use? I don't think that was the case, and it shouldn't be the case. We call such an experience an app-aware platform. The developers don't have to adapt their application for a specific infrastructure; instead, with the higher abstractions that frameworks like, for example, Spring Boot provide to developers, they shouldn't have to care about the infrastructure, and the platform is able to take the source code and package it so that it can run on its infrastructure. In most cases, or most Kubernetes platforms I see at our customers, it's more like on the left: the platform exposes a lot of infrastructure specifics, which shouldn't be the case.
Now we will have a look at several solutions that abstract away infrastructure specifics, so that you can start providing an app-aware platform experience now. On the left, you can see the already mentioned, really basic example of running a Spring Boot application as a container. If you search Google, you can get these four lines of code within seconds. But as I said, this is not secure in any way, and it's also not how a best-practice Spring application should be packaged in a container. On the right, you see an example of a slightly better way of creating that container. First, it starts with choosing the right base image: the right JDK, and also the right version. You can see Java 8 versus 17; 17 is a supported long-term release, while 8 is not supported anymore. In the best case, you would also pin the digest here, so you can be sure it's a specific container image and you know what's in the base image. Then you can see it's using a functionality Spring provides, the layered JAR, where the business code is in a different layer than the dependencies, because the business code changes more frequently than the dependencies. With that and the caching mechanisms of, for example, the registry and the Docker runtime, you're saving a lot of disk space, and it's also a lot faster to pull and push those images, not only on your local laptop but also on the Kubernetes nodes. You can also see we're using a multi-stage build here; with that, you reduce the attack surface, because you're not including all the build files and the JDK in the image, and it also makes the image a lot smaller. So the question is, how can we abstract that away? Because, as mentioned, it's not best practice that the developers define how the applications run.
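A rough sketch of the improved Dockerfile described here, assuming a Maven-based Spring Boot project; the eclipse-temurin tags stand in for whatever base image you would actually choose and pin by digest:

```dockerfile
# Build stage: full JDK, only used for compiling and extracting layers.
# In practice, pin these base images by digest (FROM image@sha256:...).
FROM eclipse-temurin:17-jdk AS build
WORKDIR /workspace
COPY . .
RUN ./mvnw -q package && \
    # Spring Boot's layertools jarmode splits the fat JAR into layers
    # (dependencies change rarely, application code changes often):
    java -Djarmode=layertools -jar target/*.jar extract

# Runtime stage: JRE only -> smaller image, smaller attack surface.
FROM eclipse-temurin:17-jre
WORKDIR /application
COPY --from=build /workspace/dependencies/ ./
COPY --from=build /workspace/spring-boot-loader/ ./
COPY --from=build /workspace/snapshot-dependencies/ ./
COPY --from=build /workspace/application/ ./
# Launcher class for Spring Boot 2.x/3.0; newer Boot versions moved it
# to org.springframework.boot.loader.launch.JarLauncher.
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
```

Because the dependency layers come before the application layer, a code-only change invalidates only the last, small layer on push and pull.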
Developers need flexibility regarding the languages and frameworks they can use. For that problem of defining the base image and putting together the container image, Heroku and Pivotal together created the Cloud Native Buildpacks standard. The idea of Cloud Native Buildpacks is that there are buildpacks for specific languages and capabilities of your container image, and they detect, based on the source code, whether they can contribute something to the container image. So, for example, a Maven buildpack can detect, "Oh, it's Maven, because of the pom.xml," and then it knows how to build the application by running mvn package, for example. There's also a JRE buildpack that then knows, "Oh, it's a Java application, so I can provide the runtime for it." With those different buildpacks, in the end the developer doesn't have to specify anything, maybe some configuration properties like "okay, I want to use this version of the JRE," but most of the stuff is abstracted away, and those buildpacks are responsible for building the container image. Then, after it's pushed to a registry, you can deploy your application as you know it. It's also the case that with Cloud Native Buildpacks you get a stack, a base image, coming with them. If you then, for example, update both of them, you can just recreate the container image with those updated base images. So that abstracts away building, or defining, the container image from developers. But we still have the problem that we have to run the build somewhere. There is a pack CLI available for that, and for doing it at scale you could, for sure, integrate it into your CI/CD. We have another solution called kpack, which abstracts away the building of your container images from source code with Cloud Native Buildpacks in a Kubernetes cluster. It runs in Kubernetes, and developers or the CI/CD can define the configuration for kpack.
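Running a single build with the pack CLI can be sketched like this (the image tag is hypothetical, and the builder is one of the builders the Paketo project publishes):

```shell
# Build a container image from the source in the current directory.
# No Dockerfile needed: the buildpacks detect the toolchain themselves.
pack build registry.example.com/hello-world:1.0 \
  --builder paketobuildpacks/builder-jammy-base \
  --path .
```

This is fine on a laptop or in a CI job; kpack, described next, does the same thing continuously inside the cluster.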
So: where's the source code, and maybe some additional configuration. kpack is aware of the buildpacks and the operating system or base images in the Kubernetes cluster, and it also has the capability, if a new base image becomes available, for example, to rebuild the affected container images, or all of the ones configured in the environment. The big benefit is not only that developers don't have to care about how the container image looks; it also abstracts away the base image updates, or puts them in the hands of the operators, in a fully automated way. kpack is currently not part of the CNCF, but we are working on contributing it as of right now, so I think in the next few weeks or months it will also be a CNCF project. Here's an illustration of that: if there are new buildpacks available, a new operating system or base image available, or your application source code changes, kpack is capable of rebuilding the affected container images, or all the configured container images, in your Kubernetes cluster. So let's have a look at how that looks. Instead of putting YAML together, because that usually takes some time, I'm using a CLI for it, which is also open source; it's called the kp CLI, like kpack. I'm running a kp image create command, and you can see the only things I provide are a name for it (because in the end it's a Kubernetes resource defined in YAML), the URL where my source code is stored (in this case a simple, really basic Spring Boot application), and a tag, so where the built image should automatically be pushed to. It then exists as a resource in the Kubernetes cluster, and as mentioned, if one of those aspects like the stack or the buildpack is updated, it automatically recreates the image. So it's not just a one-time thing; as I said, it's continuously watching for updates.
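The resource the kp command creates under the hood might look roughly like this (registry, repository URL, and builder name are placeholders):

```yaml
apiVersion: kpack.io/v1alpha2
kind: Image
metadata:
  name: hello-world
spec:
  tag: registry.example.com/hello-world        # hypothetical target registry
  serviceAccountName: default                  # holds the registry credentials
  builder:
    kind: ClusterBuilder
    name: default                              # stack + ordered buildpacks
  source:
    git:
      url: https://github.com/example/hello-world  # hypothetical repository
      revision: main
```

kpack watches the source revision, the builder's buildpacks, and the stack, and rebuilds the image whenever any of them changes.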
While the image builds, we can have a look at, for example, the image resource, or the YAML, that it created. I'm using a plugin here, eksporter, which removes all the status fields, to make it easier to read. You can see it just has the source code defined, as mentioned, and the tag. In addition, it has some configuration like the cache size and a builder. The builder is really the combination of the stack and the buildpacks, so you can have multiple builders with multiple types of base images and stacks. But the idea is that a builder includes all the different buildpacks for the technologies the developers want to use. I can also have a look at the builder that's running in my cluster. You can see, based on that information, that it has a detection order: the order in which the buildpacks will try to detect whether they are able to contribute something. You can see that in this environment we have a variety of buildpacks; these are commercial ones, but with the Paketo project, which we are also contributing to, there are open source versions available for all the different buildpacks you can see here. They really cover more or less every language, including a web server one where you can, for example, build and deploy your Angular applications or something like that. And they are really modular, which means, as you can see here, there are buildpacks for every aspect of those different languages and the capabilities you usually need in a container: from providing CA certificates to, for example, the Maven one I mentioned, or Datadog. They are really small buildpacks, each focusing on more or less one functionality. With that, instead of forking and rebasing like it was before with Cloud Foundry, you can just add your own small buildpack if you want to add some custom stuff for your organization.
If we scroll to the top, you can also see that there is a reference to the stack. In this case, the default stack, or base image, is Ubuntu Jammy, which we use here, and that really defines the base your application is built on. If we have a look at our image, hopefully it's built now. With kp image list, we can see it's built, and we can see the digest. Because in the end it's a pod running the build of the container, you could, for sure, also have a look via kubectl get pods, but for now we want to use the kp CLI. There's a build list command to see all the builds of the hello-world application; as of right now, there's only one. It also shows the reason: for example, if there's a base image update, you see the reason STACK, so you always know what the reason for rebuilding that container image was. With the build logs command, we can have a look at all the logs, where you can see all those different layers being added by the different buildpacks: for example, the JRE, some special Spring Boot stuff that's happening, CA certificates, et cetera. And if we scroll to the beginning, you can also see how, for example, the Maven buildpack just built the application. So it's really from source code to container image without any configuration. Okay, now we have a container image built for us. The next thing we want to do is run our application. For sure, we could just create a Deployment, a Service, and an Ingress, and then we'd have it running really easily. But again, we'd have to apply a lot of best practices, like defining the security context, et cetera. And that, as I said, is usually something developers don't have the expertise for. It's even really hard to do properly for the operators, who, in the best case, have more expertise on Kubernetes and containers than the developers. So we'll try to provide or use a higher abstraction on top of Kubernetes to run our application.
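To recap, the kp commands used in this walkthrough can be sketched as follows (the image name is the hypothetical hello-world from before):

```shell
kp image list                # all kpack images with their latest digest
kp build list hello-world    # builds of one image, including the build reason
kp build logs hello-world    # streamed build logs: buildpack layers, Maven build
```

The build reason column (for example CONFIG, STACK, or BUILDPACK) is what tells you whether a rebuild came from a code change or an automated base image update.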
What we want to use for that is a serverless runtime. So let's first define what serverless is. In the end, you can group it into two areas. The first is backend as a service, where server-side, self-managed components are replaced with off-the-shelf services. An example is a service you use for authentication, or as an authentication server, something like Okta or Auth0, where you don't have to care how it runs; you just use the functionality, so it's abstracted away from you, and your operators don't have to care about it. The second is function as a service: a new way of implementing software and deploying it as functions, really small pieces that are focused on a specific functionality. Usually you use them in an event-driven architecture, so they are triggered by an event and then autoscale based on the number of events. The key to both is that, in the end, by abstracting away the management of server hosts or processes, you can really focus on business value. Everything I'm talking about is about business value, because every second your organization doesn't have to care about infrastructure is a second it can spend adding business value. For a serverless runtime, I think most of you have heard of AWS Lambda, which is the serverless runtime at AWS. And with Knative, we have a serverless runtime running on any Kubernetes cluster, so it's not only available in the public cloud. Knative itself has several components. The first one is Serving; that's really the serverless runtime. It provides a high abstraction and, which is typical for serverless, autoscaling capabilities to scale from zero to thousands of containers within seconds.
It also has an Eventing part, which is really the enabler, more or less on the infrastructure side, for event-driven architecture, abstracting away brokers, triggers of events, et cetera. And last but not least, we have Functions. That's a new functionality, really providing a function experience like AWS Lambda, with lightweight container images. As I talked about Cloud Native Buildpacks: we are also currently working on providing that function experience for Knative on top of open source Cloud Native Buildpacks, because currently they are using Source-to-Image. So let's now have a look at how Knative works and how to deploy our application onto our cluster. For that, I'm also using a CLI, just to make the experience easier to see: kn is the CLI for Knative. Then kn service create hello-world, and you can see the only thing I define is the image. For sure, there are a lot more things you can define, like the min scale, the max scale, et cetera, but in our case it's just "here's the container image, deploy it for me." If I run that, you can see that within 11 seconds my application is deployed. You can also see it's HTTPS: TLS is enabled by default because it's configured in my Knative configuration for the whole cluster, so I actually have certificates and the traffic is secured. And you can apply a lot of additional best practices to this application. You can see I have the URL, so I can just curl it and get a response. Let's also, just for a second, have a look at the YAML file. The resource kind is Service, a Knative Service (often called KService), which is maybe not the best naming choice, but it is what it is. And what you can see is: here's the container image, and as you can see, it's a template; in the end, it's the pod spec.
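The kn command above boils down to a very small resource. A sketch, with a hypothetical image name:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-world
spec:
  template:        # revision template; in the end, a pod spec
    spec:
      containers:
        - image: registry.example.com/hello-world:1.0  # hypothetical image
```

From these few lines, Knative derives the Deployment, the autoscaling, the routing, and the URL.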
But additionally, you get all the capabilities of Knative, like the autoscaling. You can also see that you can split the traffic, so you can also fall back to previous revisions and do blue-green and canary deployments with Knative. In this case, because I only have one revision, 100% of the traffic is going to that one revision of the application. If we want to see all the resources it created for us, because as I said it's a higher abstraction, we can find out with the tree plugin, and you can see it's a lot. There's the Knative Service that was created, which always defines a configuration for the different revisions. So if I changed something in my application, or in the defined container, it would create a new revision. You can see the Deployment it creates. You can see a pod autoscaler it defines, which you configure in Knative. You also see a Route configuration, which really defines how the ingress is configured, so the traffic from the outside, and the Service. All of that was generated from just one piece of YAML. The next thing we want to do is have a look at the autoscaling, because that's one of the key features of a serverless runtime. As you can see, there's no application pod running, because it's scaled to zero. I'll now go to a virtual machine here, because my laptop is not able to handle too many requests, and I'll just run this hey command for a second, not too long, because it's sending thousands of requests, and then I'll cancel it. You can see how my serverless runtime autoscales here, which is really amazing, because with that we are able to handle a lot of demand. For sure, if you're running on Kubernetes, you do have to provision your Kubernetes clusters for that up front.
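Those autoscaling and traffic-splitting capabilities are configured on the same Knative Service resource. A sketch, with hypothetical revision names:

```yaml
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # allow scale to zero
        autoscaling.knative.dev/max-scale: "10"
  traffic:
    - revisionName: hello-world-00001
      percent: 90
    - revisionName: hello-world-00002
      percent: 10                                # canary: 10% to the new revision
```

Shifting the percentages over time gives you a canary rollout; flipping them all at once gives you blue-green.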
But if you have a lot of applications running, usually they are not all scaling up at the same time. So that's it for our serverless runtime. The next thing I want to talk about is CI/CD, because now you need a way to bring your source code to a container image, build that container image, and deploy it. For that, we have continuous integration and continuous deployment. All the common CI/CD tools you're aware of share the same challenges. One of them is that they are synchronous and there's an orchestrator, which is a problem because if there's an issue with the orchestrator, none of the paths to production work. In addition to that, each application usually has a different path to production, even if you're using a template for it. And that's a problem, because if you update something in a template, for example add a new tool like a CVE scanner, how do you inform all your developers that they should add this CVE scanner? Usually there are no mechanisms to update the CI/CD instances based on the template. There's usually also no separation of concerns; at organizations, maybe they split up the CI/CD, but who's responsible for what is always a huge problem, and the developer experience is lacking. We created a new tool for that called Cartographer, which solves several of those problems and adds new concepts. One of them is that it's asynchronous: like an event-driven microservice application, everything is handled by events. The big benefit of that asynchronous behavior is that if you have something like kpack, with automated container image updates and the capability to automatically recreate a container image when a new base image is available in the Kubernetes cluster, it can just send an event to Cartographer, which forwards it to all the downstream steps that are interested, runs it through the path to production, and deploys it.
So that's really about that asynchronous functionality. Other examples are CVE scanners that continuously watch the CVE database for new updates. We also provide separation of concerns via a defined interface. We call it a workload, which really captures the story from before: the developer says, here's my source code, here are the data services I automatically want to bind to. It will be applied to the cluster, and every one of those steps will be handled for the developers. So the CI/CD, the path to production, is in the hands of the operators, and developers don't have to care about CI/CD anymore. For sure, they maybe define the tests, but that's it. It works with existing tools in the Kubernetes cluster, and for integrating tools outside of the Kubernetes cluster we are leveraging Tekton, a traditional CI/CD system (even if it's cloud-native, it has the challenges I mentioned before), to use the Tekton ecosystem and integrate external systems. Because of time, I will just show you for a second how the developer experience looks. For that, let's have a look at the workload; this is what the developer would define. We also have the concept of running every application of a specific type through the same path to production. So any web application, whether it's an Angular application or a Spring Boot application, runs through the same path to production, ensuring that they are all secured and run in the same way. The developer defines: okay, my application is of type web, and here is my Git repository. They could also, for example, define service bindings via the Service Binding standard we created together with Red Hat. And that's it.
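A developer-facing workload of type web might be sketched like this (the label key follows what our commercial platform uses, and the repository URL is hypothetical):

```yaml
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  name: hello-world
  labels:
    apps.tanzu.vmware.com/workload-type: web   # selects the "web" path to production
spec:
  source:
    git:
      url: https://github.com/example/hello-world  # hypothetical repository
      ref:
        branch: main
```

Everything downstream of this, from fetching the source to building and deploying, is defined by the operators in the supply chain, not by the developer.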
So if I apply it to my cluster (you could, for example, abstract that away via GitOps or provide the developer experience via a UI) and then run the tree command with my workload to see which resources are created, I can see that it already started based on the workload. It's using a GitRepository resource, a Flux CD source controller custom resource, which continuously fetches source code from the Git repository. I have my kpack Image, which I showed you before, and after the image is built, it will, in this easy example, be deployed to Knative. So I have the full path to production set up for one application instance without the developer defining anything about that path to production. The last thing developers may need is a UI. For that, there's Backstage, a tool open-sourced by Spotify and part of the CNCF. It provides some basic plugins, for example for documentation purposes, and an interactive way of defining and discovering APIs for developers, so that they have everything available, for example, to consume internal APIs and get onboarded. In addition to that, it has a really huge plugin ecosystem, so you can provide integration points to all the different services you are using, from observability to CI/CD, in one portal, so that developers have only that one place to go for everything. What's important to know is that Backstage was used by Spotify for its so-called golden paths, and the idea was to have all the documentation for everything in one place. That's also why the ecosystem is currently working on providing proper role-based access control, because it was not meant to be used for automation the way most companies are currently using it. And that's it for today. In those 30 minutes, I was only able to show you some of the tools that are available.
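For Backstage, each service is registered in the catalog with a small file checked into its repository; a sketch with hypothetical names:

```yaml
# catalog-info.yaml in the service's repository
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: hello-world
  description: Sample Spring Boot service    # hypothetical description
  annotations:
    backstage.io/techdocs-ref: dir:.         # documentation lives alongside the code
spec:
  type: service
  lifecycle: production
  owner: team-a                              # hypothetical owning team
```

From this one file, the portal knows the component, its owner, and where to find its documentation.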
There are a lot more that really provide a nice developer experience on top of Kubernetes: for example, Grype for CVE scanning (there are also Trivy or Snyk); Contour, an ingress controller of ours; Kaniko, if there's still a need to build some applications with Dockerfiles; Crossplane, an amazing solution which, I think, was also presented in parallel, to provision, for example, data services from your Kubernetes cluster that live outside of your Kubernetes cluster; Flux CD, which I showed you a little bit of, for GitOps; and the Carvel tools, our tools for the management of Kubernetes resources. And that's it for today. Thank you very much for joining my session.