Okay, hello everyone, and hello to Los Angeles. Welcome to our first ever KubeCon talk. Unfortunately, we cannot be with you today, so we are sending you some cold greetings from autumn in Germany and hope the weather is better in California at the moment. Let me just share my screen real quick to dive into that. So today we're going to talk about Kubernetes, what a surprise here at KubeCon. As part of this new business track, we're going to talk about our experiences adopting Kubernetes for clients, about the hidden costs and the things we encountered during our various Kubernetes journeys, and we called it "How free is Kubernetes, really?" A few words about us: we both work at a company called Novatec in Germany. My colleague Thorsten is a cloud technology consultant at Novatec; we're basically both consultants in the cloud space. Thorsten has a bit more focus on the security side. I also give lectures at university when I'm not working at the company, and on the community side I'm engaged in the Cloud Foundry community as an ambassador and a meetup organizer. So much for that. Before we dive into the topic, we want to give you a bit of an idea of our background. What we normally do for a living is help clients move to the cloud, as a very generic term. Very often we support migrations from legacy IT implementations to a modernized level: replatforming and making things cloud-enabled and cloud-native. That comes in a few flavors: strategic consulting, hands-on implementation and migration, and also education and skill enablement. As for how we got to Kubernetes, we didn't go there directly, so to say. As you can see on the very left, the traditional things we did were mostly Java- and Java-EE-based application development, mostly in on-premise environments and data centers.
And our kind of gateway drug to the cloud was actually Cloud Foundry. We don't have as many Cloud Foundry engagements now as we do Kubernetes ones, but those were the first steps we took in bringing workloads into a more cloudy environment. Nowadays, we do a lot of Kubernetes in both on- and off-premise surroundings. So today we want to share a bit of our experience and insights: things we discovered along the way, things we sometimes did not expect, and maybe we can give you a bit of advice on how to move forward based on our learnings. So yeah, I pretty much said that already, the why we're here today. Of course, we will not only talk about technology and the costs; we will also try to give a bit of a business perspective. Thorsten will do a deep dive into some of our projects to show that we're not making this all up. It has really happened, it's still happening, and here is what we took out of it. One little disclaimer: this presentation will not provide you a golden template or a silver bullet for doing Kubernetes right. We haven't found that yet, so in case you have, feel free to let us know; we're definitely interested in that. But we definitely had a few learnings on the way, and hopefully we can enable you to anticipate issues or plan better with the learnings we share with you. Now, to start, before we go into the projects, a word on what our first impression was when we started with Kubernetes, especially looking from the perspective of a PaaS layer. We're basically coming from the development and operations side: we build applications, we deploy and run applications. Looking at things from a PaaS perspective, we were pretty used to paying on a per-use model, basically depending on the application or microservice, or let's say, per running container.
And the container runtime was mostly one big Cloud Foundry instance with multiple tenants. One of the things that significantly changed when we got into the container-as-a-service layer, so to say, was that this was not the case anymore. The payment incurred from the runtime of the underlying VMs, which means we had to worry and plan a lot more about utilization ourselves. And it also wasn't one big Kubernetes runtime to handle them all: we very often see the situation that we are managing multiple different cluster environments. So despite multi-tenancy options with namespaces, which we of course use, it's much more often the case that we have different clusters for different stages to manage. And this is just one single aspect; there will be more. Some of the cost factors we're going to talk about are things like provider choice, the estimation and sizing I mentioned, technology evaluation and adoption, then the tooling around Kubernetes for security, APM, logging and tracing. And one thing we encountered very often is skill enablement. So this is just a bit of a teaser, and with that, I will hand over to Thorsten.

Let's talk about our journey along cloud projects. We had a lot of projects and brought you some of the most recognizable, or the most important ones for us. On the next slide, we put some kind of timeframe. Next slide, please. We started back in 2016 with a cloud migration of a connected vehicle backend. That took a long time and was followed up by a cloud migration for a dealer platform with roughly the same technology stack. As of now, we are building a zero trust cloud platform, which we will elaborate on a bit afterwards. We also did several cloud assessments for several projects with several technology stacks.
But what you can also see is that since 2017, we have been providing trainings for our customers and also for our colleagues, with the things we learned along the projects and the things we did not learn but tried to acquire the knowledge for. Before we dive into those projects, we want to say a big thank you to our colleagues who were involved in these projects: Adrian, Ruben and Corvin. All of them provided their insights and their perceptions of the cloud adoption journey within their projects. Thank you from us to you. Let's start with tales from a connected vehicle backend migration. Next slide, please. When we started into the project, the assessment question was quite simple: is there any chance of cloud for this old-fashioned, traditional technology stack? A Java EE 6 stack on a traditional WebSphere Application Server, three terabytes of data in a DB2 database backend, service-oriented architecture, quarterly releases. That's probably a tech stack which you also have in your organization or recognize from some other projects. Our customer asked: is there any chance for that stack, for this application landscape, to go into the cloud? They wanted shorter release cycles. They wanted to put not so much money into operations, a CAPEX-to-OPEX migration. That's what they wanted to do. We were tasked to find a migration path considering refactoring versus rewrite. Is there any chance for another release cycle in the business unit? Can we have service contracts instead of the old-fashioned communication patterns? On the next slide, we will see which technologies we found. We knew there was a migration path, and we knew we needed to change the runtime, because one thing we wanted to use is one of the five essential characteristics of the cloud: rapid scaling. Rapid scaling is not possible with the traditional WebSphere Application Server, so we needed to migrate to WebSphere Liberty in this project. It's much the same for other technology stacks.
We needed some kind of cloud-compatible application runtime. We needed to design a new container, based on the official Liberty image. We needed to integrate old IAM systems. There are many things all around the application which have nothing to do with cloud, but we needed to integrate them. Once we had those migration paths, we found that there are many moving parts, many moving parts that need to be put together in order to enable the team for DevOps. Building a cloud platform all around Kubernetes means having, for example, secure pipelines designed prior to deploying anything on Kubernetes. We needed to create Helm charts for all those modules, all those microservices which were coming up, and it also means we had some unplanned challenges. They appeared, it felt like, totally randomly. We needed to update to Helm version three. That took us, well, several weeks. It was just not possible to do container updates and Helm updates within a few days; we needed weeks, and in those weeks we could not put much effort into feature development or anything else. We had to handle memory leaks, things we never had to deal with before, because customer operations had handled those things when they observed them. Speaking of observability: on the next slide, we put the parts together in order to become ready for a 24/7 on-call DevOps team. We created alertings, and what we also created with that were immense logging costs. Our application landscape, well, just that particular one, has many, many audit logs, and those audit logs created costs of half a million euros per month. We created liveness and readiness probes for applications which were never designed for having liveness and readiness probes, because otherwise we could not handle outages, optimize against outages, and understand how our applications handle memory and CPU consumption. All in all, this went very well. It took several years.
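Retrofitting probes onto an application that was never designed for them might look roughly like this. This is a minimal sketch, not the project's actual manifest: the image, the `/health` endpoint and port 9080 are assumptions for illustration.

```yaml
# Illustrative Deployment snippet: liveness and readiness probes
# retrofitted onto a legacy container. The /health endpoint and
# port 9080 are assumptions, not taken from the actual project.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: legacy-backend
  template:
    metadata:
      labels:
        app: legacy-backend
    spec:
      containers:
      - name: app
        image: websphere-liberty        # hypothetical image name
        ports:
        - containerPort: 9080
        livenessProbe:                  # restart the container if it hangs
          httpGet:
            path: /health
            port: 9080
          initialDelaySeconds: 60       # legacy apps often start slowly
          periodSeconds: 10
        readinessProbe:                 # only route traffic once the app is up
          httpGet:
            path: /health
            port: 9080
          initialDelaySeconds: 30
          periodSeconds: 5
```

The generous `initialDelaySeconds` reflects the kind of startup times you see with traditional application servers; too aggressive a liveness probe would restart the container in a loop before it ever finishes booting.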
As of now, we have migrated from a service-oriented architecture with a traditional technology stack into an event-driven architecture with almost resilient systems. That went very well. Let's look at the next project. Currently, we are creating a zero trust cloud platform for, let's say, old-fashioned workloads. What's the problem with cloud and old-fashioned workloads? You're predominantly doing some lift: you're lifting old applications to the cloud, and what you are lifting with them is security vulnerabilities. We need to take care of them. Next slide, please. Just a short excursion into attack vectors and what a zero trust architecture is. What does that mean? We will start with an example. Imagine a public-internet-facing web application based on the Struts 2 framework. Struts 2, still alive, almost 16 years old, with a lot of security vulnerabilities. We know those vulnerabilities because they are stored in a database we can just browse through. It's quite easy to get root access in that Struts 2 container, which has been put into a Kubernetes cluster. From there, it's quite easy to get the metadata of the instance and probably get root access to the Linux VM. And then, which roles have the most permissions and are secured the least? Dev roles. The role of a developer probably has plenty of permissions to access the database, and those are very easy for an attacker to get. Next slide, please. Now we see what lateral movement means: we find one weak spot and move on to another weak spot. Probably the next spot is our database with customer data; that's quite interesting then. You can imagine that most of our customers have something like trusted networks. Probably you also have that: once you're in the network, you are a trusted or authenticated user. That's very bad with regard to lateral movement.
Creating a zero trust architecture means each resource, each asset of the architecture, needs explicit role assignments, authentication, and so on. Assume compromise: that is what zero trust architecture means. Each resource can be compromised and needs explicit role assignments. So with that in mind, we are creating a zero trust architecture for our customer. On the next slide, we will see what it is about the weak technologies. We will put applications on the Kubernetes cluster which are going to connect to an old Cobol monolith, which in turn connects to an Oracle database; probably applications with a Spring Framework version that is five to six years old. Docker images were five to six years old. They will come onto the Kubernetes cluster, and we will lift security vulnerabilities with them. That means we need to mitigate those risks prior to any Kubernetes-related task: creating role-based access control within the infrastructure, but also within the Kubernetes cluster. We need to find insecure deployments and scan for them automatically; in the best case, they will not be deployed on the cluster, in the worst case, they are deployed and need to be isolated. Feed the findings into a SIEM and have AI-based defender tools. Those are things we need to build around all those Kubernetes clusters. Why does this customer even go into the cloud? Well, because he wants to modernize his infrastructure; he does not want to have mainframes involved. This is the very start of the cloud migration for this customer. Next slide, please. In summary, we recognized that almost all of our projects have a strong focus on enabling teams, and enabling teams for security. So Matthias, my question is: what is probably the best way to enable teams along the Kubernetes adoption curve? Well, thank you. I will try to answer that question in a bit. Before that, first of all, thanks.
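The "explicit role assignments for each resource" idea maps directly onto Kubernetes RBAC. A minimal sketch, assuming a namespace-per-team layout; the namespace, role and user names are invented for illustration, not taken from the customer's platform:

```yaml
# Illustrative RBAC sketch: a developer role that can read pods in
# one namespace and nothing else. Names are hypothetical.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a            # assumption: one namespace per team
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]   # read-only, least privilege
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
- kind: User
  name: dev-alice              # hypothetical developer identity
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The point of the sketch is the zero trust stance described above: the developer identity gets only the verbs it needs in only the namespace it needs, instead of being trusted because it is "inside the network".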
And I think it's also very important for our audience to understand and see what the business drivers and the business needs are, in which context we get into those engagements, and what all the technologies and tools are that surround Kubernetes, which is basically the common denominator across all the things we do there. Now, moving a little bit from business to technology. If you look at the stacks that Thorsten already showed, those were the two that were in there. We have other ones, in this example an adoption of Kubernetes on IBM Cloud, with monitoring stacks based on Instana and Grafana, and automation with Ansible and Jenkins. So you can see the technology landscape kind of changes, and it all revolves around that Kubernetes piece. And in case you cannot get enough of that, just take a look here; most likely you have seen it before. This is the CNCF landscape. And this is the thing we realized: it's not just about adopting Kubernetes. You're pretty much adopting an entire new ecosystem, and evaluating and putting the right pieces together to make a successful stack and a successful architecture for your client is one part that really takes a lot of time, and hence also money. Because it's not like there's one single golden combination which will always work. We found out that a lot of what we do is evaluating and comparing and making sure we pick the right tool for the right job. Which is a difficult thing, because probably at this very moment, as we speak, there are five new things popping up on that landscape. What we mentioned before was, of course, the observability factor. This is just one tiny sub-landscape within that landscape, and even there it's almost impossible to evaluate them all. So a lot of experience is definitely helpful there, but you still need to continue learning and stay on top of the curve.
So colleagues of ours have actually created a tool called OpenAPM. It's an open-source kind of documentation, so to say, where parties can contribute and figure out which technology artifacts would actually work together and which combinations are useful for collecting, storing and presenting the data of all the monitoring and logging metrics. I definitely recommend checking it out. So yeah, summing things up: there's the business side, there's the technology side, there's a lot of learning, a lot of things you need to do. And speaking of learning brings me to the final topic here, which Thorsten has already asked about. We try to provide training to all of our clients. Sometimes we do dedicated technology trainings, but very often we do trainings tailored to where we feel the need for certain participants. Beyond the training itself, of course, we try to prepare people not only to master Kubernetes; some of them also want to go in a certain direction and get certified. And this certification also comes with some implications. I'm not sure how many of you here have one of those Kubernetes certifications; there's the application developer, the administrator and the security specialist. We're probably closest to the CKAD. And what I took out here are two sample scenarios. This is all hands-on, and it's a very good certification where you have to demonstrate that you are able to handle the API and walk through those steps. Now, looking at tasks like "create five nginx pods, label some of them like this and the others like that" or "create a busybox pod and echo a message", they will definitely show that you are qualified to handle the API. But if I play an unfair role, I could also say: none of my clients has ever asked me to do that.
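In the exam, those two sample tasks are usually solved imperatively with `kubectl`; a declarative sketch of the same objects might look like this. The labels and the echoed message are made up for illustration:

```yaml
# Five labeled nginx pods, expressed as a Deployment with 5 replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    tier: web                  # illustrative label
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
---
# A busybox pod that echoes a message and exits.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-echo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello from CKAD practice"]
```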
So, of course they haven't, but what I'm trying to say is: these trainings are very much on a how-to level. You learn how to do things, but you don't really learn much about why you would do them, and what to apply where. Let me give you two examples. What we saw in the past, and again, we're not making this up, we just cannot put the client names on it: we saw scenarios where people were deploying one microservice, a single microservice, to an entire cluster. They wanted super high availability, which of course they got, but it was a very, very underutilized environment, as you can imagine. On the other side, we saw people putting all of their microservices, all of the containers, into one pod. And the thing is, if you relate that to those certification questions: kubectl and the API will never complain and say, what you're trying to do here is just wrong. It might run, but you might not be using things correctly. So one of the things we really try to do is work from a logical example which makes sense. We take a coherent microservice application, or pieces of the client's application, and do the exercises on them: how do you treat stateless and stateful things, how do you get the communication right, and how do you actually write or transform your application so that you get the most benefit out of Kubernetes? So you need to focus not only on the what, but also on why you would do things that way. Now, this brings things to an end. We probably could tell you a lot more stories of things we encountered, but time is short, and maybe we can get in touch about this later on. So I just wanted to summarize things for you real quickly. The first one, as I already said: if you make the decision to adopt or not adopt Kubernetes, just be aware it's not about Kubernetes itself only.
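The second anti-pattern, packing every microservice into one pod, looks roughly like this (the service names are invented). The API server accepts it without complaint, even though the containers can then only be scheduled, scaled and restarted together:

```yaml
# Anti-pattern sketch: unrelated microservices crammed into one pod.
# kubectl will happily apply this, but the services now share one
# lifecycle, one node and one scaling unit.
apiVersion: v1
kind: Pod
metadata:
  name: everything-in-one-pod
spec:
  containers:
  - name: orders               # hypothetical service names
    image: example/orders
  - name: billing
    image: example/billing
  - name: inventory
    image: example/inventory
```

The fix is not an API trick but a design decision: each independently deployable service gets its own Deployment and Service, which is exactly the "why" that a purely how-to training tends to skip.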
It will basically buy you into an entire ecosystem of new tools and technologies. And it's a lot of fun; Kubernetes is an awesome technology, and you can do a lot of things with it that you wouldn't be able to do without it. Our recommendation is to plan and allow yourself enough time to evaluate those things, because you will probably need it, and it will also make you more flexible to exchange certain parts, which makes your overall architecture more robust. You might have decisions about provider options, on- or off-prem, container build mechanisms, security, APM, logging, tracing and a lot more. Most of them still relate to traditional IT and would be there even without Kubernetes, but with Kubernetes, your options will multiply. Then, this is probably one of the biggest things: it's a lot of fun to play around with the technology, but it's also dangerous to get stuck at a level where you don't provide business value. Or, to put it in other terms: don't try to solve problems which have already been solved, because a lot of tools might already have pulled together the abstraction which is the right one for you. So my recommendation is: don't build everything from scratch. There is a tutorial called "Kubernetes the Hard Way" which does exactly that, and there is a reason why this tutorial is called that way. If you have the choice, use a managed Kubernetes environment; that enables you to focus on application development and provide value quickly to your clients. If you cannot use a public cloud provider, still try to use tooling for on-prem Kubernetes deployment and management. There are things like Rancher, Kubermatic, Giant Swarm, OpenShift and so on that will take away a lot of the things you would have to do yourself if you started from scratch.
And in addition to that, and I need to say this because I am a consultant: of course, try to get consultancy from people who have experience with it, because that experience will definitely save you time and spare you from stepping into too many problems unnecessarily. There are also distributions that come with a support service, so you have the possibility to call somebody and get help on your journey. Now about the skills. This is also something we got from the feedback of all of our colleagues: invest in the skills of your employees and give them the time to learn. I hear very often that Kubernetes has a low entry barrier; I don't really agree with that. Especially in the beginning, I found it particularly hard just to understand all the constructs and API objects. You need a lot of time to not only learn them but also learn to apply them. And please combine this with cloud-native software engineering aspects, because only then will you be able to take the full benefit out of it. Finally, this might sound pretty logical anyway: don't go all in, don't make it an all-or-nothing kind of decision. Do POCs, do lighthouse projects, evaluate, get familiar with things. Evaluate pros and cons, which will help you again later on the way. So, summarizing again: there is no silver bullet, and I think I'm running slightly over time here. So with that, I want to say thank you for listening, from both of us. You'll probably get our contact details on Sched, so if you want to reach out to us, feel free to ping us; we're definitely happy to talk more about these things. And with that, I want to say, yeah, thank you and bye bye. And we open up the line in the room for questions.