Let's kick it off. Just a few words about myself: Max introduced me really well, but I also find it very rewarding to give back to the tech community, and that's why I'm here today. This talk is very short, so we'll just scratch the surface, but I do have a more in-depth walkthrough of what I'm going to share today. So if you have any questions or would like to learn more, don't hesitate to reach out to me, or check out the resources that I will share at the end of my session.

Yes, let's kick it off. When it comes to managed Kubernetes services, we are talking about the Kubernetes-in-the-cloud offerings that cloud providers have nowadays. And there may be quite a few expectations about how much comes out of the box in terms of security, proper configuration, scalability, and so on. We may think that if it is managed, then there is not so much effort or time we need to invest into planning and preparing the infrastructure and the technical platform based on a managed Kubernetes service in order to make it production ready. Here you can see some of the questions that both I and others working on projects with managed Kubernetes services have run into, which is what resulted in this talk. I have been in a project where we actually had to stop the rollout to production, get back to the table and see what we needed to change. We started building a technical platform on a managed Kubernetes service really early, back in 2017, in the early days of this offering. So I have been through the whole evolution of managed Kubernetes services up until now, and I hope these insights can help you make better choices from the start of your journey. The simple answer to all of these questions is no: reality is a bit harsher than you may think.
The hard fact is that a managed Kubernetes service is not a platform-as-a-service offering. I come from a Microsoft tech stack background and have worked a lot with Azure Kubernetes Service, the managed Kubernetes offering from Microsoft, and if you check the documentation there, the first thing you see is that it clearly states that AKS is not a platform-as-a-service offering. The reason is that, yes, cloud providers do indeed manage the control plane, the backend of the Kubernetes clusters, but you as a consumer of a managed Kubernetes service are still responsible for securing, configuring, operating and updating both the control plane, to keep it up to date and compliant with the cloud provider's support policy, and the worker nodes, where your workloads and applications actually run. Here we meet the shared responsibility model again, a term that comes up all the time when utilizing cloud service offerings. We need to be aware of that and actually invest time into the day-zero phase: the design, architecture and planning phase of building a technical platform with a managed Kubernetes service at its core.

There are many discussions these days about whether Kubernetes is hard or not, and I would say it is not necessarily hard, but you need to understand that the learning curve is steep. Therefore it's important to take a layered approach and build that platform brick by brick. Rome was not built in a day, so it's worth investing enough time into the planning phase and preparations, and getting the whole picture of what you need to handle as a consumer of a managed Kubernetes service offering.
Now let's go into more concrete details and scratch the surface of the different areas you need to be aware of and invest some time into researching and planning. The first and foremost thing to think about when you consider adopting a managed Kubernetes service is application readiness: the applications you are planning to run in Kubernetes. One of the candidates often mentioned for running workloads in Kubernetes is a lift and shift of a legacy application. But a direct lift and shift of an application as-is will not necessarily fix its current problems and issues. At the bottom of this slide you can see some examples of organizations that succeeded on this journey, but if you look into the details, you can see that the architecture of those applications was actually ready to be hosted in Kubernetes: it was scalable enough, it was distributed enough, and the applications were lightweight.

If you have a legacy monolithic application, you need to think about how easy it will actually be to scale. Consider the container image size of such a monolithic application: it consists of the base image itself, the patches and updates that come on top of that base image, and your application packaged on top of all that. A .NET Framework base image, for instance, is four to five gigabytes in size. Then you have one or two gigabytes of updates on top of it, and if your application is also a few gigabytes, you suddenly have a container image that is 10 to 11 gigabytes. Between just pulling it from a container registry and any warm-up the application does when starting, you can imagine how hard and challenging it will be to scale.
Just putting such an application into a Kubernetes cluster will not solve its scalability problems. You need to look at the current state of your application and consider whether some refactoring or re-architecting is needed to prepare it for a technical platform built on containers and orchestration. You also need to think about your technology strategy overall. What is your roadmap? Are you dedicated to a single cloud provider? Are you planning to go hybrid? This will affect your choice of tools. And would Kubernetes be overkill? This is an important question too. If you have a single microservice, or a few very lightweight ones, maybe Kubernetes is overkill and serverless solutions like Azure Container Apps or AWS Fargate would be a better alternative, where even managing the worker nodes is abstracted away and you focus purely on the applications themselves. So this is also worth considering while planning and evaluating your options.

From application readiness we can move on to organization readiness, which is another very important part of this journey. I would like to point out that Kubernetes is hot these days: lots of organizations are adopting it, and it is really great where it brings value. But making the choice just because it is cool, or because Spotify or some other organization is adopting Kubernetes and succeeding with it, does not mean that you need to do the same. The main point is to do it for the right reasons. That's why I think the quote you can see here, that Kubernetes adoption requires making cultural shifts on steroids, is really true. We have seen it in one of the projects I have been involved in.
We have seen that it is important to start with alignment and a shared understanding of what it takes to adopt Kubernetes, even a managed Kubernetes service, and what it means for the different teams. For instance, we had an operations team coming purely from a Windows Server background. For them it was important to start early with understanding how to work with Linux, how to manage the nodes, how to perform debugging and monitoring, and how it all hangs together, because if they were only to discover what it actually takes to learn this and gain those skills right before rolling out to production, we would be in a very bad position. So it's important to ensure alignment across teams. And if you have changes that may affect, for example, how the sales team acquires new customers, or how upgrades work if you have an on-premises offering running in parallel with your cloud offering, you also need to ensure that the teams working with the customers are aligned and know whether anything needs to be communicated early on to prepare those customers. Making this change transparent, not only inside your own team but across departments, is really important. And keep these questions in the back of your head: Why are we doing this? Will it bring value to me, to our organization and our product, and most importantly to our customers? These are important things to think about and discuss during this initial day-zero stage.
While we are talking about organization readiness, we also need to think about our dear developers, because there is a lot going on these days, with many changes in how we host our applications and in the number of tools we introduce. If this is not planned properly and communicated with developers, it can end up increasing the cognitive load on developers, who suddenly need to know and do much more than just focus on bringing value to the application itself. I have seen this be a challenge for many organizations that have adopted Kubernetes, because with more autonomous teams, where there are not necessarily enough dedicated resources to handle the Kubernetes clusters or the building of containers, developers may end up having to do it themselves. And not every developer wants to learn many more tools on top of what they are already doing, especially if they are not going to use those tools on a daily basis. So making this easier for developers is important.

Here is one example of what happens when there are no proper checks or blueprints that provide the correct configuration for developers and help them do things right. It is a bit blurry, but that's not important. What you can see here are two applications that snuck into the Kubernetes cluster without resource requests and limits defined. Those applications had issues and started consuming far more resources than they were supposed to, which ended up exhausting all the resources on the nodes where they were running and bringing down all the applications that had been running fine in that cluster. You can see lots of pods in CrashLoopBackOff, in the Evicted state, in the restarting state. This caused quite a lot of chaos. Fortunately, it was not in production.
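As a sketch of the configuration those applications were missing, here is a minimal Deployment manifest with resource requests and limits; the names, image and values are illustrative, not from the talk:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app                # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-app
          image: registry.example.com/sample-app:1.0.0   # placeholder image
          resources:
            requests:             # what the scheduler reserves on a node
              cpu: "250m"
              memory: "256Mi"
            limits:               # hard cap: CPU above this is throttled,
              cpu: "500m"         # exceeding the memory limit gets the
              memory: "512Mi"     # container OOM-killed
```

With requests and limits like these in place, a misbehaving pod is throttled or killed on its own instead of starving every other workload on the node.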
But if there had been checks and policies in place that automatically block faulty deployments or faulty configuration like this, it wouldn't have happened in the first place. And it may not be the developers' fault to begin with, because there is so much to remember if you suddenly have to configure all of these things manually. So a main point when planning is to look into how you can reduce that load on developers and abstract the complexity away. Having automated checks, blueprints and templates that can be generated for developers, for building the container image or the deployment definition for Kubernetes, is really beneficial. Another beneficial thing is to have playground or production-like clusters, which also help operations start working with Kubernetes and containers early on. A great benefit we saw in one of my projects was hands-on workshops on concrete projects, where we sat together and did this live with the people who would need to work with it. Compared to just reading about something or doing it alone, doing it together has proven to give much more valuable, lasting knowledge to those who took part. So this is definitely worth considering on your journey.

We also hear a lot about GitOps, which is kind of the golden standard for how you deploy changes and keep them updated in your Kubernetes clusters: your repository is the single source of truth, and the clusters pull all new changes automatically when they get merged. The main point is that you need all the prerequisites in place: good automation, proper security management, and plenty of automated checks that help you ensure the quality and security of the changes being added.
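One concrete example of such an automated guardrail, using a built-in Kubernetes primitive, is a LimitRange: it fills in default requests and limits for any container that does not declare its own, and rejects containers that ask for too much. The namespace and values below are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-resource-limits   # hypothetical name
  namespace: team-apps            # hypothetical namespace
spec:
  limits:
    - type: Container
      defaultRequest:     # applied when a container declares no requests
        cpu: "100m"
        memory: "128Mi"
      default:            # applied when a container declares no limits
        cpu: "500m"
        memory: "512Mi"
      max:                # anything above this is rejected at admission
        cpu: "2"
        memory: "2Gi"
```

For stricter enforcement, such as outright blocking deployments that lack limits, admission policy tools like OPA Gatekeeper or Kyverno are a common next step.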
You do not necessarily need to start with GitOps, but it's important to think about how you can mature over time, and to look at it as an end goal as you gain more experience working with containers and Kubernetes.

The next important point is, of course, cost and sustainability. It is interesting to look at the report the Cloud Native Computing Foundation released in 2022, where 68% of respondents said that their costs had increased when using Kubernetes and managed Kubernetes services, and I think more than 30% said they had seen that increase over the last year or two of usage. There may be different reasons for that, but bringing cost into your planning, looking into what your cloud provider offers in terms of cost optimization, and using tools like Kubecost, Loft or CAST AI to see the trend in your cost and resource utilization will help you save money in the long run and use your resources efficiently. As a great side effect, focusing on cost optimization also means utilizing the resources you have to their full potential, so you may need fewer of them. That reduces your carbon footprint and gives you a more sustainable solution that is being used efficiently. Here are some examples of what to look into when you evaluate the offering you are planning to adopt. And cost savings are always great news for your management, so there is always a benefit to including this in your planning.

Here is one example I have taken from Microsoft Azure. You can see the different virtual machine sizes, and you can compare them: the virtual machines are where the nodes in your Kubernetes cluster run, so nodes are virtual machines.
You can see that the same virtual machine size has a totally different price for Linux, on top here, and for Windows. So if you have Windows workloads, it's important to be aware of that, and if you can migrate your application to run on Linux, you also get that cost optimization as a benefit. For some of these virtual machine sizes it's a 50% increase, which is significant. Another example is which region you choose to run your resources in. Here is one virtual machine size deployed in different regions: North Europe on top, West Europe in the middle and Norway East at the bottom. You can see the price increase, from 289 kroner in North Europe to 308 in West Europe and 339 in Norway East. So being aware of which region you deploy to, and whether you can deploy to a cheaper region while still staying compliant, for example with GDPR, is also worth considering during the planning stage.

The last area I want to mention today is security, and the notion of Kubernetes being secure by default is indeed a misconception. What you can see here are several studies conducted by different security organizations, and the numbers are the counts of Kubernetes API servers that are publicly exposed and reachable. It does not mean that you can execute commands on every single one of these exposed servers, but you can call them and get a response. And that means there is an attack surface that may be unnecessary, which is not a good sign.
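To illustrate what those studies measured: on clusters where anonymous authentication is enabled (the upstream Kubernetes default), even an unauthenticated caller gets an answer from endpoints like `/version`. A hedged sketch, with the address left as a placeholder:

```shell
# Unauthenticated probe of a publicly reachable Kubernetes API server.
# On many clusters, anonymous requests may read /version, which both
# confirms the server is exposed and reveals its exact version.
curl -k https://<api-server-address>:443/version
```

Getting a response here does not mean the cluster is compromised, but it confirms an attack surface that usually does not need to exist.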
So when you are planning, it is important to understand what is secured by default in these Kubernetes services and how to secure the rest, and to look into how you can run continuous, automated security checks throughout the whole software development lifecycle and while your applications are running in production. Of course, this is not always easy in every aspect. One example is when you want to avoid running your containers as the root user. Some base images provide that capability out of the box: NGINX, for example, has a version of its container image called unprivileged, where the root user is stripped out by default. But in many cases it is not as easy to achieve. An ASP.NET Core base image does not, by default, let you run as a non-root user, and often the reason is port binding: exposing ports up to 1023 requires a root user. So if you want to ensure that an ASP.NET Core image runs as a non-root user, you need to add some additional implementation yourself, until the owners of the base image come up with an alternative solution. Here are some examples of what to think about and look into when you containerize your applications and create the deployment definitions to roll them out in Kubernetes. The goal is to minimize the attack surface, follow the principle of least privilege, and shift left, starting early so that you find potential issues and misconfigurations early in the development lifecycle. And there are frameworks and standards out there that can help us with that.
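As a hedged sketch of the ASP.NET Core workaround just mentioned, one common approach is to bind the application to an unprivileged port (above 1023) and switch to a non-root user in the Dockerfile; the image tag, user name and app name here are assumptions for illustration:

```dockerfile
# Illustrative only: image tag, user name and app name are placeholders.
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY ./publish .

# Listen on 8080 instead of the privileged port 80,
# so no root privileges are needed for the port binding.
ENV ASPNETCORE_URLS=http://+:8080
EXPOSE 8080

# Create an unprivileged user and run as that user from here on.
RUN adduser --disabled-password --gecos "" appuser
USER appuser

ENTRYPOINT ["dotnet", "SampleApp.dll"]
```

On the cluster side, setting `runAsNonRoot: true` in the pod's securityContext makes Kubernetes refuse to start any container that would run as UID 0, which turns this convention into an enforced check.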
Here are some examples, and I will also link them in the resources at the end so you can check them out. Some of them, like OWASP, even come with a cheat sheet containing concrete examples of how to handle each security control and configure things properly. You can use these as a foundation and a guide to help you do this correctly, and cloud providers also offer policy services that can help you enforce these checks and even automate remediation of misconfigurations to some extent. So this is definitely worth enabling and following from the start.

Finally, the last point I would like to mention today is day-two operations; we can't quite avoid thinking about it even during day zero. Day-two operations is about what happens when you are ready to roll out to production, when you hand over responsibility to operations, or when your own team starts operating it in production. It's good to be aware of this, and maybe even plan some exercises beforehand, so that you can verify that you have efficient routines before you are up and running in production. Here are a few examples that serve as proof that things will fail, not only on your side but also on the cloud provider side. You can see an example of region outages in the US for AWS, and an example of a faulty update to Linux virtual machines in Microsoft Azure back in the summer of last year, which caused problems and downtime for services, with Azure Kubernetes Service affected across all regions. So this is proof that things will also go wrong on the cloud provider side, and you need to be aware of that and as prepared for it as you can be. There are some great opportunities for what you can do here, but it's good to know what you actually need to do.
Maybe the most important thing here is to keep your Kubernetes clusters upgraded, because cloud service providers typically define strict version support policies, and if you do not have routines for constantly and efficiently upgrading your clusters, you may end up running a non-compliant version. In some cases the cloud provider may essentially have to upgrade it for you if you refuse to do so, because an outdated version poses a security risk for them and their infrastructure. So you need to establish a routine for efficiently upgrading your clusters and your nodes, and keep the base images for your containers constantly updated as well, automating as much of it as possible. It's also good to run disaster recovery and business continuity exercises, to ensure that if things go wrong, you have the proper automation in place to bring your environments back up as quickly as possible. The goal should be not to rely fully on the cloud provider, and to involve operations and let them start operating those clusters as early as possible.

So this was a quick walkthrough of what it takes to get production ready with a managed Kubernetes service. As I said, if you would like to know more, I think these slides will be shared with you after this session. I would also like to share this QR code: it goes to the GitHub repository where I have created a list of useful resources, and where I will be adding some labs and blueprints that you can roll out to play around with parts of this yourself. So do check it out if that interests you. And I will be happy to chat more about it, so don't hesitate to send me a message or ask me any questions you may have on the topic. Thank you.

Thank you very much, Christina. That was so awesome.
I can relate so much to this, because we also do managed-service migrations all day long and always need to take the magic out of it: look, it can still fail even though it's managed. So true, so true. Let me quickly check if there are any questions from the community... not yet. So, you have put some of your resources together in your GitHub repository, but what do you think, in a nutshell, are the three major things someone has to look out for when they really start moving to a managed service?

Yes, there are of course so many things that this can quickly become overwhelming, and at the same time it is quite challenging to narrow it down, because everything is kind of important. But from my own experience, and from the experience of the teams we have talked to who have faced the same challenges, I think the three most important things are these. First, the readiness of your applications, because this has proven to be a challenge: it is not necessarily enough to just package an application in a container and put it out in the cluster. You often end up re-architecting the application to make it better suited for the cloud and for scaling properly, according to best practices in Kubernetes. The second point is alignment in the organization, because we have also seen it be a challenge when a team works in isolation on building the platform and then suddenly realizes they have not aligned with the operations team or with the developers. They thought they had created something relatively easy to use, but when it came to practical application, they saw that it actually created a lot of complexity and frustration. So having that transparency and continuous feedback from the other teams is definitely something worth thinking about how you are going to do.
And the third thing is security, and I can't stress it enough, because you also need to remember that your customers will want to know which infrastructure you are running on, given how many vulnerabilities are coming out these days. Many customers will come back to you and say: we want proof that you are compliant with this and this framework, and you need to provide that proof. If you haven't planned for that, if you haven't worked on configuring the clusters in compliance with those frameworks, you may need to refactor quite a lot, because some of those controls cannot easily be introduced on an existing cluster; it may not even be possible. So you would have to go back and make many more changes, which takes more time. So it's definitely worth thinking about these three to begin with.

Again, thank you very much. I wish you a great day. Stay safe. See you soon. Thank you.