Hi everyone, welcome to Open Source Summit Europe 2022. Thanks a lot for tuning into my virtual talk. I would have loved to be there in person in Dublin, Ireland for the very first time, but I'm glad I still have this option of speaking to you. My name is Lukonde Mwila, but most people call me Luke, and I'm a Principal Technical Evangelist at SUSE. And that's always such a mouthful to say. But if you think that's a mouthful, wait till you get a load of the title for my talk, which you should have already seen at this point: how to get your application developers to fall in love with Kubernetes and cloud-native applications. And you might be wondering why I would bother with a topic like this and, maybe more importantly, what value you will get out of it. Those are great questions to ask, and I'm going to answer them by taking the long way around to provide as much context as possible. Before becoming more involved in the cloud and ops space, I spent a considerable amount of time in application development. I worked on business systems, SaaS products and different kinds of bespoke solutions. After doing that for a couple of years, I transitioned into a new role to focus on cloud solution architecture, implementation of enterprise systems in the cloud and building out DevOps workflows. When I moved to the DevOps world, I worked a lot with containers and container orchestration platforms like ECS and Kubernetes. Nowadays I'm in developer advocacy, in the cloud-native space to be specific, so I'm still part of the cloud and ops world, I would say. Now, you might have noticed that over the last couple of years there's been an upward trend in container and container orchestration adoption, especially with Kubernetes. I mean, just take a look at the surveys from the DevOps Institute, the CNCF and Stack Overflow, among others. Each of them attests to this. Evidently, there's a lot of love for these technologies because of the problems that they solve.
And please, hear me out. They are certainly not magic bullets for every single use case, and probably now more than ever, popular technologies like these are likely to be adopted for the wrong reasons. But if you are in a situation where you're looking to achieve workload portability and you need a robust and intelligent system to help you orchestrate large workloads at scale, then Kubernetes can help. It has a large and growing community, and its infrastructure agnosticism better enables workload portability for your applications and workflow portability for your teams. But not everyone is a fan. I remember saying that at a conference earlier this year, and one guy literally responded with probably the most baffled facial expression I've ever seen in my life. He looked around the room trying to find who these Kubernetes haters were, almost getting out of his seat. And look, I don't know every reason that Kubernetes doesn't get love from everyone. But when I heard that there's little to no love for it from some application developers out there, I did want to do some investigation, understand why, and help potentially resolve some of the issues, especially in situations where container and Kubernetes adoption actually makes sense. Now, as containers and Kubernetes have become the desired one-two punch for a number of companies running microservice architectures, I think a lot of organizations and teams have failed to create optimal cloud-native development workflows and frameworks to support developers. For some companies, Kubernetes administration has extended beyond operators and DevOps engineers to the application developer. All of a sudden, their purview also consists of node configuration, optimization of manifest files, Helm charts, or Kustomize templates. And it's great to have cross-functional teams, but it appears that many developers out there are having to be cross-functional in and of themselves.
And it's impacting their main responsibilities. Naturally, they don't have the most gleeful response. So assuming the shoe fits and containers and Kubernetes are the right tools for your project, how do you create a system for application developers to thrive in the context of your cloud-native project? That's what this talk will be dealing with. It's for development managers, team leads, tech leads, DevOps engineers, and, of course, application developers who are involved in cloud-native projects for Kubernetes. The aim is to help you develop a framework or workflow that supports the respective roles in your cross-functional team, especially app developers. Because as I mentioned, there seems to be somewhat of a trend where app developers are expected to become cross-functional in their roles to the detriment of their main duties. And lastly, this talk is for a context where it actually makes sense to use Kubernetes. It's not advocating Kubernetes ubiquity in every single scenario. So first things first. As much as we're trying to marry the two worlds of app development and Kubernetes, or find a complementary model between the two, we have to take a step back and realize that at the root is a culture issue where technology is dictating the way people work. Just because you're using Kubernetes, that doesn't mean the developer life in your organization should change in its entirety. Now, of course, there has to be some measure of adaptation, but contextually, it should fit within the developer's world, and not the developer's world fitting into Kubernetes. A big part of the problem is that Kubernetes operations have extended beyond the DevOps space into the developer world, and we need to reel it back in. Perhaps the teams that are most susceptible to this are the ones that are still trying to figure out DevOps workflows as a whole. And it's definitely not easy, and I know that from experience.
Now, DevOps was born out of the desire to dismantle the dichotomy between the software development process and the traditional IT operations that followed and supported it. Software typically undergoes the iterative process of new version releases, which means this handoff to IT operations was not a one-off, but rather an ongoing loop. And this division was actually the root of the problem. The siloed nature of the two spheres led to poor collaboration between the respective teams, and as a result, you would end up with an inefficient workflow despite the goal of releasing and supporting high quality software. Now, breaking down the traditional barrier between dev and ops is meant to improve communication and collaboration, not shift responsibilities. Kubernetes is a platform that enables you to have intelligent and automated approaches to running and operating your applications because of its self-healing capabilities and its declarative APIs. However, even though a number of the complexities are abstracted away, it still requires hard work. And I really like what Paul Dix said in his article titled "Will Kubernetes Collapse Under the Weight of Its Own Complexity?" He said that Kubernetes made the simple things hard and the hard things possible. And I think that's true. Kubernetes makes hard things like resilience, high availability and scalability possible, but the optimization and configuration of Kubernetes requires time, skill, and ongoing work, not for developers, but for operators. These operators or DevOps engineers should be working closely with developers without any silos, because no one understands the nuts and bolts of the applications like the developers do. And similarly, the developers should work closely with ops to optimize the application for the cloud-native context, and the operators are in the best position to inform what that will look like, without handing over responsibilities.
To illustrate this, I wanna talk a bit about a project that I was a part of, where I functioned as one of the DevOps leads. We had multiple clusters, one for each environment of the solution, and the system as a whole was comprised of several microservices. These microservices were worked on by a number of development teams that operated with a collective ownership model. And in this project, the development Kubernetes cluster, that is, the cluster for the dev environment, was in a cloud context, so it was remote. One of the benefits of this approach is that it reduces the disparity between your dev and your production environments and gives you a more realistic picture. However, we still had to think about creating an optimized and cohesive workflow between developers and DevOps, and this required close collaboration and communication from both sides. Our input and knowledge of DevOps methodologies was able to influence and shape how the developers structured the applications, including other things like common dependency libraries and source code repositories. Similarly, their input helped us understand the best way to create Git strategies that supported our GitOps deployment model and, of course, become aware of the Kubernetes resources that needed to be created. Now, looking back on this approach, I think we got a couple of things right, primarily around the outer development loop of integrating, building, testing and shipping the software to the respective downstream clusters for every environment, except the dev environment. Having a remote cluster and a fully fledged CI/CD pipeline for the dev environment may have given us benefits in terms of a realistic depiction of prod, but it had a negative impact on the inner development loop for developers. The inner development loop is the iterative process of coding, building, running and testing an application. It's the bread and butter of development and is an ongoing cycle.
Typically, this cycle is carried out locally on a personal workstation for more freedom and speed. It's where devs get to break things without worrying about the next person, because the blast radius is within the confines of their machine. Going with the remote cluster meant that developers had to be extra careful about their changes to avoid a cluster-wide issue, because that would impact other developers and operators that were busy with that cluster. In addition, having a fully fledged pipeline for the dev environment meant that the process of seeing changes reflected in the cluster was a lot slower, because the builds would typically take time, and that's if they passed. So in the context of building cloud-native applications, supporting the inner development loop essentially means being able to quickly go through the iterative process of coding, building, running and testing applications. Even in the context of Kubernetes, localizing the inner loop of development works to the advantage of developers. Once you've created a framework around these principles and contextualized it for your use case, you can then proceed to select tools that will be a translation of that solution. If you start with the tools, you're just putting the cart before the horse, and it might fail to accurately cater to unique situations in your particular use case. I'm now going to demonstrate a workflow that supports working with a basic local cluster and a framework that automatically builds and deploys my application source code to a local cluster without me having to be an expert in creating Kubernetes resource definitions. And to do that, I'm going to use Rancher Desktop and Skaffold, both of which are open source projects. Rancher Desktop is very lightweight and has an intuitive UI designed to make cluster management easy. In addition, you can easily modify the compute and memory specifications of the single-node cluster that it gives you.
And if you need to upgrade or downgrade the cluster version, you can do that from the UI. The goal is to give you a basic cluster to test your apps and not have you drowning in cluster administration. And then there's Skaffold, which abstracts the process of building and deploying your containers to the cluster that your kubeconfig has as its current context, whether remote or local. It's automatic and gives a real-life feel of the outer development loop in a quick and local context. And the Skaffold config file can also be used by your DevOps engineers for the fully fledged pipeline, and they can get involved in writing it for the local builds that your developers need to do. So what am I trying to accomplish with this demo? I want to demonstrate a workflow that supports the inner development loop and still invites collaboration from DevOps engineers for my cloud-native app. I'll start off with local development to illustrate what it would look like for a developer building an application for a Kubernetes cluster. The developer will be able to go through the normal lifecycle on their local machine. And after the build, deployment and testing process, once the relevant application changes have been reflected to see what they'll actually look like in a local cluster, those changes are going to be committed to a remote Git repository in GitHub. Once those changes are detected, GitHub Actions will then go to work and go through a very similar process, or rather the same steps that were carried out locally. After that, it will also build, test, and deploy the Kubernetes resources to the downstream cluster, which is EKS in this particular case. Now, something that I do want to mention is the commonality between the local and the remote context because of Skaffold. The same configuration file used for the local build, test, and deployment process will also be used for the remote context.
And this is the framework, or this is the tool rather, that is supporting the solution that I'm demonstrating in this particular case. It's not the only one out there, but it's certainly an example of creating an optimized workflow between your inner development loop and the outer development loop as well. All right, so now I am gonna delve into the other pieces of this. Let's start off by just looking at Rancher Desktop real quick. As you can see, it's running and it has already automatically updated the kubeconfig context for me, setting it to the Rancher Desktop cluster, and I'm going to now switch to K9s in my terminal just to show you that the current context is Rancher Desktop, as you can see over here on the left-hand side. K9s is an open source tool that you can use for cluster management, basic cluster management that is. And I love working with it because it does simplify some of the operational tasks around interacting with a Kubernetes cluster. As you can see, I've only got six pods running at the moment across all of my namespaces. None of these represent the application that I'll actually be working with. All right, let me switch back to my editor, and I'm gonna close these diagrams because they're not necessary anymore. I'm working with a basic Node.js application, and for those of you that are interested in replicating what I'm about to demonstrate, you're more than welcome to clone this project from my GitHub profile; the application is called Node.js Skaffold app. If you just look in the top left corner of the editor over here, you'll see the name over there. You can just go to my GitHub profile, that is LukeMwila, uppercase L and uppercase M. You should be able to find it, and you can just search through the relevant repositories and you'll come across this particular one. All right, so if you're not a Node.js expert, that's totally fine. This is not the focus of this at all.
The goal is just to demonstrate, using an example application, that inner development loop and that workflow, as well as extend that to the outer development loop, so you can see a full life cycle of going from application dev through to deploying an application into a production cluster, if we could call it that. All right, so not to worry if you're not familiar with Node.js at all. What we have in front of us here is the main file for this application, the app.js file. This is where all the main configuration is for the Express framework, which just simplifies the process of creating a REST API with Node.js. I have a single route over here. You can see it is called /test, and I return the response "Simple Node.js app is working as expected" as the string or text response to a request that gets sent to that particular endpoint. In addition, under the test directory, which is under source, just so you can see over there for folder navigation, index.js is where the test script lives. And this is relevant because I wanna be able to test my application locally as well as remotely, and all of this will be defined inside of the Skaffold configuration file, which you will see shortly. And just to walk you through the test real quick: as you can see over here, based on the request that gets sent to /test, I expect to see a response with the correct status code, which is 200, because this endpoint actually exists. In addition to that, the text response should be a string, and I wanna make sure that it matches this exact same wording as well. All right, so the next thing that you should be familiar with is the manifests.yaml file. And this is a file that can basically be worked on by both the DevOps engineers and the developers.
If you want, this manifests.yaml file could solely be the responsibility of your DevOps engineers, and that makes total sense, because again, we don't want our application developers to spend too much time working with Kubernetes resources, since it doesn't fall within their purview, as I've already shared earlier. However, it's still good for them to collaborate, that is, developers and DevOps engineers. So DevOps engineers could write these manifest files and then walk developers through them to help them understand exactly what is going on. For the sake of context, I'm just going to walk through it quickly. This is my Deployment Kubernetes resource at the top over here. It is a controller that's going to ensure that I always have three replicas of my pods running at all times for this same Node.js application, which is called express-test. And if I scroll down over here, you'll see that I've also specified the relevant image repository for it. It is lukondefmwila, which is my Docker Hub account, and then express-test. I've set some resource limits on this, and as you can see, the container is listening for traffic on port 8080. And because I want my application to be accessed outside the context of the cluster, I'm going to be creating a LoadBalancer service, also accessed through port 8080, and it will then forward or proxy the traffic through to the target port, which is 8080. All right, so this is important because this is going to be used by Skaffold as well. And I just have it locally over here. This is not the only way to set it up, but I think it will still give you a good idea of how Skaffold works. And then we come to the Skaffold configuration file, and this configuration file defines how the application is going to be built, tested and deployed. As you can see, if you look at the main properties under here, you'll see that same kind of framework of build, test and deploy. And it allows you to define details such as your artifacts.
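For reference, a minimal manifests.yaml along the lines just walked through might look like this. The replica count, image repository and ports are from the demo; the label names and specific resource limit values are illustrative assumptions, not the exact file from the repo.

```yaml
# Sketch of the manifests.yaml described above: a Deployment keeping three
# replicas of the express-test container, plus a LoadBalancer Service that
# proxies external traffic on port 8080 through to the pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: express-test
  template:
    metadata:
      labels:
        app: express-test
    spec:
      containers:
        - name: express-test
          image: lukondefmwila/express-test  # Docker Hub repository from the demo
          ports:
            - containerPort: 8080  # the container listens on 8080
          resources:
            limits:
              memory: "128Mi"  # illustrative; the talk only says limits are set
              cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: express-test
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080  # proxy through to the container's port
  selector:
    app: express-test
```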
As you can see over there, mine is for this particular image repository that I've set over there. I'm working with Docker, and I specify the Dockerfile that I want to use to actually go through the steps of building the image, and this Dockerfile is local; it's the one that is over there. For the testing process, again, I specify the relevant image, and then I have a custom command. In this case it is just npm run test, which will run that exact same test that I showed you earlier inside of that index.js file. And then lastly, for the deployment process, I'm using kubectl, and I'm working with raw manifests that live inside of that manifests.yaml file. This is not the only approach you can take; this is just to give you a good idea of how you can shape your Skaffold configuration file. And the beauty of this, as I mentioned earlier, is that it can be used both for local builds and for remote builds. This file doesn't have to change once you've got a particular blueprint that you're happy with for both of these contexts. Of course, there might be situations in which there are certain things that you want to apply only for a local context, and you can read more about those properties in the Skaffold documentation. Or, if you find that what you have locally works perfectly fine for the remote context, then you can go ahead and stick with that. But as I shared earlier, I don't want you to worry too much about the tools. As much as I'm demonstrating using Rancher Desktop and Skaffold, that's not so much the focus. The most important thing is coming up with a solution and then picking the right tools for that solution based on how well they support that particular framework that you've developed. And lastly, we come to the main.yaml file. This is the configuration file that GitHub Actions is going to use to go through the CI stage.
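Putting those build, test and deploy sections together, a skaffold.yaml along the lines described might look roughly like this. The image repository and npm command are from the walkthrough; the apiVersion and exact field layout are a sketch against Skaffold's v2beta schema, so check the Skaffold documentation for the version you're running.

```yaml
# Sketch of the skaffold.yaml described above: build the image with the
# local Dockerfile, run the npm test suite as a custom test, then deploy
# the raw manifests with kubectl.
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: lukondefmwila/express-test  # image repository from the demo
      docker:
        dockerfile: Dockerfile  # local Dockerfile used for the build
test:
  - image: lukondefmwila/express-test
    custom:
      - command: npm run test  # runs the test in src/test/index.js
deploy:
  kubectl:
    manifests:
      - manifests.yaml  # raw Kubernetes manifests shown earlier
```

Because nothing in this file is specific to a local cluster, the same blueprint can drive both the inner loop on Rancher Desktop and the remote pipeline against EKS.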
And for starters, as you can see over here, I've got a number of sensitive values or properties that live inside of my GitHub Actions secrets, and I'm storing them as environment variables in the CI stage. I've got the access key ID and the secret access key for my AWS profile; that'll be relevant because I will need to connect to my EKS cluster. I've also got the details of the cluster name and the region that it's provisioned in. And lastly, the Docker ID and Docker password, because I'm going to be pushing my built image to my Docker repository inside of Docker Hub. All right, let's go through the different steps, and you can just pay close attention to the bright purple comments over here to help you understand the flow, because that's the most important thing, the high-level steps. I start off by installing the Node.js dependencies, and then I proceed to log into the Docker registry. After that, I install kubectl, because I'll be using that, and then I install Skaffold, because that is the main component for building, testing and deploying my application. In addition, I am caching the Skaffold image builds in the configuration. Then I make sure that my CI environment is correctly set up in terms of configuration with the AWS profile that I'll be working with. I do make sure that the AWS CLI is actually installed there, and it should be by default. After that, I just proceed to configure the profile, as you can see over here; that's what I'm setting up. I also set the correct region based on where the EKS cluster has been provisioned. And this last command over here, aws sts get-caller-identity, is just to verify the particular profile that I'm working with. The last three main steps are connecting to the EKS cluster, and this command over here, aws eks update-kubeconfig, will set the region and update the kubeconfig file to use the context for the specific EKS cluster; that's the name of the cluster that is provided over there.
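Pulling those high-level steps together, the main.yaml workflow might look roughly like the sketch below. The step order follows the talk, but the job name, secret names, and action versions are my own labels for what's described, not the exact file, and the kubectl/Skaffold install steps are elided.

```yaml
# Sketch of the GitHub Actions CI workflow described above.
name: ci
on:
  push:
    branches: [main]
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      EKS_CLUSTER_NAME: ${{ secrets.EKS_CLUSTER_NAME }}
      EKS_REGION: ${{ secrets.EKS_REGION }}
    steps:
      - uses: actions/checkout@v3
      # Install the Node.js dependencies
      - run: npm ci
      # Log in to the Docker registry
      - run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_ID }}" --password-stdin
      # (install kubectl and Skaffold here; install steps elided)
      # Configure the AWS profile and verify the caller identity
      - run: |
          aws configure set region "$EKS_REGION"
          aws sts get-caller-identity
      # Point the kubeconfig at the downstream EKS cluster
      - run: aws eks update-kubeconfig --name "$EKS_CLUSTER_NAME" --region "$EKS_REGION"
      # Build, test and deploy using the same skaffold.yaml used locally
      - run: skaffold run
      # Verify the deployment
      - run: kubectl get pods
```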
Of course, it's masked by the environment variable name. And then we've got build and deploy to the EKS cluster. For that, I simply run skaffold run, and remember, because the Skaffold configuration file, the skaffold.yaml file, is going to be in the source code inside of GitHub, it'll be using that same file to run through those same steps that were run locally as well. And then lastly, I'm just gonna verify the deployment by running kubectl get pods. All right, so let's actually see this in action. I'm going to close everything else to the right, and I'm going to open up my terminal. I'm going to come here, and the first thing that I wanna do is run skaffold dev. That essentially sets up a development environment with Skaffold, and it will be in a continuous watch loop, so that every time it detects a saved change, it will go through the build process and deploy the relevant changes as containers, or a container rather, to my Rancher Desktop cluster, because that is the one set as the current kubeconfig context. Awesome. As you can see over here, these changes have been deployed. I should have three replicas running inside of my cluster. To verify that, I'm gonna come to K9s, and as you can see over there, we've got three replicas of the express-test application. Let's open up the browser. As you can see, I already had it open, but I'm just gonna refresh that just so you can see for yourself. So "Simple Node.js app is working as expected". Now, if I come back here and make a change, and I'm just going to copy and paste this for the test so we don't run into any issues. I'm gonna switch back to the terminal, and you can see Skaffold has already gone to work, because it detected those changes that I saved. All right, looks like my deployments have stabilized. I'm gonna head back to the browser and I'm just gonna refresh this. Oh, there we go.
And you can see that that change has reflected now: this simple Node.js app is working as expected. Great, so that's just to show you an example of how quickly you can get your containers built and deployed to the relevant local cluster to see the reflected changes. A developer that is homed in on building out new features, or refactoring, or fixing bugs, which is the normal process in the inner development loop, still gets to go through this streamlined flow and see quick feedback on what they were doing. They don't have to know all the nuts and bolts, or the ins and outs, of building an image and creating the relevant Kubernetes resources in order for those changes to be deployed to a Kubernetes cluster. They can collaborate with a DevOps engineer to create the relevant manifest resources for the particular application or applications that they are working on. And once that is set up, they can have a workflow like this, and the focus remains on the application itself, even though it's a cloud-native application and they are working with Kubernetes. What I wanna do now is actually proceed to show you how you can extend this to the outer development loop as well, by committing these changes to the remote repository. So I'm gonna open up the terminal. Fantastic, we are in the clear; our pipeline has passed. And so if I switch to my endpoint, the DNS domain name for my load balancer that is actually proxying traffic through to the replicas inside of my EKS cluster, this is the previous response. I'm just gonna refresh that, and you can see the simple Node.js app is working as expected.