Thank you. Thank you for joining us here. Today we'll be talking about how you can improve your developer productivity environment using tools in the ecosystem like Tilt. I'm Yuraj. I work at VMware as a senior member of technical staff. I mostly work on Cluster API, a sub-project in SIG Cluster Lifecycle. Hey, folks. I'm Sagar. I am a staff engineer at VMware, and I work on the CAPV project, which is the vSphere provider for Cluster API. So quickly, what's the problem that we're trying to solve? With modern applications getting increasingly complex and distributed, it becomes hard to manage these applications. Kubernetes is great at managing this complexity and the infrastructure that comes with it. However, the development flow for these applications is not so great. So what makes a good development flow? This can vary a little depending on the application you're working on or the topology of your deployments, but in most cases a good development workflow will have these four things going for it: it should be easy to spin up, it should have a quick feedback loop, it should be debuggable, and it should give you good visibility into what's happening with the application. Let's take a look at each of them, starting with easy to spin up. What does it mean to have an easy-to-spin-up development environment? It simply means being able to go from the source code you're working on to a working application environment. Again, this can vary a little depending on what your application looks like, but it generally looks something like this: you have a Kubernetes cluster, you build some things from your source (some binaries, some container images), pull something from your repository, generate or write some YAMLs, and then apply them to your cluster. Let's start with the cluster.
Depending on the size of your application, if it is small enough, maybe you can just run a kind cluster locally on your laptop; kind, minikube, there are many solutions. Or if you are working with a big application, then maybe you would choose to run a remote cluster and work with that. Next comes the part where you have the source code and need to build things out of it. You may need to compile some binaries, build some Docker images, push them to a registry, and have some YAMLs written so that they pull the right images, with the Services and Deployments and RBAC rules all set up, and then get applied to your cluster so everything shows up. All of this should be as simple as running one command; it should not be more complicated than that. This is very important because it helps new engineers onboard onto the project very quickly, and it becomes especially crucial if you're working on an open source project, because having such a simple environment enables new contributors to come and explore your project, see how it works, and see if they can jump in. Getting your hands dirty is the best way engineers learn how things work. So it is pretty crucial to have a quick, easy-to-spin-up development environment. How do we do it? Tilt is a great tool in the Kubernetes ecosystem that is purpose-built to handle the pains that come with developing microservices, and more. Tilt is used by Cluster API, where we write Kubernetes custom operators. How does it work? With Tilt set up in your project, you should be able to spin up a development environment by running a single command: tilt up. How do you do that? Imagine you have a project repo with multiple subfolders for each of your services. All you need to do is add a Tiltfile. A Tiltfile is written in Starlark, a dialect of Python, so that means you get the full power of conditions, loops, functions, and everything else.
Here you have a quick example of how simple it can be. Tilt also comes with a lot of built-in functions that help you provide instructions for how the entire application needs to be built and deployed. Here I'm imagining a Go application, so you just need to build a Go binary, package it into a Docker container, and then apply the YAML that deploys the container onto your cluster. Just three instructions. Let's take a quick look at how Cluster API does this. I have the Cluster API repo here. Cluster API has to deploy four controller images that run around 14 controllers, it has to install cert-manager, it has to install CRDs, and so on. So it's a little complicated, but you need all of this done to have a working Cluster API environment that you can then play with. Here I just have a kind cluster running locally; I'm not going to use a remote cluster because Cluster API is small enough to run on a local kind cluster. All I have to do is run tilt up in the project, and it sets up all the stuff that I need for a working Cluster API environment. You can see that it already has all the providers installed, all the deployments are running, and everything is great. If you look at the Tiltfile, I'm not going to go through the entire thing, but let's look at some important pieces of it. You have one instruction that tells Tilt how to build a Go binary: go build with certain flags set; we'll talk about those flags in a little bit. Once you have that go build command running under a local_resource, you can add the Tilt instruction to docker build an image out of the binary you just built, and then you have instructions to apply that to your cluster, and that's it. Take it away, Sagar. Thanks, Raj.
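The three-instruction flow described above can be sketched as a minimal Tiltfile. This is an illustrative example, not the actual Cluster API Tiltfile; the binary path, image name, and manifest path are all hypothetical.

```python
# Tiltfile -- a minimal sketch for a single Go service.
# Paths and names below are illustrative placeholders.

# 1. Build the Go binary whenever the source changes.
local_resource(
    "manager-binary",
    cmd="CGO_ENABLED=0 GOOS=linux go build -o .tiltbuild/manager ./main.go",
    deps=["main.go", "api/", "controllers/"],
)

# 2. Package the binary into a container image.
docker_build(
    "example.com/manager",
    context=".tiltbuild",
    dockerfile_contents="""
FROM gcr.io/distroless/static:nonroot
COPY manager /manager
ENTRYPOINT ["/manager"]
""",
)

# 3. Apply the manifests that deploy the image to the cluster.
k8s_yaml("config/deployment.yaml")
```

With a file like this in the repo root, `tilt up` runs the whole chain and re-runs the affected steps on every source change.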
So now that we have our development environment up and running, it's time to actually make code changes and have fun. For a Kubernetes-native application, how this would look is that you make changes to your source code using your favorite IDE, then you build the image using, say, docker build, then you push that built image to a container registry. You then go ahead and edit your deployment manifests, your deployment YAMLs, to use this updated image. And then you play the waiting game and wait for the pods to roll out. Once the pods have rolled out, that is when you have access to your new application. But wait, what happened? What was I actually trying to debug? This is a situation I have found myself in: because of waiting this long, I tend to get distracted, and I actually forget the code changes that I was trying to debug or look at. Using Tilt, you can update your application in real time, so that the feedback loop between you making the code changes and the newer version of your application being deployed onto the cluster becomes smaller. This is absolutely essential for developer productivity and lets you build applications faster. If we look at this diagram, it describes how Tilt enables live update, how Tilt deploys newer binaries into your running containers. Say, for example, we have our source code, and we instruct Tilt to watch those source files and directories. Any time there's a change in any of the files, we ask Tilt to go ahead and build a new version of the binary with the code changes. We can also instruct Tilt to take this binary and push it into an already-running container.
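The manual inner loop just described can be sketched as a series of shell commands; the image name and Deployment name here are hypothetical placeholders. Every one of these steps is repeated on every code change, which is where the waiting game comes from.

```shell
# Hypothetical manual workflow, repeated for each code change.
docker build -t registry.example.com/myapp:dev-2 .   # rebuild the image
docker push registry.example.com/myapp:dev-2         # push to a registry
kubectl set image deployment/myapp \
    manager=registry.example.com/myapp:dev-2         # point the Deployment at it
kubectl rollout status deployment/myapp              # ...and wait for the rollout
```

Tilt's live update, described next, collapses this whole sequence into a file-watch plus an in-place binary sync.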
That way, we don't have to build images, and we don't have to do all of the dance around editing YAMLs and then waiting for pods to roll out. Tilt will live-update this particular binary into a running container, you can ask Tilt to just restart the process inside the container, and you have a newer version of your application. Now, Tilt is extensible enough that if you don't use a compiled language, say your project is written in JavaScript or Python and you need to install a new dependency, Tilt can do that for you too. It can watch those files and then run, say, pip install or npm install to install your newer dependencies, and that way you still have a smaller feedback loop. In this next slide, I'm going into a bit more detail on what is happening. The first Tilt construct, local_resource, lets you watch these source files or source directories and instructs Tilt to build a new Go binary whenever there is a change in any of the files. In the next one, using the live_update parameter of the docker_build construct, we are asking it to push this binary into the running container and then restart the process. So Tiltfiles are extensible enough to give you this faster feedback loop, without wasting time building images using docker build, publishing images using docker push, updating the manifests using kubectl edit or apply, and then waiting for the pods to roll out. You get a smaller, faster development cycle, which leads to improved productivity. I'm going to quickly switch over to one of the files just to show you how quickly this happens. This is a log line that I have commented out. I'm going to uncomment it and then go to the Tilt UI, and if you look at it, this particular binary was rebuilt five seconds ago and was already pushed into the container.
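Put together, the two constructs just described might look like this in a Tiltfile. This is a sketch with illustrative paths and image names; `docker_build_with_restart` comes from Tilt's `restart_process` extension, which wraps `docker_build` and restarts the container's process after each live update.

```python
# Load the extension that restarts the container process after a live update.
load("ext://restart_process", "docker_build_with_restart")

# Rebuild the Go binary locally whenever a watched source file changes.
local_resource(
    "manager-binary",
    cmd="CGO_ENABLED=0 GOOS=linux go build -o .tiltbuild/manager ./main.go",
    deps=["main.go", "api/", "controllers/"],
)

# Sync the fresh binary into the running container and restart the process,
# skipping the full image build / push / rollout cycle entirely.
docker_build_with_restart(
    "example.com/manager",
    context=".tiltbuild",
    dockerfile_contents="FROM gcr.io/distroless/base\nCOPY manager /manager",
    entrypoint="/manager",
    live_update=[
        sync(".tiltbuild/manager", "/manager"),
    ],
)
```

The `sync` step is what makes the binary land in the already-running container; the extension then restarts `/manager` so the new code takes effect within seconds.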
So that's how easy it is. Raj, what do you think? Great. Okay, now we know how we can set up a development environment quickly and also have a quick feedback loop. Let's look at how we can also make it debuggable. The first place most of us start debugging is by adding logs, and now that we can add logs and also quickly check them, because we have a shorter feedback loop, maybe that's enough? But once you actually start debugging through logs, you'll realize it can quickly become overwhelming. You'll have to add a lot of log statements just to understand the flow, and then you realize, oh, you wanted this other variable as well, so let me print that too, and oh, you wanted to see what the state of something else was, so you go back and change that, and you have to repeat the entire process. Debugging through logs is fine as a quick start, but you will often want to use a debugger, so that you can step through your code and have a much richer debugging experience. However, using debuggers can become a little tricky when you are working with containers, especially debuggers inside containers. So how do you solve this problem? That is where remote debugging comes in. With remote debugging, you run the debugger server inside your container and have some client, which in most cases is your IDE, something like VS Code or IntelliJ or the IDE of your choice, or, if you're feeling adventurous, maybe even your command-line terminal, connect to the remote debugger. Then you can start setting breakpoints and explore the code base in a much better and richer way. So what goes into setting up remote debuggers and being able to connect to them, especially when you're running stuff in containers? This assumes I'm talking about a Go program, but the same rules apply to almost all programming languages and applications.
The first thing you'll have to do is probably recompile your code so that it is debug-friendly. That could mean dropping some compiler optimizations, because debuggers might not be able to understand those optimizations, and making sure that the debug info is retained; that is, the symbol map from the source code to the actual symbols is kept, because the debugger needs that information to be able to set breakpoints. Once you have this, the next thing is to modify your actual deployment so that you start the debugger server inside the container and it runs and listens, and then you can connect to it. One thing most people will have to do when setting up remote debugging with debuggers inside containers: when you hit a breakpoint and pause your application, the liveness and readiness probes might not respond, and Kubernetes might consider that pod unhealthy and try to either kill it or restart it. To avoid that, you might have to drop the liveness and readiness probes for that particular deployment, so that your debug session isn't killed just because Kubernetes thought the pod was unhealthy. Once you have this, you have a debug server running within your container, but how do you connect to it? You need a TCP connection from your client to the debugging server, and the simplest and easiest way to get one is to use kubectl port-forward, so that you have a port on your localhost that is forwarded to the port on your container. With Tilt, it becomes much simpler, because all you have to do is specify the ports within your Tiltfile, and it runs kubectl port-forward underneath to take care of it for you. Now you have the debug server running, and it's exposed; you can connect to it.
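For a Go program, the steps above might translate into something like the following. This is a hedged sketch: the port number and deployment name are placeholders, and it assumes Delve (`dlv`) as the debugger, which is the usual choice for Go.

```shell
# Compile debug-friendly: -N disables optimizations, -l disables inlining,
# so the debugger can map machine state back to source symbols.
go build -gcflags="all=-N -l" -o manager ./main.go

# Inside the container, run the binary under Delve's headless server
# instead of executing it directly.
dlv exec /manager --headless --listen=:30000 --accept-multiclient --api-version=2

# From the workstation, open a TCP path to the debug port
# (this is the step Tilt can do for you via port_forwards in the Tiltfile).
kubectl port-forward deployment/capi-controller-manager 30000:30000
```

Remember the caveat from above: while execution is paused at a breakpoint, liveness and readiness probes will time out, so the probes should be removed from the debug deployment first.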
The last step is to configure your IDE, your client, to connect to that debugging server, and you should now be able to set breakpoints and step through your code. In most cases, you might be working with more than one microservice, more than one piece of the application, at a time, and you might not want to run a debugger in all of them all the time. You might want to control that behavior: okay, only set up the debugger for this one, but let the rest run as they are. To control that, you can add a tilt-settings YAML to your project with flags to enable or disable the debugger, and because your Tiltfile is written in Starlark, you can use conditionals to figure out, oh, I want the debugger enabled for this particular microservice, so let me go down the path where I build a debugger-compatible binary, build a compatible container image, set up port forwards, and all of those things. Switch the flag off, and it goes the normal way; switch the flag on, and it goes down the debug path. Let's take a look at how that looks in real life. In Cluster API, I'm now looking at a piece of code that runs when a new CR of kind Cluster is either created or updated. I want to be able to set breakpoints here and step through the code one line at a time and see how the entire controller operates. I already have a debugging configuration set up so that it talks to the port I configured a while ago. Most IDEs already have support for remote debugger configurations; I'm using IntelliJ here, but there is similar configuration in VS Code, and the screenshot on the previous slide is an example of how you would configure a remote debugging client in VS Code.
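A sketch of the flag-driven setup just described might look like this. The file name, keys, and port are illustrative; Cluster API's actual tilt-settings schema differs in its details.

```python
# Tiltfile fragment: read optional per-developer settings.
# A hypothetical tilt-settings.yaml might contain:
#   debug:
#     core:
#       port: 30000
settings = read_yaml("tilt-settings.yaml", default={})

if "core" in settings.get("debug", {}):
    # Debug path: unoptimized, symbol-rich build plus a forwarded debug port.
    build_cmd = 'go build -gcflags="all=-N -l" -o .tiltbuild/manager ./main.go'
    port_forwards = ["30000:30000"]
else:
    # Normal path: plain build, no debugger, no extra ports.
    build_cmd = "go build -o .tiltbuild/manager ./main.go"
    port_forwards = []
```

Because Starlark supports ordinary conditionals, the same Tiltfile serves both the everyday fast path and the occasional debug session; flipping the key in the YAML is the only switch a developer touches.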
Now that I have this, let me just set a breakpoint, connect to the remote debugging server, to which I'm now connected, and then trigger it. As I mentioned, this piece of code runs when a new CR of kind Cluster is either created or updated, so I'm just going to create one so that it goes into this code path and gets triggered. I just created the CR, it goes into the API server, and you can see that once the controller code is called, it hit my breakpoint and execution is now paused here. I can step through the code as I want, explore the variables as I want, and use the full rich set of features that come with any debugger to debug my problems much better. So with all of this set up, you should be able to have breakpoints set in your source code, use the rich features within your IDE for a richer debugging experience, and have much more productive debugging sessions than you would setting up log lines, figuring things out, and redeploying over and over. Next, visibility. What visibility entails is being able to look into the application's health and the status of all of the components that make up your application. When your application consists of multiple components, having a single unified way to look at the status and health of all of those components is very useful and saves you a lot of time. Plus, it makes development cycles much smaller and faster. Tilt does this via a unified interface: you can configure your Tiltfile to expose all of the components, and this is what the Cluster API Tilt UI looks like. You can see all of the components that Cluster API uses and the health of each of them in a single unified view. You can also follow the live updates of all of the components of the application.
Like I was mentioning, if you change code, you can see on the UI that Tilt is redeploying a new version of it. You can also look into the logs of each and every component, and this becomes much more helpful because you can view the logs of one particular component and not get overrun by the logs of all of these microservices together. I'd be remiss if I mentioned logs and metrics and didn't talk about any of the observability tools. On the screen here, I have some of the well-known ones: Grafana, Prometheus, and the Loki stack. These observability tools do provide more insight and more in-depth knowledge about the application, the metrics data, and so on. But running them all the time might drain the resources on your local machine, because these pods are heavy and take up space. So you would want to pick and choose which tools you want to use, or maybe you don't want to use any of them and you're happy with the Tilt UI itself. You can program your Tiltfile in a way that lets you choose any of these tools in any combination you want and not have to run all of them all the time. That way you have a customizable development experience: you use just the tools you want and save the resources the others would take up. On the screen is an example where my tilt-settings YAML file says that I need all four of these to be deployed, and I can program my Tiltfile in such a way that it looks at these keys and, depending on the presence of a key, goes ahead and deploys that observability tool, in this case Grafana. Again, turning a tool on should not require a lot of knobs to be turned. It should be as easy as flipping a switch: if you want it, just include a key in that particular YAML file and you have Grafana deployed; if you don't want it, just remove the key and, boom, Grafana goes away.
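The pick-and-choose pattern described above might be sketched like this. The settings key, directory layout, and tool names are illustrative assumptions, not the real Cluster API layout.

```python
# Tiltfile fragment: deploy only the observability tools listed in settings.
# A hypothetical tilt-settings.yaml might contain:
#   deploy_observability:
#     - grafana
#     - prometheus
settings = read_yaml("tilt-settings.yaml", default={})

for tool in settings.get("deploy_observability", []):
    # In this sketch, each tool's manifests live under observability/<tool>/.
    # Removing a name from the list is all it takes to tear the tool down.
    k8s_yaml(listdir("observability/" + tool))
```

Tilt reconciles the Tiltfile on every change, so editing the YAML list is the on/off switch: add "grafana" and it comes up, delete the line and it goes away.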
So Tilt gives you this option of configuring your developer environment with these tools in a way that gives you the best experience you need. With this, I'm just going to recap the themes we talked about around the question of what makes a good development workflow. Spinning up environments should be very easy, and Tilt enables this via the magic of the Tiltfile and running a single command, tilt up. You can have a quicker feedback loop, seeing your code changes get deployed in real time, via the live update functionality that Tilt provides. You can have easily debuggable applications with the way you set up your Tiltfile, without having to jump through hoops to set up debuggers. And you can use the Tilt UI to look at the application logs, and also integrate observability tools to get more in-depth insight into the application. And with that, we have come to the end of our session. Thank you for attending. There is some Cluster API swag on the side if you are interested, and if there are any questions, please feel free to go ahead or find us here.