All right, we're going to go ahead and get started. I'd like to thank everyone for joining us for today's webinar, Container Native Development Tools Compared: Draft, Skaffold, and Tilt. I'm Taylor Wagner from the CNCF, and I'll be moderating today's webinar. We would like to welcome our presenter today, Mickey Boxell, a cloud native developer advocate at Oracle. Before we get started, there are a few housekeeping items that we'd like to go over. During the webinar, you are not able to talk as an attendee, but there is a Q&A box located at the bottom of your Zoom screen. Please feel free to drop all of your questions in there rather than the chat window, and we'll get to as many as we can at the end. Also, this is an official CNCF webinar, and as such, it's subject to the CNCF Code of Conduct. Please do not add anything to the chat or to the Q&A that would be in violation of that Code of Conduct. Basically, please be respectful of all of your fellow participants and the presenter. With that, I'll hand it over to Mickey to kick off today's presentation. Thanks very much. Hey, everybody. Today, we're going to talk about container native development tools. I'm going to compare three of them, Draft, Skaffold, and Tilt, and then talk about a few more afterwards. I am coming to you live from beautiful downtown Oakland, California, where, fortunately, the fires have abated and we're not getting quite as much smoke as we have in the last few days. So, one thing I was asked to talk about was an event that you may have heard of: KubeCon is coming up. This is the Cloud Native Computing Foundation's flagship event, and it's going to be held next month in San Diego, California. Fortunately, unlike most of the US, San Diego's actually pretty nice this time of year. So this is a great time for the community to come together in person to advance the cloud native conversation.
And I was also asked to bring up the first Kubernetes Forums, in Seoul, South Korea, coming up December 9th and 10th, and also Sydney, Australia, from the 12th to the 13th. These forums are there to connect international and local cloud native experts, adopters, developers, and end users, and will be present in plenty of cities across the globe. So, a little bit about who I am. My name is Mickey. I work for Oracle, in product management, and I also do a lot of cloud native developer advocacy. The team that I'm a part of is designed to share best practices and also build out solutions related to cloud native and container native content, as well as open source projects, with a focus on DevOps. One thing I wanted to bring up is that, of course, it goes without saying that microservice environments are a bit different from traditional monolithic ones. If you're on this call, there's a good chance you know that already. But it's worth noting that the adoption of container native and cloud native development practices presents a lot of new operational challenges you might not have run into in the past. These microservice environments are deployed on container orchestration platforms like Kubernetes. They're polyglot, written in a lot of different languages. They're typically distributed systems. Containers are the primitive of choice, and they're highly scalable and ephemeral. As such, this has an impact on the way that we operate services and also on the way that we develop them. So a traditional development workflow looks something like this: you write code, you build code, you run code, and you identify issues and return to step one. And hopefully at some point in there, you're running tests as well, probably during the build and run phases. Those can be functional tests, unit tests, and integration tests. It really depends on the system that you're working on.
But when you switch over to cloud native development, there are a few more steps included. Sure, you're going to start off writing and building code in a similar way, but after building your code, let's say a jar file, you're going to have to build a container image of that code as well. If you're connecting to a remote Kubernetes cluster, you'll have to push that container image to a registry service somewhere. And then instead of running code by just executing a jar file, you'll have to create a manifest to deploy to the Kubernetes cluster. And then and only then can you identify issues and return to step one. So what does that look like in practice? If we were using a traditional method, what you can see here is that I am generating a project source using an archetype for Helidon, which is a set of Java libraries for writing microservices. I'm using this Maven archetype, which is basically just a tool for project templating. The result is that I'll create a simple project that has a web server and some basic routing rules. Now, after creating an archetype, I would change directories into the one that I just created, which has my source file. I would create a package of the application, so actually build it, which produces the application jar file. And then I can simply run the application jar file. In this case, the example is a very simple Hello World greeting service: it supports GET requests for generating a greeting message, also a PUT request for changing the greeting, and the response is encoded using JSON. I'll talk more about this application later. But right here we can see what this looks like in a traditional deployment option. We have literally just gone through all the steps: creating the application, building it, and deploying it. But if we wanted to do so in a containerized way, we would have to take another step, which is build a Docker image of the file and then run that image.
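The traditional flow just described looks roughly like the sketch below. The archetype coordinates and version are illustrative rather than the exact ones from the demo, and the Maven and Java commands are shown commented out since they require a local toolchain:

```shell
# Sketch of the traditional workflow (archetype coordinates are illustrative):
# mvn archetype:generate -DinteractiveMode=false \
#     -DarchetypeGroupId=io.helidon.archetypes \
#     -DarchetypeArtifactId=helidon-quickstart-se \
#     -DarchetypeVersion=1.3.1 \
#     -DgroupId=io.helidon.examples -DartifactId=quickstart-se
# cd quickstart-se
# mvn package                          # build the application jar
# java -jar target/quickstart-se.jar   # run it locally
JAR=target/quickstart-se.jar           # jar path that mvn package would produce
echo "java -jar ${JAR}"
```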
Fortunately, what's cool about this Helidon open source set of libraries is that it also contains a Dockerfile, which makes it very easy to build and run a Docker image. Now, if we wanted to go one step further and, instead of deploying the container on Docker, deploy it on a tool like Kubernetes, we would also have to create a manifest file. Again, what's cool about this service is that the archetype also gives you a manifest file by default, this app.yaml file, which makes it easier to deploy the application to a local cluster. The next step would be to deploy not to a local cluster but to a remote cluster, something that's managed by a cloud provider. And if we do that, we need to make sure that our image is in a registry service. So that's adding additional steps of tagging the image, pushing the image, and then modifying that manifest file to match the registry. That's a few more steps than just simply building and running a jar file. The whole flow looks something like this: we're writing code, we're building code, we're building an image, we're pushing the image to a registry, and then we're deploying to a cluster. That seems to me like a lot of typing of the same set of commands over and over. And like any software engineer, what we want to do is automate away as many of the repetitive steps as we possibly can. So why did I care? Well, for me, I was finding that simple code changes were taking too much time and too many keystrokes. When I was just getting started working with Kubernetes, I was trying to instrument tracing into my application. I'd already added it to the application itself and pulled in the correct libraries, but within my manifest file, I was trying to point that trace data to a service like Zipkin. My problem was that I didn't know what the endpoint was supposed to be. So I went to Stack Overflow, poked around, and did not find a satisfactory answer.
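Those extra remote-cluster steps can be sketched like this. The registry path here is hypothetical (an OCIR-style endpoint), and the Docker and kubectl commands are commented out since they need a real daemon and cluster:

```shell
REGISTRY=phx.ocir.io/mytenancy         # hypothetical registry path
IMAGE=quickstart-se
TAG=1.0
# docker build -t ${IMAGE}:${TAG} .
# docker tag  ${IMAGE}:${TAG} ${REGISTRY}/${IMAGE}:${TAG}
# docker push ${REGISTRY}/${IMAGE}:${TAG}
# then edit app.yaml so its image field matches, and deploy:
# kubectl apply -f app.yaml
echo "${REGISTRY}/${IMAGE}:${TAG}"     # the fully qualified image reference
```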
So I found myself doing the most annoying possible thing, which was just changing it and trying a few different options over and over. The problem was every single time I made a change, which literally was something like zipkin.monitoring, the service name followed by the namespace and then the port number. Every time I'd switch something like that around, I'd have to go through this whole process of building the code, building an image, tagging the image, pushing the image, and then applying that manifest to a cluster. And of course, that wasn't very much fun for me. That was a lot of extra clicks and a lot of waiting. So then I got into this position where I was thinking, why not use a full-blown CI/CD system, a continuous integration and continuous delivery tool? Well, I needed something that operated at high speed. I didn't want to click a button and then wait for an hour while we kicked off this long CI/CD process where code was checked into a version repository, tests were run, things were reformatted, and then we go through this whole cycle. I really wanted something that would be performant, that wouldn't screw up my flow. And the point was made to me that you can't take a CI/CD system that's designed to take minutes or hours and instead make it take seconds or milliseconds. That's just not what it's designed for. This is also a different problem from how you ship. I wasn't trying to push code into production. I was very simply trying to do some hacky troubleshooting to figure out how something was supposed to work on my end. So again, just to elaborate on that last point, this is part of the inner loop of the container native development workflow, which is simply while you're writing code but maybe haven't actually pushed it to a version control system. Put more simply, it's when you're iterating on code pre-commit. It's really just when you're hacking around, but maybe don't want to put all of those attempted changes into your version repository.
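For context, the kind of change being iterated on could look something like this: a hypothetical environment variable in the deployment manifest pointing the tracer at Zipkin's in-cluster DNS name. The variable name and port are illustrative, not from the actual demo:

```yaml
# Hypothetical snippet from the deployment manifest (app.yaml):
containers:
  - name: quickstart-se
    env:
      - name: TRACING_ENDPOINT            # illustrative variable name
        value: "zipkin.monitoring:9411"   # <service>.<namespace>:<port>
```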
Now, luckily enough, a few weeks ago, I talked to one of the people responsible for one of the tools that I'll talk about today, Dan Bentley, and he made a great quote, which was: what you do a few times a day is different from what you do hundreds of times a day. So I may not be committing code to a version repository or a CI/CD system hundreds of times a day, but I certainly am going to be saving and updating documents that I work on, saving and updating files that I work on, hundreds of times a day. So I need a tool that's going to fit that. Another question I get asked is: if I can just deploy this file locally as a jar, or if I can deploy it locally as a Docker container, why deploy to a cluster? Well, honestly, it's because there are a lot of benefits to using a cluster. A lot of these other CNCF webinars will focus on all the different tools available to people using Kubernetes. You'll see tools for logging, for tracing, for monitoring, and for a whole host of different services. And once you get that up on a cluster one time, you can leverage those tools and not spend all of this effort setting them up on your local machine. When these tools are used in conjunction, it makes debugging and understanding your systems way easier. And when I, myself, look at it, and also when I talk to engineers on my team, I ask if they have these same tools available to them in their dev environment or on their local machine. And the answer, of course, is often no. Another point that's worth making is that it can be really helpful to run integration and dependency tests if you have all of these services deployed in the same environment. Especially in a microservice world, if we're not trying to deploy a single monolithic jar file, if instead we have a bunch of really small ones, we want to see how they interact with one another. Another question is: why deploy to a remote cluster?
Of course, we all have access to Docker for Desktop or Minikube or kind and all these different ways of getting Kubernetes running on our desktop. Why not just deploy to that? Well, at the end of the day, these tools that we're talking about, the logging and monitoring stacks, are not necessarily designed to be run on individual developers' machines. Of course, you can get a cluster up and running locally, but if you throw on the ELK stack to handle your logging, or if you add in Prometheus and Grafana, or if you add on Zipkin and Jaeger, you are really going to run into issues of resource exhaustion. Also, a remote cluster can really help you match a test environment to a production environment. You're going to have at least a slightly more repeatable version of your deployments, which allows you to avoid the whole "it works on my machine" problem. And also, for compliance reasons, not everybody is allowed to have a local cluster or have this code running on their local machine. So aside from the why, I also wanted to talk about something else that most of us, at least, are already taking advantage of. We're already including automation, and we're already getting rid of a lot of these clicks and a lot of this toil, and we're doing so by means of Dockerfiles. You're not, every single time you want to package up your application, manually going through importing a base image, packaging up that image, and running your jar file in it. I mean, literally you can see the entire contents of my Dockerfile right here, and I don't find myself typing in those 11 commands every single time. And given that's the case, why not take a similar approach to pushing and deploying our application? And so that brings me to the focus of this talk today, which is these build, push, and deploy tools. And on the right, you can see real-world containers, which have nothing to do with Kubernetes.
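As a rough idea of what such a Dockerfile automates, a minimal multi-stage sketch for a Maven-built jar might look like the following. This is not the exact file shown on the slide; image tags and paths are illustrative:

```dockerfile
# Build stage: compile the jar inside a Maven image
FROM maven:3.6-jdk-8 AS build
WORKDIR /app
COPY pom.xml .
COPY src src
RUN mvn package -DskipTests

# Runtime stage: copy only the jar into a slim JRE image
FROM openjdk:8-jre-slim
WORKDIR /app
COPY --from=build /app/target/quickstart-se.jar .
CMD ["java", "-jar", "quickstart-se.jar"]
```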
Now, the first one is Draft, by Microsoft Azure. The second one is Skaffold, by Google. And the third one is Tilt, by Windmill Engineering. What do these tools have in common? Well, they're all used to build code. They can hook into existing Dockerfiles, and some of them have a couple of other build options as well. They also allow you to build an image of your project. You can then push that image to a registry service of your choice and deploy it onto a local or remote Kubernetes cluster. And fundamentally, all of them help you save time and clicks. The added benefit of these three is that every single one of them is open source. So if you find issues in the software, or if you want to make feature requests, or something's not working, you can actually file an issue, and you can even make the change yourself and open a PR. I think that level of transparency and collaboration is fantastic. So, some prerequisites for doing this. Of course, if you're going to be building Docker images, you need Docker. If you're going to deploy to Kubernetes, you'll need a Kubernetes cluster, and again, that can be local or remote. For local ones, Docker for Desktop and Minikube are both great choices. For remote clusters, you can use something from a cloud provider; for instance, Oracle has Container Engine for Kubernetes. You'll also need kubectl, pronounced "kube control" or "kube cuddle" depending on who you ask, which is just the command line utility for interacting with Kubernetes. And you'll also need an image registry service. You can bypass this step if you're just deploying locally, but if you want to take full advantage of this and connect to a remote cluster, you want to have some sort of Docker v2-compliant image registry, for instance the Oracle Cloud Infrastructure Registry service. As part of my example, and I talked about this before, I'm using the Helidon framework. Of course, Java is traditionally used to deploy massive monolithic applications, but I thought it'd be a cool idea to create lightweight jars.
So in this case, I'm using what's called the Helidon framework, which is simply Java libraries for writing microservices. I'm using an archetype they already have, which is the quickstart SE sample application. Again, this is great because it comes built in with a Dockerfile and also a manifest file. All I've done here is instrumented it for a few other things and added a colorful front end, which will come into play later. So first up, we have Draft. Draft is a very cool service. It has something called Draft packs, and these packs provide a very low barrier to entry. If you're somebody who isn't accustomed to deploying to Kubernetes, this is actually a really good place to start. The command draft create is used to detect the language of the application you're developing, for instance Java, and generate the artifacts needed to deploy that application to your cluster. It does this by use of what are called Draft packs, which are simply Dockerfiles and Helm charts specific to a given language, and there are good examples for most programming languages out there. What's great about this approach is that it saves you time that would otherwise be spent writing this stuff from scratch. Now, as far as using Draft, this is really the lion's share of what you need to do. The prerequisites here, in addition to the ones I mentioned before, also include Helm, which is simply a package management tool for Kubernetes. You start off by running draft init, which is going to install packs and plugins and then configure your Draft home directory. Next up, you change into the directory that you want to end up deploying the application from and run draft create, which will then import that boilerplate based on the application language in that directory. The next step is to set a registry for your service. In this case, I'm connecting to the Oracle Cloud Infrastructure Registry, and that'll create a hidden draft directory and a config.toml, which saves the registry that I'm trying to connect to.
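Pulling the Draft steps just listed together, the sequence looks roughly like this. The registry path is hypothetical, and the commands are commented out since they require the Draft CLI and Docker:

```shell
# draft init                                     # install packs and plugins
# cd quickstart-se
# draft create                                   # detect language, generate Dockerfile + chart
# draft config set registry phx.ocir.io/mytenancy
# docker login phx.ocir.io
# draft up                                       # build, push, deploy
echo "draft up"
```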
The next step, of course, is that I need to actually log in to Docker in order to push to the registry. And then I simply run draft up to deploy the Helm charts that I have and make my application show up in a Kubernetes cluster. So rather than spend all my time talking at you, I'm going to see if I can show you this live. All right. Right now I'm in a directory for an application that I have called skellidon, which will make sense a little bit later. This is just a directory that has a sample Helidon application. If I wanted to see what a draft.toml looks like, let's see, I will just cat that. And then you can see right here that I'm going to be creating an application called quickstart-se. It'll be deployed to the default namespace. And I'm going to reference a Dockerfile in the current directory and also a charts directory, also present in the current directory. Running Draft is as simple as running draft up. This will kick off the build, push, and deploy process that will result in me having an application deployed onto my Kubernetes cluster. And the nice thing again is, here I'm not going through the process of building the image, pushing the image, and deploying the image manually every time. I am simply waiting for this to do its business. Anytime I save my application and then run draft up, it will kick off this pipeline and go through the process again. Draft does a good job of hiding what's going on under the covers. If I wanted to see what's going on live, I can just tail the logs, or as soon as this command is finished, it'll give me a draft logs command to see everything that just took place. Cool, there we go. So if I wanted to see everything that just happened, I can run draft logs. I can see the entire set of steps that ran in the Dockerfile. I can see the actual push to the registry service. I can see some tests that are built in. And then I can also see the image that was pushed here.
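A draft.toml along the lines of the one being cat-ed here might look like the sketch below. The field names follow Draft's documented shape as best I recall it, and the values are illustrative, so treat this as an approximation rather than the exact file from the demo:

```toml
[environments]
  [environments.development]
  name = "quickstart-se"       # application name
  namespace = "default"        # target Kubernetes namespace
  dockerfile = "Dockerfile"    # build instructions in the current directory
  chart = "charts"             # Helm chart directory, also local
  wait = true
```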
So right here, I have my image pushed to a skellidon repository in my registry service. Cool. And now if I wanted to actually see this in my cluster, I can run kubectl get pods, and I can see that I have a pod running in my cluster that matches that quickstart SE application. If I wanted to, I could curl this service. It does have a front end, but I'll save that for later. And getting rid of all this is as simple as running draft delete, and then everything that was created from that draft.toml and referenced in those Helm charts will go away. Cool. Some other helpful features of Draft, and these are consistent with the other tools as well, are the ability to configure a port forward using draft connect. That saves you the time of having to write kubectl port-forward, the pod name, and then the local and remote ports. And also, as we just saw, you can get logs from draft logs. In a nutshell, Draft has useful boilerplate to get started. One thing that it lacks is a watch or continuous deployment feature, which I'll talk about with Skaffold and Tilt. Also, sometimes Helm can be overly complicated, and unfortunately, that is the only deployment option enabled for Draft. One thing that's cool is, if you already use VS Code, there is some very lightweight VS Code integration. It is very, very lightweight; there are only a few commands available to you, but it does exist. So I'm going to go ahead and show you that it does exist. And one other drawback is that Draft doesn't seem to be actively worked on these days. You'll see a few commits here and there, but for the most part, all of the focus seems to have gone away from the project. All right, so switching gears. Next up is Skaffold. Skaffold is very uniquely flexible. It comes with a ton of different build options. You can use a local Dockerfile, a Dockerfile built in-cluster with Kaniko, or a Dockerfile built in the cloud.
Some of these benefits are limited to the Google Cloud Platform, so not everybody has access to them. You can also do things like using Jib locally as well. It also comes with a bunch of deploy options: you can use kubectl, and you can also use Helm or Kustomize. And it comes with a bunch of different image tag policies, which can be helpful if you're trying to arrange your images in a slightly different way. Skaffold, you'll see, doesn't have that same Helm requirement that Draft does, because it gives you those different deployment options. Unlike Draft, which creates a lot of the workflow step files for you, you actually do need to create your own skaffold.yaml file, and again, that's just used to specify the workflow. Just like Draft, you'll set your default repo, and that'll create a hidden Skaffold config file. You'll log in to Docker, and then once that's all done, you're good to go. You just have a choice between using skaffold run, which will deploy just one time, or skaffold dev, which will do continuous deployment. And let me switch over and show you what Skaffold looks like in practice. One sec. So in that same directory, if I want to look at those workflow steps, I can just cat my skaffold.yaml. Here it's a little bit more complicated than that draft.toml, but not dramatically so. Because I have those different options in place, I can do things like set a tag policy. Here I've changed it from the default tag, which is based on the git repo that I've checked out, to a datetime format instead. I've also configured a port forward; unlike Draft, Skaffold will automatically port-forward based on what you have in your manifest file. Cool. So skaffold run doesn't hide things in quite the same way that Draft does. It'll actually show you all these steps right here, and it's just going to kick off that same build, push, deploy process one more time.
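A skaffold.yaml resembling the one in the demo could look something like this. The image path is hypothetical, and the tagPolicy and portForward stanzas follow the Skaffold v1 schema as documented; consider this a sketch rather than the exact file:

```yaml
apiVersion: skaffold/v1
kind: Config
build:
  artifacts:
    - image: phx.ocir.io/mytenancy/quickstart-se   # hypothetical registry path
  tagPolicy:
    dateTime:                        # tag images with a timestamp instead of the git commit
      format: "2006-01-02_15-04-05"
      timezone: "Local"
deploy:
  kubectl:
    manifests:
      - target/app.yaml
portForward:
  - resourceType: service
    resourceName: quickstart-se
    port: 8080
    localPort: 8080
```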
So I can see it's pushing to that same repository that we went to before, the cloud native DevRel skellidon repo. And now my application is deployed. It's created a service for me and also a pod. So let's see. I can see right here, I have my pod running. Actually, I'll show you what the front end looks like in the next section. But basically, right here, you just saw that with Skaffold, it was as simple as running skaffold run. Now Skaffold also has another option, which is skaffold dev, which will update the application every single time I make a change on the other side. Tilt has the same feature, so I will just show you that when we get to the Tilt section. Now, as far as using Skaffold, it's a similar log situation to what we saw with Draft: you can get logs very simply. And you can set port forwards based on the pod spec or with a port-forward flag. Now, a couple of other things set Skaffold apart. It has this very cool profiles feature. A profile is simply a set of settings stored in the skaffold.yaml that allows you to override the build, push, and deploy sections of your current config. As I mentioned before, it is a very flexible tool, and there are a lot of different options you can choose for building, for pushing, for deploying, and for tagging. So one way you might use this is you could set up a profile called local development, which uses a local Docker daemon to build images, maybe skips the registry step, and then uses kubectl to deploy them to a local cluster. After you actually finalize your design, you can switch to a production profile, which instead uses Jib with Maven for your build tool, pushes to a remote image registry, maybe with some sort of image scanning built in, and then uses Helm to deploy to your cluster.
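That local-versus-production split might be expressed as Skaffold profiles roughly like this. The profile names, registry path, and chart path are made up for illustration:

```yaml
profiles:
  - name: local-dev
    build:
      artifacts:
        - image: quickstart-se        # no registry prefix; stays on the local daemon
      local:
        push: false                   # skip the registry push step
    deploy:
      kubectl:
        manifests:
          - target/app.yaml
  - name: production
    build:
      artifacts:
        - image: phx.ocir.io/mytenancy/quickstart-se
          jib: {}                     # build with Jib instead of a Dockerfile
    deploy:
      helm:
        releases:
          - name: quickstart-se
            chartPath: charts
```

You would then select one at run time with something like skaffold run -p production.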
So again, all we're doing here is enabling developers to very quickly switch between those different options we talked about before as a template, which we just call a profile. Skaffold is also helpful because it allows you to deploy multiple microservices at once. This is something that Draft can't currently do; it's stuck at one, although I think you can get a little bit fancy with your Helm files. Skaffold also has something called file sync, which is the ability to copy a changed file to a deployed container, allowing you to avoid the need to rebuild, redeploy, and restart a pod. Now, this is helpful, but currently Skaffold lacks the ability to run a command after doing that. So you'd have to write your own logic for rebuilding, excuse me, recompiling on the container if you are using a compiled language. And again, one thing that's cool about Skaffold is having the option to deploy once with skaffold run or continuously with skaffold dev. Now, the next tool is, arguably, my favorite of these: Tilt. Tilt sets itself apart by having a heads-up display and a browser UI. This is really cool for a lot of reasons. Tilt gives you some really in-your-face information about your services. For any of you who have deployed applications into a Kubernetes cluster, you know that if something fails, it's not necessarily going to clearly blare out warning signals telling you that something's not working right. You have all the information you need; in fact, Tilt very simply is just using a lot of those built-in kubectl commands to return data to you. It's using the same stuff you get from kubectl logs or describe. But here, it's putting it in your face in a way that allows you to very quickly tell when something is not working the way it's supposed to. So as you can see here, we have little green dots next to the resources we're deploying. And if it's green, it's good.
If it's yellow, it's typically in process. And if it's red, it's something that's failed. So I don't have to run kubectl get pods and see that I have zero out of one running containers. I can simply look here and very quickly get immediate feedback that it's not working properly. What's also nice is that we have a heads-up display which lives in the command line, in a terminal. So even if you're doing something like working on a headless host and you don't have access to a real web browser, you can simply look at it there and get a lot of that same information. But with that said, the browser UI is pretty snazzy and makes it very easy to have an in-browser preview of your front end. So just like the other ones, similar prerequisites: Docker, kubectl. In this case, like Skaffold, you have to create a Tiltfile from scratch. This is a file that's going to specify the workflow steps. Tiltfiles, unlike the other two, are not written in a markup language; they're written in Starlark, a dialect of Python, and I'll talk a little bit more about that later on. There's maybe a slightly different method of setting the registry path: you either do it in the Tiltfile itself or in a tilt_option.json file. But again, after that, you log in to Docker, and then you'll simply run tilt up to deploy everything, and tilt down will delete everything. So what does that one look like in practice? I would say this is probably the most fun one to show. Let's see. The first thing I'll do is just show what this Tiltfile looks like. In this case, I have a YAML file that I'm referencing, target/app.yaml. I have my docker_build step specified, so that has my registry endpoint and is also pointing to the local directory for my Dockerfile. I've set up a live update. Unlike file sync, Tilt actually allows you to not just sync files between your local directory and a running container; it also allows you to run commands afterwards.
So in this case, I'm not just syncing, let's say, an update to a CSS file. I am also going to run a Maven package and restart the jar after making that update. Also, I have a port forward configured, which is just going to forward 8080 to 8080 on my localhost. What's nice about this file type is that it's basically a little more extensible than some of the other ones. Okay, so if I want to run Tilt, I simply run tilt up. This will open up the heads-up display and then also pop me over to the browser. So let's look at the HUD first. As we can see here, I have the ability to see my history of edited files. I can look at either of my resources, see the build, and then I have a view of my logs below, which I can expand if I want to. Over in the browser, what's cool here is that not only can I get that same information in a slightly more interactive way and also see alerts that may or may not have popped up, but I can also pop over to a preview section, which at the moment doesn't seem to be populating properly, and see a preview of my application. And because I think it's cool to show you what that preview looks like, I'm just going to run a port forward. That'll do it. As always, the fun parts of live demos don't exactly go according to plan. And go figure. Okay. So, again, with Tilt, you have this cool ability to very simply connect to applications. And as I mentioned before, it has this cool feature where you can go in and basically just redeploy automatically whenever you make changes. So, if I make a change to one of the files in this directory, let's see, and I'll do that on the right side here. For instance, if I were to go in and change my CSS, and I habitually click save, so it's kicking off the build process again. But what if I wanted to change my background image from the image of a palm tree that should be showing up right now to an image of a skeleton, which is very Halloween appropriate?
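A Tiltfile along the lines of the one being shown might look like this. The function names (k8s_yaml, docker_build, sync, run, k8s_resource) are Tilt's Starlark built-ins, but the paths, commands, and registry reference are illustrative:

```python
# Tiltfile (Starlark) -- illustrative sketch
k8s_yaml('target/app.yaml')                      # the Kubernetes manifest to deploy

docker_build(
    'phx.ocir.io/mytenancy/quickstart-se',       # hypothetical registry/image ref
    '.',                                         # build context: current directory
    live_update=[
        sync('./src', '/app/src'),               # copy changed files into the container
        run('mvn package -DskipTests'),          # then recompile in place
    ],
)

k8s_resource('quickstart-se', port_forwards=8080)  # forward 8080 -> 8080 locally
```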
So, I'm sorry it's not popping up for y'all right now. I could simply go here, change things, quickly save that file, and then it will go through and kick off the build process again. As you can see here, it's going through that entire set of steps, and I haven't had to click any additional buttons. All I did, on the right side here, was write :wq! out of my file, and when it was written, it kicked off the whole process again. That meant that I didn't have to go in, type a bunch, click save, then maybe run draft up, or, even more laboriously, go through all of those different commands to compile my application, create a Docker image, deploy the Docker image, or write a manifest file and deploy that to a Kubernetes cluster. All of that was done completely transparently, so it wasn't even something I had to worry about. I'm going to go back one more time and just see if the preview has popped up, but I think right now I have conflicting ports, so it's not going to show up for me. Oh, here we go. So, here is the Halloween-specific application for all of you, because those of us in North America will be celebrating Halloween tomorrow. This is just an update to the greeting application. It says Happy Halloween. And if I wanted to update this greeting, I can update the greeting here, create it again, and have a spooky Halloween. And so, that was all I was looking to get across. But as you saw, Tilt rebuilt the file and updated my background from what was a tropical beach to instead having Skeletor here in the background. It's worth noting that even though it didn't come through in the UI right now, we are getting this port forward here from Tilt, which is where the localhost 8080 is coming from. Cool. So, using Tilt, if you are in the heads-up display, you can open a port forward simply by pressing B, but also within the browser UI, there is a resource preview page, which does the same thing.
And logs are available both in the browser UI and the heads-up display. Once again, those are logs you could get simply by running kubectl logs, but what's nice is having them up in front of you, in your face and available to scrutinize, rather than having to drill down yourself. So, what sets Tilt apart is that it has this heads-up display and a browser UI. That's something that neither of the other tools really has. Also, it uses Starlark, a Python dialect, which is a very concise and also extensible language option, as opposed to doing things with just a simple manifest file. I've seen people write tests into this config file; there are a lot of different options for what you can do given that it's essentially Python and not just a manifest. Also, one thing that's very cool is the live update feature. What you do is use a sync step to copy a file or directory from outside your container to within your container. And even though the normal Docker build behavior in Tilt is to watch all the files in the build context, here, if you specify that build step later on in the process, you don't have to rebuild every time. You can simply bring in particular files and save the time that you'd otherwise spend on a full rebuild. And again, what's cool about it is the ability to run shell commands in your container after syncing those files. So if you are using a compiled language, you get the benefit of having that run without having to rebuild a full container. One other thing that's cool about Tilt, similar to Skaffold, is that you can deploy multiple microservices at once. And the Tilt team created a very cool example application to show this at work, called Servantes.
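To make the Starlark point concrete, here's a hedged sketch of the kind of thing a static manifest file can't express. Because a Tiltfile is effectively Python, ordinary loops and functions work. The service names and paths below are made up for illustration, not taken from any real project:

```python
# Hypothetical Tiltfile fragment: deploy several microservices with a loop.
SERVICES = ['frontend', 'backend', 'worker']   # illustrative service names

def deploy(name):
    # Build an image per service and apply its manifest.
    docker_build(name, './' + name)
    k8s_yaml('./' + name + '/deploy.yaml')

for svc in SERVICES:
    deploy(svc)
```

This is also how the multiple-microservices scenario tends to look in practice: one Tiltfile referencing many images and manifests at once.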
And if you're trying to dip your toes into the microservice architecture world, this is a very cool application to look at, because I think it's 10 different microservices, all of which relate to one another, all deployed at once using Tilt, of course. One thing Tilt lacks is a single-deploy option. There may not really be a need for this, but it is a nice option to have. And I think one other thing that really sets Tilt apart is that it's not a cost center for a big company. Developer productivity tools are the entire focus of the company behind it, which means they have a very dedicated and focused development team trying to solve these problems, rather than just a few folks making open source contributions occasionally. So, some key takeaways. Developer productivity, it goes without saying, is very important, and we can achieve it by automating away countless manual steps, in a similar way to what we've done in the past. All of these are also client-side tools, which can be helpful. That means that nothing is installed within your cluster, the exception being that if you have to use Helm, of course you'll need to have Tiller in the cluster, or you can use Helm 3, which is Tillerless. These tools all deploy to both local and remote clusters, so you have the option, if you want, to have everything running on your local laptop, or to connect to something else. If you're using a local cluster, you can save yourself even more time by bypassing that registry push step. These tools are also really useful as a step before pushing to source control or a CI system. And they're meant to complement, but not replace, a full CI/CD system. I am by no means saying that you should rip out your CircleCI or GitLab or whatever you're using for CI/CD.
I'm simply saying that if you're doing something 100 times a day or more, like saving code on your local machine, and you want to make sure that it works the way you intend it to, and leverage some of the benefits of developing for a Kubernetes environment, it might be worth taking a look at some of these tools. Boiling this down even further: Draft is great getting-started boilerplate. If you're new to this space and you need help writing manifest files or Dockerfiles, and all you have is your traditional jar or maybe a .py file or something else, this is a very good place to get started. Skaffold brings a lot of flexibility to the table. If you want to have different configurations for different aspects of your development process, this is a good place to look. And Tilt: really, the heads-up display, and also having a dedicated team working on the product, set it apart from the competition. So, some other development tools that are worth looking at. Before even getting into the tool side of things, I would just like to mention that it's worth doing things like setting up aliases on your command line. So if you want to skip typing out things like kubectl get pods every single time, you can simply write aliases. I think that's probably one of the best productivity tips I've gotten lately. Also, if you're somebody who uses Visual Studio Code, it has a Kubernetes Tools extension with some really cool features that allow you to visually interact with your cluster. You can also run some basic commands from it. It can do things like point out what different parts of a Kubernetes YAML file actually mean, and it can simplify YAML creation by giving you some auto-filled-in boilerplate. Another cool tool that's worth taking a look at takes a completely different approach than the three that I talked about today.
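Circling back to that alias tip for a second, here's the sort of thing you might drop into ~/.bashrc or ~/.zshrc. The short names are just common conventions, so pick whatever you like:

```shell
# Common kubectl aliases -- the names are conventions, not requirements.
alias k='kubectl'
alias kgp='kubectl get pods'
alias kl='kubectl logs'
```

After reloading your shell, typing kgp runs kubectl get pods.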
Telepresence is a tool that runs a pod in your cluster as a substitute for your application and then uses a two-way proxy to intercept traffic and route it to a container running on your local machine. What's cool about this option is that you can make changes locally and have the locally running application participate in your remote cluster without having to deploy a container to the remote cluster. So that's another option. Also, I've seen some people use code-server as an in-cluster IDE. I think it's very cool to use remote development servers; there are a lot of benefits to it, like being able to scale the resources running there up and down. But I don't think running this in your cluster, basically deploying applications as if you were running them locally while just happening to be doing it in a container, is the best idea. I think sometimes people can get a little bit fixated on containers being the coolest thing and treat them almost as if they're virtual machines. So if people are doing this in a clever way that makes sense, more power to you, but I think it's important to recognize the differences between running a container and running a VM. And the last one is ksync, which is simply a file sync between a local directory and a running container. It basically, excuse me, transparently updates containers running in your cluster from a directory that you have locally. This option bypasses the version control step and also the container build step, and, sort of like with file sync in Skaffold, you'd have to actually run a build on the pod; it won't do that for you. And this is something that runs as a DaemonSet in your cluster. Also maybe not my favorite option, but certainly an alternative. Cool. Well, that is the lion's share of the content that I had today. If you'd like to read more about what I've written, or what we're doing in the cloud native space at Oracle, you can check out our Medium page.
You can also @ me on Twitter. I'm not the most prolific Twitter user, but I will try to get back to you. I check my LinkedIn annually, so feel free to message me there, but don't be offended when I don't get back to you right away. And also, if you are curious about what we're doing in Oracle Cloud and want to get hands-on with it, we do have a trial and also a free tier of our services as well. All right, well, that wraps it up for me today. I'm going to pull up the Q&A, so please feel free to ask questions. Thanks, Mickey. Everyone, if you have any additional questions, please do drop them into the Q&A box at the bottom of your Zoom screen, and Mickey will go through and start answering some of those. All right, so the first one I see is from Daniel: how do you handle complex deployments that are a composite of multiple microservices? Are these tools a practical choice for deploying more than one container microservice at a time? So I would say Tilt and Skaffold certainly are. It's very cool: within a Tiltfile, you can just reference multiple manifests to deploy, and the same thing is true for Skaffold as well. You can do things like deploy them in a given order if you need to. You know, not all services are going to work if they don't have one of their dependencies deployed alongside them. But yeah, at least Tilt and Skaffold are good for deploying multiple microservices at once, and in fact, that's kind of one of their value propositions. One person asked about the rationale behind choosing Draft, given how long ago the last release was and the diminished contributions. So Draft itself, even without the contributions, still does some really unique stuff with those Draft packs. If you really have not used Dockerfiles before, or Docker generally, and certainly not Kubernetes, the ability to simply run draft create in a directory is a really cool way to get up and running quickly.
I think that, for all of its other deficiencies, just that makes it worth looking at for some people. If you're a little bit more mature in your journey, looking at Draft probably doesn't make a lot of sense. To your point, I remember seeing a PR that was filed probably 12 or 13 months ago for a pretty fundamental feature that never ended up getting addressed. So that is a good point: it's not something that's actively in development, but it can be a good place for people to get started. Another question was: do you need the Docker daemon installed, or is there a daemonless build? So, depending on which tool you use, you have the ability to avoid building with the Docker daemon. With Skaffold in particular, depending on which environment you're using, you can run a Docker build in-cluster, locally, or somewhere else, like within the cloud itself. And one of their options is Kaniko, which I see, Daniel, you mentioned in your question. So that is something that is available to you. I see a question about whether Draft supports pre-build or post-build custom actions. That is not something that I believe Draft is able to do. It's certainly not something that I've done myself, so if it is possible, I'm not familiar with it, but I don't believe that it is. Let's see. There was one question about how these tools differentiate themselves from the CI/CD approach. I touched on that a little bit earlier, but I think it's sort of a right-tool-for-the-job situation. Someone from Tilt, I think Dan, wrote a great article about this, and I'm going to butcher the name, but it was something about particle accelerators and the right tool for the right job. Basically, you don't need a full-blown CI/CD pipeline.
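On the daemonless-build point, a Skaffold config using Kaniko for in-cluster builds can look roughly like the sketch below. The image name and manifest are placeholders, and the exact schema has shifted across Skaffold releases, so treat this as illustrative rather than definitive and check the docs for your version:

```yaml
apiVersion: skaffold/v1     # schema version varies by release
kind: Config
build:
  cluster: {}               # build in the cluster, no local Docker daemon needed
  artifacts:
  - image: greeting-app     # placeholder image name
    kaniko:
      dockerfile: Dockerfile
deploy:
  kubectl:
    manifests:
    - app.yaml              # placeholder manifest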
If you're trying to do something like I was doing, like changing that Zipkin endpoint, you don't need to commit to a version control repository, keep that old version of your code for all time, and then run through a bunch of tests and go through all of that if you just want to make a simple change. So if that's the case, you probably want a tool that's a little more performant, that's going to allow you to do things more quickly and not kick off some long enterprise-grade production pipeline. So I just think of them as doing something similar, which is a sort of repeatable set of steps as part of the software deployment process, but maybe on a different scale. One question was: are there additional steps required to integrate your current container with existing services in a larger cluster? That is, in fact, something that is handled by Kubernetes. Just because you're deploying these different services using these tools doesn't mean that they're going to behave differently than any other sort of pod you might deploy to Kubernetes, so you can really just think of them the same way. There was one question about managing microservice dependencies, for example, Skaffold having independent microservices. I'm not entirely sure I understand the question, but if you are talking about deploying multiple microservices at once and making sure that you're ordering them properly, that is something that Skaffold can do. If it's anything more in-depth than that, I would simply recommend looking at the Skaffold documentation and trying to find an answer there. So I see a question about, for local development in particular, how these processes compare to using mapped storage or in-container application restart loops, and there's also a reference to the Skaffold file sync. So, fundamentally, these tools are going to be creating a brand new iteration of your application every single time, with a new runtime, with an image that's stored in an image registry.
So you're getting a bunch of different things to roll back to. If you wanted to, you could very easily roll back to a previous iteration of your image, because it's all stored in that image registry. If you were doing something like just having a synced file system and restarting the container using some sort of container-side script, you wouldn't have the benefit of being able to roll back, because you wouldn't have the iteration stored in an image repository. There are a few other reasons I don't necessarily like that remote file sync method, which is what ksync does. In some cases, you end up opening up a lot of potential security risks to your cluster. You know, if ksync is deployed as a DaemonSet and it's actually writing to the file system or the storage volumes of the underlying worker nodes in your cluster, that's giving it a lot of access. Whereas when you deploy everything as a container, you can make sure that proper security policies are in place, like pod security or container security policies. I see a question: can these tools manage a config map or secrets required by the deployment? So what I would do is put that within the manifest file that you're going to deploy. For instance, in this example, Skaffold and Tilt both referenced an app.yaml file; you can simply put the image pull secret there and have that connected to your cluster. So that wouldn't necessarily be handled by the tool itself, but it would be handled by the manifest the tool is deploying. There's a question about my personal view on using the same tool set as the CI/CD pipeline, to avoid multiple ways of kicking off the same build or deploy code. I don't mind the idea of using the same skills or tool set for this kind of local inner-loop development as for a CI/CD pipeline. It's probably worth trying to match them as closely as possible, just so you get a kind of production-grade experience.
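To illustrate that config map / secrets point: these live in the manifest the tool deploys (the app.yaml in this example), not in the tool's own config. A hedged pod-spec fragment, with all names as placeholders:

```yaml
# Fragment of a Deployment's pod template; every name here is a placeholder.
spec:
  imagePullSecrets:
  - name: registry-credentials     # secret for pulling from a private registry
  containers:
  - name: greeting-app
    image: greeting-app:latest
    envFrom:
    - configMapRef:
        name: greeting-config      # ConfigMap surfaced as environment variables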
With that said, some of these tools do take longer, or can be a little bit more complicated. Again, if developer productivity is the ultimate focus here, it can be a lot easier to just deploy a simple YAML file than to do something like having a Helm values file and charts that are all referencing one another. So I think it ultimately comes down to what's going to be easiest for the people in your organization, especially if this is something happening a bunch of times every single day. I see one of the questions is: what's the best tool to put in your CI out of these three? I don't know; it's a good question. I would have to come back to you on that. I think probably not Draft, for some of the reasons mentioned before. I've seen some people do some pretty nifty things with Tilt integration with CI systems, especially because of that Starlark configuration, but I would have to think a little bit more about that. And then I see another question: what do the communities look like? That's a very good point. Just the fact that Google is doing it typically gives products a big bump, so the fact that Skaffold is a Google product does mean that it's probably going to get a bit of a bump over the other tools. We saw the same thing with Istio: even though there are a lot of other service meshes out there, Istio is definitely one that gets a lot of attention because it is the one that's backed by Google. I would say that Skaffold itself is a good tool in its own right. There are certain aspects of it that don't work as well as they maybe should; I've run into a lot of issues with the port-forward feature in the past. But I think it is a tool that has legs, a tool that's worth using. And with that said, I think the fact that Skaffold does have a development team, and they do have funding, and they're not going anywhere, means it's a tool that's going to stay viable over time.
So I don't think getting started with that would be a bad idea. In fact, I think it would be a pretty good one to start with, because it's not the most complicated tool and should be easy enough to get started with. All right. We are at the top of the hour, and that is all of the questions. I think that is probably the best timing I've ever had. Perfect. Thank you so much, everyone. Thanks, Mickey, for a great presentation, and thanks, everybody, for joining us today. The recording and slides will be online later today on the CNCF webinar page. Thanks again for joining, everyone. Looking forward to seeing you at a future CNCF webinar. Thanks so much. Thanks, everyone.