Okay, I think we can get started. Hello everyone. I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, "YAML Is Optional: Exploring an App Developer's Kubernetes Options." I'm Karen Chu, Community Program Manager at Microsoft and Cloud Native Ambassador, and I'll be moderating today's webinar. We'd like to welcome our presenter today, Paul Burt, Technical Product Marketing Engineer at NetApp. Just a few housekeeping items before we get started. During the webinar, you're not going to be able to talk as an attendee, so if you do have something to ask, there's a Q&A box at the bottom of your screen. Please feel free to drop your questions there and we'll get through as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, please be respectful of all your fellow participants and presenters. And with that, I'll hand it over to Paul to kick off today's presentation. Great. Thanks, Karen. So yeah, as Karen noted, we're going to dive into an app developer's perspective on working with Kubernetes. This is a brief outline of what we're going to cover today. We're going to start with some level setting on why containers: why did we get into this whole mess to begin with? We're going to look at why YAML is seen as somewhat of a tragic tool that we're forced to use in this space. Then we're going to return to the dev perspective and highlight some of the things developers lost when they made the transition over to container technologies. After that, as the big chunk of this presentation, we're going to explore a smattering, a tasting plate, of the developer-oriented options, and then we'll end with a summary, naturally. So, you know, before containers, we all lived in a world similar to this.
We may have deployed a Node.js application, and in order to simplify things, our production or operations engineers may have told us to pin to a specific version of Node. This complicated things in that the libraries or modules we depended on sometimes didn't support those versions. Sometimes those versions were tough to upgrade from once we had so many developers depending on them. This was a relatively messy world, and this is a phrase that anyone who's from that world is probably used to uttering quite frequently. I think of the benefit of containers as taking the phrase "hell is other people" and amending it: hell is other people's development environments. For me, one of the big things containers solved is the conflict of things working locally but not working in production. It doesn't solve everything under the sun, but it's one of the biggest features of a container-based workflow that things are isolated neatly and packaged up very nicely, so we can convey them over to the folks on the other side of the team who are helping us run our systems. The advent of containers happened to coincide with a couple of other things. Some of the difficulties we experience are certainly not only due to containers; some are due to the advent of cloud computing. Microservices are another thing that containers tend to be aligned with. So transitioning to those philosophies, in addition to the transition to containers and Kubernetes, can be part of the complexity that gets added in when we look at what's required to move to this container-native or cloud-native mindset. So one thing that developers lose is things like hot reloading. If you're a Node.js developer, as in our example, Nodemon is a very popular tool for hot reloading: once you save your file, it'll automatically refresh the new version in your web browser.
Once you introduce having to build a container image, you lose some of that quick feedback loop, which is unfortunate. New developers working in this container paradigm also have to learn Dockerfiles, which, you know, you might say is not that much, but it is just another little nudge in the direction of complexity and difficulty compared to not having to think about all this stuff. And otherwise, a lot of tools just had to be replaced to work with containers in this new mindset. Kubernetes has been called the Linux of the cloud, and that's because back in the day we might have talked about the LAMP stack: Linux, Apache, MySQL, PHP. Linux was part of the base layer that we targeted. Kubernetes is sort of becoming that new base layer, and with that comes having to learn everything Kubernetes brings along with it. That can complicate things if we're a developer just trying to get our app running without too much delay. Most of what developers tend to be interested in is velocity. When it comes to speed and your ability to innovate, you can see how containers maybe slow some of that down, with the learning curve of all these things we have to relearn and retool in order to work in this new context and environment. Monitoring has changed, with Prometheus bursting onto the scene, and a lot of the other tooling around security, RBAC, and storage, connecting storage to your application, all requires a new thought process for how you're going to deploy your app and get it working. And our language for interfacing with all of these new resources that Kubernetes brings us is YAML. Now, I like YAML generally, but when you work with it day in and day out, some of the warts become very apparent to you. The spacing, for instance, can be an issue.
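To make that concrete, here are a couple of classic YAML footguns. The keys are made up for illustration, but the parsing behavior is standard YAML 1.1, which is what most Kubernetes tooling historically used:

```yaml
# Unquoted scalars can silently change type ("the Norway problem"):
country: NO        # YAML 1.1 parses this as the boolean false, not "NO"
version: 1.20      # parsed as the float 1.2, dropping the trailing zero
enabled: yes       # "yes"/"no"/"on"/"off" all become booleans

# And indentation is structure, so whitespace carries meaning:
env:
  - name: DEBUG
    value: "true"  # must be quoted, or it becomes a boolean where the
                   # Kubernetes API expects a string
```

Quoting everything defensively avoids most of these, but you only learn that after getting bitten.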
I'm not going to try to convince you of all the things that are a struggle with YAML. Instead, I'm going to rely on more prominent voices than myself to make that plea, particularly Joe Beda here, one of the creators of Kubernetes, saying that it's tragic that YAML is the thing we all work with. I also like Bryan Liles and Kelsey Hightower's take here, particularly calling YAML the assembly code of Kubernetes, in that it's an intermediate form between what humans can understand well and what machines can understand well, but that compromise ultimately means no one is fully happy with how YAML is working. So I think Bryan is totally right here in that YAML asks for a higher-level construct to help you manage it. A particularly good example of the challenges of YAML: you'll note that this is just a couple of weeks back, October 9, 2019, when a CVE was discovered. This comes from a family of attacks that I think have been known in the XML world. Basically, when you have a markup language that allows you to use anchors, things that reference other things, you can wind up with an expanding set of references, as pictured here, that can quickly spike the CPU to 100% when a server tries to process it. And that's exactly what was happening. There was a part of the API server that would process and look at the YAML file before actually validating that someone should have access. It caused a big issue in the community, so you've likely updated your Kubernetes cluster recently as a result of this attack being announced. I particularly, again, enjoy Joe Beda's take on this. I think he's thinking in the open here, asking other people to help him brainstorm, more than punching down at YAML. He's asking the question to really explore the space: is there a way we can eliminate YAML as a problem here?
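The shape of that attack, often called a "billion laughs," is easy to sketch. This is an illustration rather than the actual CVE payload; each line of aliases multiplies the previous one, so a tiny file balloons exponentially when a parser expands the anchors:

```yaml
a: &a ["lol", "lol", "lol"]
b: &b [*a, *a, *a]   # 9 strings once expanded
c: &c [*b, *b, *b]   # 27
d: &d [*c, *c, *c]   # 81
e: &e [*d, *d, *d]   # 243; a few more lines like this reaches billions
```

A few kilobytes of input can demand gigabytes of memory and pin the CPU, which is why parsing untrusted YAML before an authorization check was such a problem.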
These extra features, these higher level features that YAML has baked in complicate things as demonstrated with the recursiveness there. I liken this to Joe essentially suggesting that we nuke everything and start over. It's obviously not plausible. But it is an interesting sort of experiment to think about what could we do if we didn't have to interface with YAML. And that's sort of where we're going to go as we look through the rest of the stuff in this space. So I think really summarizing that Kubernetes is fantastic. We all love it because it solves an inherently complex problem for us. Distributed systems are hard. Some of that complexity Kubernetes resolves. Some of it is, you know, it's what Fred Brooks would call essential complexity. It's inherent to the problem space and very difficult to remove without sort of a leaky abstraction. And, you know, just looking at the amount of resources that an average user of Kubernetes has to get familiar with and think about when they're deploying something to a Kubernetes cluster. It's very clear to see how overwhelming and demanding this is. So it's understandable when people complain on Twitter and in Slack and over coffee about how much of a pain Kubernetes is. Why do people do this to themselves? Well, they may not have experienced the challenge of running distributed systems without Kubernetes, but, you know, it's still a fair criticism that the introduction of Kubernetes and asking them to get on board with containers and Kubernetes is part of what is challenging for them. So I call this space shuttle design. You know, Kubernetes is designed to a very fine precision and it protects you from the hostile environment of the distributed system space. Distributed systems are very difficult to do correctly. 
If you've studied Paxos, or looked at Raft and the history that brought those algorithms into our domain of knowledge, you know they've been pretty burdensome to work with; Raft is actually an attempt to simplify Paxos and make it approachable for us mere mortals working in distributed systems. So Kubernetes is our space shuttle. It is our vehicle that is taking us to a hostile land, keeping us alive there, and allowing us to flourish. But space shuttles are complex. Space shuttles are not like our TV remotes. (The modern remotes we work with, that is; old-school remotes are something else.) As developers, or just as human beings, we tend to prefer designs that are elegant and sleek, where there's not much of a learning curve and we can infer from context what's expected on every screen, and that can guide us through the process. Looking at the list of Kubernetes objects, that is obviously relatively difficult to infer from just context; you have to do a bit of studying. So in order to solve those problems, some of the innovations these days are targeting YAML, and some are targeting the CI/CD workflow so that Git can be your source of truth. Often a lot of these solutions span multiple problem domains in what they're trying to address. But at the end of the day, we really have to figure out all of these things for developers coming on board with Kubernetes. For the most part, none of these issues are solved out of the box for them. To some of them, it may be more appealing to go back to a pinned version of a VM, or a pinned version of software that you deploy to production, to keep things working nicely with each other. Most of these solutions run on a spectrum. Some make YAML suck less by filling in default values or minimizing the number of fields you have to enter in manually; they give you tools or a DSL or command-line options to automatically parse data and make things magically work. Others are tools that are a little more opinionated; they obviate some of the more complex things, which limits your freedom of action as a developer at the end of the day, but it also means you can get closer to that push-button workflow. What works best for your organization and your developers is wholly up to you, but I think in general the Kubernetes distributions we've been consuming and downloading haven't been getting very opinionated in this way. So it's a little tragic that developers are left to the whim of their operators, or of their organization giving them the time, to customize their Kubernetes in such a way that it can really serve them. So let's look at some of the tools that are available to us to simplify this YAML conundrum. The first is one that I know the CNCF has done a webinar on recently; I think it's also been donated to the CNCF sandbox. It's Brigade. It's from the folks who started Deis and eventually moved into Microsoft, and, like, shout out to them for being some of the first people to really think about developer workflows in the Kubernetes space. I like the Brigade homepage just because they are literally advertising "leave your YAML at home." One thing to note here is that they've introduced a JavaScript-based DSL. With this domain-specific language, you could argue that maybe it's not as declarative as you would like it to be. I'm going to argue that it is okay, and actually totally natural, to mix procedural or imperative tools with declarative software, and I'll give you the example of every developer out there who runs a query in SQL and then downloads the results of that query into Python and continues to shape the results in Python.
I think that is an accepted and fruitful way to work, and what a lot of these tools are doing is trying to figure out where the right boundaries are between the declarative piece and the imperative or procedural piece. So this is cool. The idea behind using JavaScript as your DSL is that JavaScript is one of the most popular languages on GitHub, if you look at the statistics for all the repositories that are up there. So it's a language that's widely understood. And what Brigade does as a tool is solve the CI integration with Kubernetes. As a developer, you generally don't want to use a tool like kubectl; you want Git to be your source of truth. Brigade enables a git-push type of workflow where Git is your source of truth, and you can check dashboards and make sure things are working the way they're supposed to. Brigade gives you the tools to create specialized workflows and pipelines to get your code running on your Kubernetes cluster. Another early entrant into this space is Metaparticle. It's gone a little stale over time, which we'll look at in a bit, but it takes the declarative piece and, rather than giving you a DSL like Brigade, it gives you an SDK for whatever language you're working in. Instead of your Kubernetes resource being a separate file that you have somewhere in your repo, you're defining it directly inline with your code. The hope is that this makes it easier to see exactly how the application is supposed to be deployed. The Metaparticle system then reads this once you pass all your tests and can help push it out to your Kubernetes cluster. So that's the idea behind Metaparticle. As noted, it is a little stale. When I checked the issues, there was a comment from August 26, basically asking what's happening with Metaparticle, that was never responded to.
So I think we can safely say that the focus is elsewhere at the moment for Brendan Burns and the team that brought this out, but it is open source. So if you think this is a great idea and you like the philosophy of an SDK, I'm sure they would be happy to have folks get involved and continue building on it. What Metaparticle does is lower the burden of learning the Dockerfile or Kubernetes YAML formats. It lets you work with the language and tooling you're already accustomed to, so it lowers the learning curve required to get your resource up and running in Kubernetes. Isopod is something that's more recently come onto the frontier. It's an interesting approach, also DSL-based. This is an example of what Isopod might look like, taken from some of their docs. You can see some distinct things here. Number one, it looks vaguely Python-like. That's because it's based on a language called Starlark, the same language that the build systems Bazel and Buck use. Tilt, another program we'll look at in a little bit, also uses Starlark, I believe. The thing they're trying to address here is that your testing needs to happen along with your config changes, and being able to do that in a procedural or imperative way, using control structures like if/else and all that stuff, is pretty conducive to getting things up and running. So this is, I think, the base file of what your Isopod format would look like, and I'd like to think of Isopod as solving the need to test your configuration changes before you push them to production. I think there was an issue with BigQuery a year or so ago due to config changes. That's not to say that BigQuery dropped the ball or anything; it's just to say that this stuff is really, really hard, and the best of us can't be perfect when we manage these things. So testing is very important.
And I think the push towards having your tests live with your config changes is relatively important. CNAB is another, I think, modern technology that a lot of folks might not be familiar with. If you check their website, it stands for Cloud Native Application Bundles. They say CNAB facilitates the bundling, installing, and managing of container-native apps and the services that go along with them. This is just the idea that an application in Kubernetes is often not just a single container; even if it is a single container and there are no sidecars or anything, there are usually a lot of supporting services that go along with it. If I'm elevator-pitching this to people, I like to think of it as Docker Compose, but more neutral. Docker Compose is obviously tightly coupled to Docker Swarm and Docker's way of doing things. The container ecosystem, with OCI and runc, has since become a little more agnostic, and CNAB is an agnostic, more neutral version of Docker Compose where you can collect and group your application together and push it to different locations. It's not specific to any one thing. So if something comes out beyond Kubernetes where you want to run your application, CNAB hypothetically should work in that context, which is pretty cool. You get the immediate advantage of having a neutral way to define your app, and then the long-term potential advantage of it being a technology that lives on beyond the life cycle of a single product. Yeah, the world is bigger than Kubernetes. If you happen to use Helm, I believe the folks behind Helm have even said that CNAB could potentially be something that backs Helm, which is cool as well. One thing to note is that CNAB has come up in the news in recent weeks.
This is a post by Jimmy Zelinski, a PM and I think engineer from CoreOS and Red Hat, talking about OCI artifacts and some of the early work they did in 2016-2017 thinking about how to store Helm charts, or more complete applications, in a container registry. The problem is that a container registry is designed just for a container. It's not designed for all of the things around the container, like the services and everything. And CNAB is becoming the thing being explored: should we store a CNAB object in our registry as well? This is something they're actively designing at the moment. As Jimmy points out in this Medium post, you should absolutely reach out to the folks at Red Hat or Microsoft or any of the other companies contributing to this and doing the early work on it, because they can't read your mind. If there's an essential feature that you think needs to be added to the spec for CNAB to work for you as a build artifact you can store in your container registry, definitely get in touch. You will be helping them out, and I'm sure they will be very grateful for feedback from a person on the front lines doing this stuff. So CNAB solves the problem of portability beyond just Kubernetes. It also somewhat solves the problem of organizing the disparate resources associated with a service running in Kubernetes. And good organization, as we'll see in some of the other tools we look at, does help reduce some of the complexity of managing your workflow as a developer. Something that has come out a little more recently from the same team behind CNAB at Microsoft is OAM, the Open Application Model, if I remember correctly. For Kubernetes, this is implemented in a reference implementation called Rudr. I saw this happen on Twitter as I was creating the slide deck recently, so I figured I'd cut and paste it in here.
And the idea, as Harry puts it here, is that developers don't have to consume system calls directly to write a program on Linux. If YAML is the assembly of Kubernetes, then that's essentially what our developers are doing right now when they write applications for Kubernetes. So how can we simplify that? What layer can we introduce to create a user space? The answer OAM gives is that maybe you can organize your YAML files in such a way that some of the YAML is ideally suited for an application operator to fill in, so the person running the infrastructure fills in those fields, and other parts of the YAML file are things a developer fills in, because they know more about how the application runs at that level. If you can give each of these roles an idealized, more organized file that they can easily pencil values into, it can reduce some of the confusion and ease the burden of bridging the dev-to-ops transition when you're pushing code to production. I think that's a really cool idea, and something a lot of folks miss when they're initially looking at things. This is what a YAML file for OAM and Rudr looks like. It looks very similar to some of the YAML we're already writing; a lot of the same values are there. We're still defining the port and some additional characteristics like the image and any other config values that are important. The system then digests that and produces the resulting Kubernetes resources at the end of the day. The idea is that even though it's still YAML, it's more organized, and it becomes your new interface. You are looking at less at the end of the day, because dev and ops are collaborating better.
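For a flavor of that split, here is a rough, from-memory sketch of the developer-owned half in the early Rudr format; treat the apiVersion and field names as approximate, since the spec was changing quickly at the time, and the image name is made up:

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic        # the developer's half of the application
metadata:
  name: frontend
spec:
  workloadType: core.oam.dev/v1alpha1.Server
  containers:
  - name: web
    image: example/frontend:1.0   # illustrative image
    ports:
    - name: http
      containerPort: 8080
```

The operator then binds components like this into an application configuration with the infrastructure-side details, which is where the role separation shows up.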
So in a way it's exploiting Conway's law, the idea that the communication structure of your organization affects how your programs are built and deployed. You're exploiting the fact that there tend to be different concerns between developers and operations folks, as much as DevOps wants to make everything everyone's concern. I don't know that anyone ever fully gets to that ideal. We pursue the ideal because it's a good thing, but at the end of the day we are beholden to separate concerns. So the idea here is that things can be simplified by looking at them through the lens of what your role is, which is pretty cool. Buildpacks are an old technology if you're used to Heroku or Cloud Foundry; this is the tech they really invested in. And these are really simple. You can see the only real important bit here in this app, I think it's app.js or app.json, is the image we're specifying, which is the buildpack we want used for this piece of code to get built. Usually there's also something called a Procfile, which is just you specifying the command required to start your web application, and basically you're off to the races. A lot of things are figured out for you. The container gets created automatically, because that's what the buildpack is: a process for bundling and building a container. And it will then deploy to Kubernetes if you have other tooling hooked in to make Git your source of truth. So if you were to combine buildpacks with something like Brigade to bridge that gap, you could have a really neat system in place. But one thing to note is that buildpacks are one of those solutions that abstract some things away. You're losing some freedom of control, obviously, by not being able to specify a lot of the details you would in a normal Dockerfile.
You're dealing a lot more with idioms, expected values, and intelligent defaults. You can customize those to your organization by creating your own buildpacks, but that is still a burden on the system, so it's a trade-off rather than a pure solution. Tilt is a really cool tool. It solves the Nodemon problem we looked at earlier. This is an example Tiltfile from one of their tutorials, and you can see the important part here: it's written in Starlark, that same Python-like language, and it describes how live updates are going to work and how things are going to be synchronized. What Tilt is doing is hot reloading. It's replacing the code inside the container with the most up-to-date code you just saved in your editor, and you're spared the burden of recompiling your container image by hand, or through a script you wrote, every single time. They've figured out a lot of the edge cases and complexities around that and given you a nice tool that not only speeds up your iterative development but also makes debugging a lot easier, when you want to make a change and see if it fixed the bug you're looking at. So Tilt is bringing back that quick feedback loop that you usually lose as a developer when you move to container technologies. Cool. This next one actually isn't an external project: admission controllers. Those of you who are familiar with Kubernetes may know that these are bundled in, and they're how a lot of authorization and such happens when things first enter the cluster.
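One of those built-in controllers, LimitRanger, is driven by a plain Kubernetes object called a LimitRange. A sketch with illustrative, made-up defaults might look like:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: dev          # LimitRange applies per namespace
spec:
  limits:
  - type: Container
    defaultRequest:       # filled in when a container omits resource requests
      cpu: 250m
      memory: 128Mi
    default:              # filled in when a container omits resource limits
      cpu: 500m
      memory: 256Mi
```

With this in place, a pod spec that omits its resources section gets these values injected at admission time.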
There are mutating admission controllers that can actually make changes to the resources coming in, and I like to think of the basic admission controller LimitRanger as one of those things that can make changes and make life easier. If someone doesn't specify resource limits for the application they're deploying to your Kubernetes cluster, LimitRanger looks at it and actually enters a default value for you. This is one of those things that can reduce the burden: it's one less field a developer has to enter into their YAML and think about. And this is specified at the cluster level, so depending on your environment, you can have different values that work for the context that app is running in. I like this a lot, but I think one potential downside is that it's a little bit opaque. You may deploy your application to production and forget that a lot of these values are getting mutated or tweaked as things are pulled into the cluster through the API server. It can seem sort of magical, so you definitely want to think about how you communicate how your admission controllers work. If you want to enable or disable admission controllers, I believe there are flags on the API server, so talk to your cluster admin about doing that. Helm is sort of the standard that I hope all of us are familiar with, and a Helm chart is something that looks very similar to a normal Kubernetes object. It's even got the name of a replication controller up top here in this example. One thing to note is that near the bottom of this page there are some curly braces, and what they're doing is templating. These are Go templates, which look very similar to Mustache templates or Jinja templates. There are some commands or procedures you can feed in, in addition to just a list of values, as pictured here.
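As a trimmed-down sketch of that templating (a hypothetical chart; the selector and labels an apps/v1 Deployment needs are omitted for brevity):

```yaml
# templates/deployment.yaml in a hypothetical chart
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}   # supplied by values.yaml or --set
  template:
    spec:
      containers:
      - name: web
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Something like `helm install my-app ./chart --set replicaCount=3` renders those braces into plain YAML before anything reaches the cluster.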
There is some bit of this that's unfortunate, in that people do have to be familiar with Go templates' specific flavor of templating to really dig into this stuff, but I think that learning curve is relatively minor. A lot of the tools we're going to look at in the next couple of pages, and one we already looked at, Isopod, are critical of this, and I don't think it's a big knock on Helm in general. It's more like the famous Bjarne Stroustrup quote about C++: the only languages people complain about are the ones they actually use. The people who are passionate about this stuff are passionate because they're using Helm day to day, and it's worked for them. It figured out a solution to a problem well enough that it's become very popular and well loved by a lot of folks. Now, what a lot of these tools are looking to do is stand on the shoulders of the giants that came before them and see if they can improve things a little. So as we look at ksonnet and kustomize, the context for those tools is really about asking whether we can do something even better than templating. Templating works well enough for some cases, but when you get to complex multi-cluster, multi-environment scenarios, there can be some challenges. What Helm does really nicely, and this is part of what CNAB does too, is bundle your applications together. But it also does something nothing else here does: it gives your developers a menu of options to choose from. And it helps manage the basic life cycle of your application, if your charts are defined well. So Helm is a great place to start if you have absolutely no developer-friendly tooling. And I believe Helm version 3, which is the version with the Tiller-less deployment, uses CRDs and other fancy controller stuff on Kubernetes instead.
That is in release-candidate phase right now, so now's a great time to download it, test it out, and give your feedback to the Helm team. ksonnet is a tool based on something called Jsonnet. The idea is that if you give people a basic command-line tool and some primitive objects, or prototypes, or components as they sometimes call them, you can combine those prototypes and primitives on the command line to very quickly generate the type of resource you want to deploy. I think the obvious criticism here is just that there's a lot of complexity: you're learning not only this new CLI tool, but a whole language that comes along with it. This is an example of the Jsonnet that ksonnet is based on. The idea is that instead of using YAML, where you have to deal with some of the funkiness of those anchors that caused the CVE we looked at earlier, the billion laughs attack, we're using a simple format that extends JSON, and in that format we can define functions and some other fancy things that can really help with managing these resources. Unfortunately, it's worth noting that when Heptio was acquired and they took inventory of what was happening, how much excitement or community interest there was, they found that a lot of folks found ksonnet pretty challenging, so they started to scale down their investment in the project. It's still open source, similar to Metaparticle, and I think there are a lot of interesting and good lessons to take away from what ksonnet did. In particular, Bryan Liles gave a great talk at KubeCon recently, and I think this design criterion is key: I want the easy things to be easy and the hard things to be possible.
I hope that's something everyone who's working on these developer-focused tools keeps in mind. "The hard things should be possible" is probably speaking to not wanting to limit the actions people can take, so buildpacks are maybe discarding that a little early. But this idea of keeping things simple, with the goal of making things accessible and approachable, is a great north star for anyone looking to improve the developer experience. So ksonnet, I would say in contrast to Helm, addresses a lot of the problems people have with multi-cluster, multi-environment, multiplicatively complex configuration scenarios. It really goes to great lengths to keep your code DRY and composable, so you can assemble what you need together on the fly. That comes with some learning, but hey. So whereas ksonnet eschews YAML in favor of JSON and uses an entirely different format, Kustomize goes the other direction. Kustomize says, you know what, YAML is a disease, so what if we gave you another disease, which is more YAML, and let those two diseases fight it out? I think there was a cure way back in the day, one of the primitive cures for malaria, where they would inject you with something that would give you a fever, the fever would kill the malaria, and bang bang boom, your body gets over the fever and you're good to go. It sounds primitive when you first hear about solving YAML with more YAML, but once you dive into it, it's actually really cool. There's a great talk on this, which I don't have time to emulate fully given the time limits of this presentation: "Kustomize: Deploy Your App with Template Free YAML" by Ryan Cox. It's another great KubeCon talk diving into how this works: you define a base, or use a generator to generate a base for you, and then overlay, or patch, your values on top of it. So this should look familiar to anyone who's worked with Kubernetes YAML, hopefully.
This is an example of generating a base that we might want to use with an overlay. And then if we're doing an overlay, say for different environments, in this case we're renaming the prefix of the name to dev- or prod- based on which environment we want to deploy to. So if we go back one step: the metadata name my-nginx would become dev-my-nginx in the dev context and prod-my-nginx in the prod context. Kustomize ends up being a lot simpler: there are maybe a handful of different overlay directives that give you access to a whole lot of options for customizing your templates, and you can overlay multiple times on a single base. You can use one base in many different projects, and in this case, as the base for both dev and production. Kustomize has actually been bundled in with kubectl as of 1.14, so this is a tool that is potentially available to you today. You may not have even realized it's installed on your system, and it could make life a lot easier for managing and creating Kubernetes objects. This tool manages to stay declarative, in that we're not using control structures or a DSL that puts us through for loops or if-then statements, and it applies very cleanly on top of the format we're already familiar with. If you're like me and you were a little scared by solving YAML with more YAML, and found that a little confusing, I definitely want to point you to the Ship project, which is another open source project that assists with Kustomize. It can show you the desired target: how these different base files get patched, and how the values are going to change at the end of the day when multiple overlays enter the picture. It's very easy to lose track of what is changing what, and of what order things are happening in, and a tool like Ship can really help simplify that.
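To make the overlay example concrete, here's roughly what the base-plus-overlay layout looks like. This is a sketch from memory of Kustomize's conventions circa kubectl 1.14; the paths are illustrative:

```yaml
# base/kustomization.yaml -- the shared base
resources:
  - deployment.yaml   # contains a Deployment named my-nginx

# overlays/dev/kustomization.yaml -- patch the base for the dev environment
namePrefix: dev-
bases:
  - ../../base
```

Running `kubectl apply -k overlays/dev` would then render and apply the Deployment as dev-my-nginx, while a prod overlay with `namePrefix: prod-` renders prod-my-nginx from the very same base.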
So I think with Ship, or a tool similar to it, you get a very fluid and nice experience with Kustomize that ends up being really cool; the whole is a lot more than the sum of its parts. The YAML-ness, the YAML-native, YAML-centric aspect, fades into the background for me when I'm able to interact with it in that kind of really nice, fluid way. So Kustomize, similar to ksonnet, solves the complexity problem, but it does so while keeping things declarative, and it is also oriented toward keeping your code very DRY when generating these config files. So, that is our preview of some of the developer-oriented tools. Invariably there are a lot of others: Tilt is in company with Skaffold and similar projects, and Brigade has competitors in the form of the Argo project. This is just a tasting plate of some of the different types of solutions that are out there, because it'd be impossible to dive into all of them, but there are a lot of other great tools that we haven't covered here today. One thing I'll note before we get to the wrap-up and summary is that the experience you want to push for, for your developers, kind of depends on your context. In the context I work in, we're trying to really simplify things for companies that are just onboarding to Kubernetes to begin with, which means we want as minimal a burden on them as possible to port their apps over, so we've invested in buildpacks and some of these other technologies to give you this git push workflow.
The system is NetApp Kubernetes Service; if you're interested, you can check it out. But I think what everyone is aiming for at the end of the day is something similar: taking a handful of these tools and gluing them together, integrating them, because I don't think any one tool is going to do it all. So that's point six on our summary here. The other things to note: Kubernetes is inherently complex; it's our space shuttle protecting us from the ravages of distributed-systems space outside. That's okay, but people still need to know how to use it, and not everyone needs to be an astronaut trained for years to go up in the space shuttle. So we need to look into solutions that can make it a lot more approachable. There are a lot of solutions out there, probably too many for any one person, myself included, to go through fully. And it's worth noting that no one tool solves all of these concerns currently. I think in the future we may see frameworks emerge that bundle some of these tools together and provide a neat workflow that way, but we're early enough in the process that everything is still mostly modularized to a particular domain. Some tools work by reducing the amount of YAML we have to work with; we saw that with admission controllers, and we saw that with buildpacks. Others try to take you away from YAML altogether: they provide you with a DSL, or JavaScript, or something like Python, hopefully a language that you already know, and allow you to simplify by reducing the burden of having to learn yet another system or yet another tool. And yeah, that is the presentation for today's webinar. So thank you for attending. If there are any questions, I think Karen is going to help tee those up for me now. Thank you, Paul, for a great presentation. We have time for questions. If you have questions, as a reminder, please drop them in the Q&A tab at the bottom of your screen, and we'll get through as many as we have time for.
There currently aren't any, so... Okay, I'll send you one as soon as some come in. I have a question for you, Karen: how did I do on Helm and the other projects we covered? Are there any additional details you would add, considering we work pretty closely with those folks? No, I think you're pretty up to date with most of it. Thank you. Okay, cool. Someone asked if we can share the recorded webinar. I believe this will be on the CNCF website afterward, including the slides. Yes, there's also a CNCF YouTube channel that I subscribe to, and they are lightning quick; they usually get that stuff up in a couple of hours. With it being conference season, don't hold them to that standard, but I've been impressed by how fast they've been in the past. Let me give it another minute for questions and see if anyone drops any in. There's a question: "I missed the first few minutes. Do you have details on what the hyperscalers are doing in this space?" Yes, to some degree. I assume by hyperscalers you mean the first-party cloud options. It seems like a lot of them are offering their own marketplace of solutions, so they're curating good Helm charts and good operators to start out with as a basis for your Kubernetes cluster. A lot of them are integrating monitoring and a lot of other details deeply into their stack, so those are taken care of for you. Security, authentication, and authorization are out of your purview, so that's one less thing you have to worry about when you're deploying your app to Kubernetes. And then otherwise, I think a lot of the work that's happening on serverless is actually developer-oriented. Or rather, "serverless" is a very bad name for a very cool idea, I like to think. So stuff like Knative, or I think the Microsoft project is KEDA. There may be other serverless projects, like OpenFaaS, that get integrated in the future.
But, you know, each cloud seems to be investing in a space that simplifies things. I think for AWS, their play is Fargate, where the basic unit you think about is the container, not the server. We'll see where they take that; I think they're still a little bit in development compared to Microsoft and Google. But I think what they end up layering on top of those serverless options is going to be a big part of how they address developer productivity. I should also mention the hot-reload type tools. We talked about Tilt as one option; Google has released Skaffold, which does some similar things to Tilt, as an open source project you can use to iterate really quickly on the containers you're working with. I would argue that Metaparticle, as a side effect, achieves some of that same process of providing you a faster feedback loop. But a lot of that is scattered or optional, or, in the case of Skaffold, it works with any cloud, not just Google Cloud, so it's still being figured out. Great. Next question: is there any Pythonic way to solve YAML problems? Yeah, I think there are two questions here. One is, can you make it easier to read and more terse? And another Pythonic attribute is that there's really one blessed way to approach a problem, usually a best practice or the standard way. I think that standard way of addressing things is still getting figured out. Tools that rely on Starlark, like Isopod, are a great start: rather than using YAML, they output to protobuf, I believe, so you really are working directly in Python-like code and solving a lot of these issues. That said, a lot of the fields that get filled in, like the port number for your app, or the container image you're basing your work on, or any of those other details, still have to come from somewhere.
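As a sketch of what "working directly in Python" can look like, here's a minimal, hypothetical example of the intelligent-defaults-you-can-override style. This is not Isopod's or any real tool's API; every name here is illustrative:

```python
import json

# Hypothetical sketch: build a Deployment manifest as plain Python data,
# with sensible defaults (replicas, port) that the caller can override.
def deployment(name, image, replicas=1, port=8080, **spec_overrides):
    manifest = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "template": {
                "spec": {
                    "containers": [
                        {"name": name, "image": image,
                         "ports": [{"containerPort": port}]}
                    ]
                }
            },
        },
    }
    # Escape hatch: let callers patch past the defaults when needed.
    manifest["spec"].update(spec_overrides)
    return manifest

# Kubernetes accepts JSON as well as YAML, so the stdlib is enough here.
print(json.dumps(deployment("my-app", "my-app:1.0", replicas=3), indent=2))
```

The trade-off discussed next is visible even in this toy: `port=8080` is a lossy default that hides a real decision, and `**spec_overrides` is the "little more magical" override mechanism.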
A lot of those details you're either losing in a lossy abstraction, because you're setting a default value, or you're just reorganizing how you fill in those values. So you're going to make a trade-off at the end of the day: either reorganize things in a more Pythonic way, while there's still a whole boatload of values to fill in, or pick intelligent defaults that you can override, which can be a little more magical. I don't think the community has quite solidified around what's best to give an intelligent default versus what's best to just reorganize in a better way. That's something we're exploring, so explore with us. Cool. Next question: if you were just starting with Kubernetes, what would you use? What would I use? I think Helm is a great starting place. If you're just getting started with Kubernetes and you don't want too much of this stuff to get messy, you can start with Helm. Helm includes templates, which are sort of the bare minimum you want in order to create flexible and easy deployments, and Helm also helps organize things so that your logical application is bundled together in a set of files. So with Helm you don't really have to worry about CNAB or OAM or Rudr, or any of the other solutions like ksonnet or Kustomize; Helm does a lot out of the box. But it doesn't do everything. It can benefit from being combined with tools that shorten the hot-reload loop and give you more feedback, faster, as a developer. So start with Helm, and then branch out from there after you get some comfort. Awesome. Okay, next question: one of the challenges is secret and configuration management, when YAML contains sensitive information. Is there any project on Kubernetes to handle this?
Yeah, actually, if you look into Isopod, ksonnet, or, gosh, Kustomize, all of them have specific sections in their docs that call out bundling in secrets. I'm sure some of the other tools I covered today have some information on that too, but they all have their own unique approach to safely pulling secrets into the file that contains your values. There's one project, I believe by Bitnami; the name escapes me. It's maybe Bitnami or GoDaddy, I can't remember which, but it's a way to encode a secret in a form that is decodable only by the secret store on your own Kubernetes cluster. It requires some tooling to be there as well, but you can safely commit it to your Git repo and not worry about someone just deploying it to another cluster and having it work, since decoding it relies on secret information stored on your Kubernetes cluster itself. So I would look into the projects by GoDaddy or Bitnami for something a little more advanced in that area. Okay, I think this will be our last question: what about tools like Rancher, which give you a GUI and the possibility to modify the Kubernetes environment? Yeah, I think tools like that are great. They're really useful when you're live debugging, but I would say they are antithetical, or counter, to some of the goals of Kubernetes. Part of what Kubernetes and containers are about is immutable infrastructure. And in parts of this presentation we talked about Git being your source of truth. When you're applying a patch using kubectl, or you're modifying things live in production and deploying that way, you lose Git as a source of truth. You lose that insight into everything that's going on in the cluster, and the knowledge that it's all been fully tested by CI.
So I think that approach is, again, a useful tool to have in your belt, but it would not be the way I'd want to deploy everything to production for my clusters. Okay, last question; I'm going to do this quickly. What is your opinion about TOML? T-O-M-L? Yeah, TOML is an alternative to YAML. I like TOML. It reads a little more cleanly to me, and I don't have to deal with a lot of the spacing ugliness, but I think it fumbles in a lot of the same areas as YAML at the end of the day. When you get a lot of data in TOML, it can be as burdensome in some areas as YAML. It's great if you can make TOML your markup language of choice. There are some Go tools by, gosh, I forget his name, but I remember his GitHub username, spf13. There are some tools from him, I think called Viper, that are used in Hugo: you can use any configuration language you want and Viper will translate it. That stuff I think is really neat, but at the end of the day it's still a markup language, it's still an awkward fit, and you're still mapping to Kubernetes resources and Dockerfiles. So it's an improvement, but a minor improvement on the road to the ultimate goal of reducing complexity for developers. Okay, I'm going to have to end things now. Great. Thanks, Paul, for a great presentation. That's all the time we have for questions. Thanks, everyone, for joining us today. The webinar recording and slides will be online later today. We're looking forward to seeing you at a future CNCF webinar. Have a great day. Thanks.