Okay, good afternoon everyone. Thank you for joining this session. We probably just have one session left after this one, so thank you for staying the whole day — hopefully you're not too tired. The program for today: we're going to take a look at building apps in Kubernetes, the unforgiving sea of containerization, and the life-saver tools. The idea behind this talk is really to look at the various tools available to developers when they start their journey of developing on Kubernetes: starting as a developer with a local environment, and going from building containers to developing and testing in Kubernetes. My name is Nick. I work as Head of DevRel at SpectroCloud. I've spent approximately the last six years working on Kubernetes, in the fields of both CNI, the network plugin interface, and CSI, the storage interface. And I'm joined today by Tyler. Hi everyone, I'm Tyler. I'm a Principal Software Engineer at SpectroCloud. I've been working on Kubernetes-native projects for a couple of different startups since 2019. So the agenda for today: we're going to start by digging into inner loop development. I don't know how many of you are familiar with this term, so we're going to define it a little. Then Tyler is going to walk through the tool sets for the different stages of the inner loop. Then we're going to look at an application we developed especially for this conference, at least to show off those tool sets. This application is awesome: it's a dad jokes generator built as five different microservices. Pretty overkill, but it serves one purpose, which is showing you the different lifecycle stages, the different tools, and how you can start developing on Kubernetes without having to know all of Kubernetes' complexity. Okay, so let's start by defining inner loop development.
So when you're building an application as a developer, most of the time you'll start by developing on your local laptop. At some point you'll commit your code into a Git repository and eventually trigger some pipeline — some automation managed either by you or by the enterprise. Strictly speaking, the inner loop ends when you create a PR against the upstream repository: the code is pushed upstream and an enterprise CI/CD pipeline is triggered. But the current tendency is to shift things left — who's familiar with this term, shifting left? Okay, so the idea of shifting left is to mirror what happens around the enterprise repository — the different processes involved there — in the local developer environment, so that you can test things early, end to end. So from writing the code, to building the container, testing the container, testing the entire application — which potentially means building your own pipeline — and also including things like security, compliance and the different policies, not only at the enterprise level but brought left into your developer environment. The more agility you want in your application lifecycle, the further left you'll shift, which means that as a developer you'll use more of the enterprise tools in your local development. Okay, so on to the tools. As we all know, Kubernetes and cloud native development are rapidly evolving. There are a million and one tools — there's probably a new one being published as I'm speaking — and the process of developing software changes along with them. So I'll show you a brief journey of that evolution and talk about some of the most useful tools, or at least the ones we want to highlight. Okay, so on a scale of one to ten: if your laptop broke right now, as a developer, how mad would you be?
Hopefully not that mad, but if your answer was closer to ten, you might want to consider some of these newer tools: remote containerized development environments. Products like GitHub Codespaces or Gitpod — those are the two that come to mind right now — basically provide portability, reusability and consistency for your development environment. I'm sure we're all familiar with "it works on my machine"; that's what we want to avoid. You can encapsulate all of the dependencies for your build inside a container, which is shippable and shareable. This speeds up developer onboarding: new hires don't have to spend forever installing requirements, they can just get started. It's also self-documenting. So something to consider if you were on the ten side of my first question. Then, when you're writing your code — maybe prototyping your app locally on your laptop — you might not be thinking about how you treat sensitive data and environment variables. But as you move along the spectrum from local dev to unit testing, to Docker Compose, and eventually to Kubernetes, you need to think about how you'll extract that information, refactor your app so it's not hard-coded, and inject it later. As part of that, you want to avoid committing anything dangerous or sensitive into your version control system, and pre-commit is a useful tool for that. You can put it in your dev container so developers have no choice but to use it. There are a million plugins, but one of them is Gitleaks: with Gitleaks enabled, if you try to commit, say, an API key, the commit gets automatically blocked. And then — obviously, or maybe not obviously — the twelve-factor app pattern is just something to always keep in mind when you're thinking about code.
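The pre-commit plus Gitleaks setup described above is driven by a small config file in the repo root. A minimal sketch, assuming a recent Gitleaks release (the `rev` shown is an assumption — pin whatever version is current for you):

```yaml
# .pre-commit-config.yaml — minimal sketch
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0          # assumption: pin the release you actually use
    hooks:
      - id: gitleaks      # scans staged changes for secrets on every commit
```

After cloning, each developer (or the dev container) runs `pre-commit install` once so the hook fires on every `git commit`.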
All right, so we wrote some code and now we want to build it and test it. Initially we might use Docker Compose: you write a Dockerfile for your individual application, and then you might pull in third-party integrations like a cache or a database service. At first it can be quick and easy to spin up all of that infrastructure locally using Docker Compose, but as your project matures — as you move from development toward production — you'll start thinking about deploying in Kubernetes. Make, for the last couple of solid decades, has been the de facto glue layer for that kind of orchestration: building your images, tagging your images, cutting releases. But I want to highlight Earthly and its Earthfile. They describe themselves as what happens if a Dockerfile and a Makefile had a baby — bringing portability, consistency and repeatability to your builds, in the same sense that dev containers bring those same things to your development environment. What that means is that your builds, and everything you want to run in CI, actually run inside a container, so there's never that issue of "well, the tests passed locally but they didn't pass in the pipeline." And it's a really familiar, comfortable, Dockerfile-esque syntax. So, something to look at. And then, all right, we're deploying in Kubernetes. Initially it might be raw manifests, but as things get more complex you inevitably start templating, or you might need to do environment-specific customization. There's a bit of a philosophical war about whether to use Helm or Kustomize, which are probably the two most popular options. I'll just say that if you're doing environment-specific changes — minor tweaks based on dev, stage or prod — Kustomize is the way to go.
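To make the Dockerfile-meets-Makefile idea concrete, here's a hedged sketch of what an Earthfile for a Go service like the joke server might look like. The target names, paths and base images are assumptions, not taken from the demo repo:

```
VERSION 0.7

FROM golang:1.21
WORKDIR /src

deps:
    # Cache the module download layer separately from the source
    COPY go.mod go.sum ./
    RUN go mod download

build:
    FROM +deps
    COPY . .
    RUN go build -o /out/joke-server ./cmd/joke-server
    SAVE ARTIFACT /out/joke-server

docker:
    # Final image only contains the compiled binary
    FROM gcr.io/distroless/base
    COPY +build/joke-server /joke-server
    ENTRYPOINT ["/joke-server"]
    SAVE IMAGE joke-server:latest
```

Running `earthly +docker` executes every step inside containers, which is what makes the local run and the CI run identical.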
But if you're doing more convoluted conditional logic, like enabling or disabling different components of your stack based on user preference, I would go with Helm. Then again, there's always a shiny new tool, so I have to mention CUE. CUE is a data validation language that can actually generate its own CUE definitions from your existing manifests, so you can port them to CUE easily. It reduces boilerplate — probably by 75% out of the box — and not only that, it'll validate that your manifests are correct. So, another thing to look at. Okay, so we've built our artifacts, we have our pipeline defined, and now we're going to be developing in Kubernetes. That might be local — if so, Minikube, kind and k3d are all popular, fast, easy solutions for getting a local Kubernetes cluster up and running. And if you're doing things perfectly and you've mocked all of your external dependencies, that might be all you need for your dev. But we all know that's a lofty goal, and it might be hard to reproduce prod, or a prod-like environment, locally to the extent you want. So then there's this exciting opportunity where you can basically have prod and local at the same time. That's what DevSpace, Skaffold and Tilt offer. The problem is that it's hard to reproduce a convoluted cloud native environment in your local Kubernetes cluster — you might want to reach out to a managed cache or a cloud message queue, and you can't always bring that down to your local cluster. What these tools offer — DevSpace and Skaffold in particular — is bidirectional file sync: the code in your IDE is automatically synchronized, on save, with the container running in Kubernetes. And it's pretty configurable, so you can do all sorts of stuff — you can hook it up to a debugger. What this does is bypass the image build — the pain of image build and push.
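To give a feel for the boilerplate reduction CUE brings, here's a hypothetical sketch (not from the demo repo): a schema constrains every Deployment, and each concrete manifest only states what differs.

```
// Hypothetical CUE sketch: schema plus one concrete instance
package kube

// Every entry under "deployment" must look like a Kubernetes Deployment
deployment: [Name=string]: {
	apiVersion: "apps/v1"
	kind:       "Deployment"
	metadata: name: Name        // name is derived from the map key
	spec: replicas: *1 | int    // defaults to 1 if unspecified
}

// Concrete manifest — `cue vet` / `cue eval` validates it against the schema
deployment: "joke-server": spec: replicas: 2
```

Anything that violates the schema (a typoed `kind`, a string where `replicas` should be an int) fails validation before it ever reaches the cluster.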
I think as devs we're all familiar with tapping our fingers on the keyboard waiting for the image build to go through, and that's no fun. So it massively improves developer productivity when you can avoid that. But the journey isn't over — you're in Kubernetes, great, and now you've got to think about security. Secret management is key. There are a million solutions; I want to call out SOPS, Vault and Trousseau. SOPS is client-side with a low barrier to entry. Vault and Trousseau are both server-side: Vault is super fully featured but quite complex, and Trousseau I would say is the most Kubernetes-native. I'm not going to go into too much detail because there's a lot to unpack, but SOPS is a good place to start and enables GitOps flows really quickly and simply. It essentially allows you to check sensitive information into Git, because it's encrypted: you can just use a PGP key locally to encrypt your manifests, put them in Git, and then hook that up with, say, Argo CD. Which is a quick and relatively simple way to get yourself into the GitOps pattern. And speaking of containers and Kubernetes, another thing you want is static linting. Checkov is a tool you can use to lint your Dockerfiles, HashiCorp Configuration Language, basically any static config, against hundreds of security best-practice policies that will tell you where you're falling short — if you don't have a user defined in your Dockerfile, for instance. And Dive is another tool you can use to inspect your images. It also has a beta CI feature that lets you fail builds if your images aren't space-efficient: if your images are bloated, too big, or not meeting certain usage thresholds, it'll fail the build. All right — so if we've taken all this time to optimize our images and make sure they're secure, then we want to make sure it's exactly those images that run in our cluster.
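The SOPS flow described above is only a few commands. A rough sketch — the key fingerprint and filenames are hypothetical:

```shell
# Encrypt a manifest in place with a local PGP key (fingerprint is made up)
sops --encrypt --pgp 85D77543B3D624B6 --in-place secret.yaml

# The file is now safe to commit: values are ciphertext, keys stay readable,
# so diffs and reviews still work
git add secret.yaml && git commit -m "Add encrypted secret"

# Decrypt locally (or in a deploy hook) when you need to apply it
sops --decrypt secret.yaml | kubectl apply -f -
```

Because only the values are encrypted, a GitOps tool like Argo CD (with a SOPS-aware plugin) can consume the same file straight from Git.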
You do that by signing them — we'll elaborate more later, but two tools for signing images are Cosign and Notary. That brings us to the concept of SBOMs and vulnerabilities. So, what's in an image? Well, it's whatever's in your code, and you can use tools like Syft and Tern to look at all the dependencies pulled in by your image and the code inside it, and create a record of what's there — useful for an audit trail and for compliance. Then you can plug those dependency lists into vulnerability scanning tools like Grype, Trivy and Clair, which compare the dependencies and their versions against known vulnerability lists, such as the GitHub Advisory Database or the NIST National Vulnerability Database. And they can consume each other's output: with Syft and Grype, Syft generates the SBOM, Grype reads the SBOM and says, all right, these are the issues with your image and here's how critical they are. Hopefully yours won't have too many critical issues. All right — lastly, we want to verify that the images we've taken all this care to build correctly are the images running in our cluster. You do that using a cluster admission controller; Kyverno and OPA are both valid options. Basically you create a cluster policy that will reject any image that doesn't match certain constraints. We'll see more of that in the demo. So now what are we going to do? Tyler mentioned a lot of different tools; we're going to see them live in a real-life application. As I said, it's a bit of a silly application — a dad jokes generator — and later we're going to turn it, with DevSpace, into a Chuck Norris joke generator. The idea is very simple. We've decoupled this application into multiple components. In terms of out-of-the-box services — meaning things you usually consume rather than develop —
we have NATS, which is — I believe it's a CNCF project — a message queuing system. It's used by the joke server, which provides an HTTP endpoint for the user to request a joke. As the user requests a joke, the joke server publishes a message on NATS. That message gets picked up by the joke worker, and the job of the joke worker is to call the OpenAI API — we're going to use the GPT-3 model; everyone probably knows ChatGPT at this stage, right? The joke is then stored both in a MongoDB database and in a Redis cache, for super-fast performance, because we need to generate thousands and thousands of jokes. This slide is just for your reference later if you want the details of the user workflow — I won't go through all of it. But at this stage, I think it's the moment to start the demo and go through the different stages. So here you have the entire repository of my application: a classic Go project, where I have a cmd folder with both the joke server and the joke worker. In our case, every folder equals one microservice, and then I've got an internal folder for libraries that are shared between the two components, the joke worker and the joke server. As you can see, this is all I have — I don't have anything related to NATS, MongoDB or Redis, because I'm consuming those services, and my job as a developer is to test my joke server and joke worker code against them. Now, how can I deploy this into Kubernetes? Typically you start with local Docker. So here I've got a Makefile with different targets; its role is to facilitate the container builds and the Docker Compose workflow — if I look quickly into it, I've got a macro here to build the whole application locally and deploy it to my local Docker daemon. We're not going to go into that, because what we're interested in is Kubernetes.
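The demo's Makefile isn't shown line by line, but the kind of targets described — build each service's image, bring the stack up with Compose — might look roughly like this hedged sketch (service names from the talk, everything else assumed):

```make
# Hypothetical sketch of the Makefile targets described in the demo
SERVICES := joke-server joke-worker

.PHONY: build up down $(SERVICES)

build: $(SERVICES)

# One docker build per microservice folder under cmd/
$(SERVICES):
	docker build -t $@:latest -f cmd/$@/Dockerfile .

# Bring up the whole stack (app + NATS, MongoDB, Redis) locally
up: build
	docker compose up -d

down:
	docker compose down
```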
But first, just to give you some context, I'm going to show you a quick Dockerfile. It's not ideal — typically you would do a multi-stage build to make the image slimmer — but it will give us the opportunity to see the SBOM results and all the vulnerabilities you get this way, compared to a distroless, multi-stage container. The only thing I'm really doing here is building the code; and same thing for the joke worker, I'm just building the code inside the Docker container. So I've got two Dockerfiles in those two folders, and then this is where I start deploying things. I've got a deploy folder, and I'm going to walk you through the DevSpace configuration, because that's the tool we're using today. It really looks like a Docker Compose-style YAML file — everything is in the YAML — and the idea of DevSpace is to provide wrappers around your code so that you can effectively and efficiently deploy it into Kubernetes. We start here with the syntax for images: you have an images section where you just specify the image, the Dockerfile and the build context — very easy. With this, DevSpace can build the image. Then, in the deployments section, this is where we specify that I want MongoDB. It's deployed using Helm — as Tyler was explaining, you can use Helm, you can use Kustomize, and you can also use static manifests to deploy components with DevSpace. In our case we have MongoDB, the Redis operator to deploy a Redis database or cache, and NATS. All of that is taken, as you can see, from the official out-of-the-box Helm repositories — it's just a copy-paste of those URLs. And then we have the joke server and the joke worker. This is the custom part — this is the code I'm building.
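For contrast with the single-stage Dockerfile in the demo, here's a hedged sketch of the multi-stage, distroless variant Nick mentions as best practice (binary name and paths assumed):

```dockerfile
# Hypothetical multi-stage variant of the demo's single-stage Dockerfile
FROM golang:1.21 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Static binary so it can run on a distroless base image
RUN CGO_ENABLED=0 go build -o /out/joke-worker ./cmd/joke-worker

# Final stage: no shell, no package manager — far fewer CVEs for Grype to find
FROM gcr.io/distroless/static
COPY --from=build /out/joke-worker /joke-worker
ENTRYPOINT ["/joke-worker"]
```

The build toolchain and its dependencies stay in the discarded first stage, which is exactly what shrinks the SBOM and the vulnerability list shown later in the demo.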
What DevSpace also does is integrate with another Loft feature called the component chart. It basically allows you to wrap a container image directly in a generic Helm chart without having to build the whole chart yourself. That saves you time as a developer — you don't necessarily want to deal with Helm charts, right? That's typically the role of the platform team or the DevOps team. Here, just by specifying this repo, DevSpace wraps the image in a Helm chart while still letting you customize it and add environment variables or extra parameters to your application. In our case, for example, the joke server needs the container image definition plus the NATS URL as an environment variable. Same for the joke worker, which needs extra environment variables: the NATS URL again, the Mongo URL, the Redis URL, as well as the OpenAI API key. Which brings me to SOPS. Tyler mentioned SOPS as a tool to encrypt your sensitive information on the client side. What you want to do is use SOPS to encrypt, then commit the encrypted file to your Git repository — and at the same time install pre-commit with Gitleaks, so that if you commit the unencrypted file by mistake, you get an error. So for example, I'm creating a Kubernetes ConfigMap that defines the OpenAI API key, unencrypted. If I try to commit this — git add the file, then commit — you can see it's detected by Gitleaks, preventing me from committing that unencrypted file into my repository. It always starts from my own laptop, so — remember what I was saying about shifting left — it lets me detect that mistake before committing upstream, or rather before opening a PR against the upstream repository.
DevSpace also provides hooks that let you run sops -d to decrypt that file before deploying everything into Kubernetes — because of course you need to decrypt it to make it a readable resource in Kubernetes. You also have the ability to use custom resources here, because all the operators we use — the Redis operator, the MongoDB operator — need static manifests to define the databases and options like database size and permissions. Typically this is boilerplate you can copy-paste from GitHub repositories, with some extra configuration. So what I'm going to do now: I've got my empty Kubernetes cluster there with a dev namespace, and I want to run DevSpace in dev mode, which gives us the nice features Tyler mentioned before — file synchronization, and avoiding rebuilding the image every time you change code while still getting the new binary running in your container. So I'm running this — oops, sorry, I need to go to the right directory. If we take a look at it, it should start deploying all the components I've mentioned, starting with MongoDB and Redis as well as NATS, and our own code, the joke worker and joke server. You can see some red portions here — dependencies on some services — but Kubernetes is based on reconciliation, so once all the services are up, the whole application should run without any errors. Okay, so now it's running, and I can actually start generating some jokes. As I was saying — if I go into Services… there. My application, the joke server, is running on port 8080. So let's generate maybe just 50 jokes — okay, fingers crossed, is it working? Hopefully… yeah.
Okay, so now it's generating jokes from the GPT-3 model, and the next step will be to modify the code live so that our application generates not dad jokes but Chuck Norris jokes, which may be more fun — I'm not sure. Okay, there's one last thing I didn't mention — sorry, I forgot. The last piece of DevSpace: we've seen that you can define the different images and deployments, and you can also control the order in which the different components are deployed. For that you have the pipelines section here, which defines a series of macros, and the one we're interested in is create_deployments, which calls out the different deployment items by name in a specific order. In my particular case, to be able to deploy the custom resources — all the custom resources corresponding to Redis, MongoDB and NATS — I first need to deploy the operators, because the role of an operator in Kubernetes is to extend the API and make those external components first-class citizens. If I deploy the custom resources before the operators, while the custom resource definitions aren't yet present in the cluster, that will fail. So DevSpace gives you this extra granularity to really control how you deploy the application. Okay, so now on to showcasing the power of DevSpace, or where it really shines. Like Nick said, we're going to show you live code reload and how it bypasses what I mentioned earlier — the pain of image build and push, which is typical of a standard dev lifecycle. I'll scroll down to the sync config. You can see here that inside the joke worker we've enabled this restart helper — well, actually, sorry, before I go into that I'm just going to change the code, and then we'll dig into the config a little. So inside the internal folder we've got our joke implementation. All right: no more dad jokes.
Tell me a Chuck Norris joke. And just really quick, before I do that, I'll show you inside this pod — this is where the joke worker is. I can shell in, cat the internal joke file, and grep for that same line. We can see that in this container it still says dad joke, as we expected. Oh yeah — and if we look at the processes, as I was pointing out, we have that restart helper enabled. PID 1 here is the restart helper, and what it does is make it possible to reboot the actual process we care about — the joke worker, which you can see is PID 254. I'll dig into that in a second as I show you the config. So, back to our DevSpace file. As I said, we enabled the restart helper, and it will execute a certain command every time we upload a file. The onUpload hook is where we define that: we "restart the container," but we're not really restarting the container — we're restarting the specific process in that container that's executing our code. So we run go build to rebuild the joke worker binary, and then execute it. There's more in the sync config: to unpack it a little, we can say exactly which paths we want to synchronize — you might not want to synchronize every path in your repo — and lay out a mapping between the local file system and the remote container. We're only copying in our dependencies, go.mod, and the source code we need, like that internal folder. So, I've made my change — I want to see Chuck Norris jokes. Okay, let's ask for some jokes — let's see, 15. Hopefully these are a little funnier than the dad jokes; I'll let you be the judge, though. While that's running, we'll take a quick look at the logs of the joke worker.
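The sync-plus-restart-helper setup being described might look roughly like this in devspace.yaml. This is a hedged sketch only — the image name and paths are assumptions, and the exact schema varies between DevSpace versions:

```yaml
# Hypothetical sketch (DevSpace v6-style schema; fields vary by version)
dev:
  joke-worker:
    imageSelector: v55/joke-worker       # assumed image name
    sync:
      # local path : path inside the running container
      - path: ./internal:/app/internal
        onUpload:
          restartContainer: true         # reboot the process via the injected
                                         # restart helper, not the whole pod
      - path: ./go.mod:/app/go.mod
```

On every save, DevSpace syncs the changed files into the container and the restart helper (PID 1) re-runs the build-and-exec command, so the new binary runs with no image build or push.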
It says "killed, restarting container" — that's the restart helper I was showing you. But if you look, in reality the pod has been up this whole time, the same amount of time as all the other pods, so it didn't actually get killed. We can shell back in and look at the processes: if you remember, the joke worker was PID 254, but now it's PID 349 — the DevSpace restart helper rebooted it after a recompile with our new code, and there was no image build. I can also grep for that same line, and there you go: Chuck Norris. So that's pretty cool — at least I think it is. It really speeds up your workflow as a developer, and it's just generally a pleasure to use, a fun tool. Okay, so now we have our images, we know how to deploy them, and we know how to develop quickly and easily using DevSpace. Next we want to show you SBOM generation and vulnerability scanning, using Syft and Grype. If I show you the list of images we have locally, you can see we have this joke worker image, and now I want to know what's inside it. I previously executed this command to save the SBOM to the file system — this is the machine-readable JSON format, and it's very verbose. Since I've done that already, I'll show you the human-readable format instead, the table: we'll load the image, and within a second here we'll see all the dependencies pulled in by our image, their versions, and what they are.
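The Syft-to-Grype flow just described boils down to three commands. A sketch — the image name and tag are assumptions:

```shell
# Generate the SBOM in machine-readable JSON and save it to disk
syft v55/joke-worker:latest -o json > sbom.json

# Human-readable table of every dependency in the image
syft v55/joke-worker:latest -o table

# Feed the saved SBOM to Grype, which matches each dependency/version pair
# against known vulnerability databases
grype sbom:./sbom.json
```

Grype's output lists each finding with its severity and, where available, the fixed version to upgrade to.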
So here you go. If you wanted to see the same thing in that more verbose JSON format, it's in the file — and that's what Grype needs to match all of these dependencies against the vulnerability database. Now that we have the SBOM saved to disk, I can use Grype: it reads in the SBOM and tells us if we have any issues. Because Nick didn't use a multi-stage build in his Dockerfile — best practice would be to first compile your code in a build stage and then copy the binary into a final stage based on, say, a distroless image, as slim as possible and lacking vulnerabilities — you can see we've got all of these different vulnerabilities. We can look them up easily online to see exactly what each issue is, and if there's a fixed version it'll be in this column in the middle, so you know exactly what to update to remediate. Okay, so we've done our best, our image is as secure as it can be, and now we want to sign it so we can ensure that it's the signed image running inside our cluster. What we've already done is deploy Kyverno — I can show you the Kyverno admission controller, which was deployed with a Helm install and is ready to help protect our cluster. What we'll do is create a custom resource called a ClusterPolicy, which indicates how we want to enforce which images need to be signed, and in which context. I'll show you the cluster policy. What it means is: under image references, we're going to match any images coming from the v55 Docker repository, which is Nick's repository. For any container using an image from there, inside a pod or a deployment in the cluster, we're going to use this attestor to attest that the image has been signed properly. We do that using Sigstore, and it uses Google OIDC under the hood, through Nick's Gmail account.
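A Kyverno ClusterPolicy of the kind being described might look like this hedged sketch. The repository path, policy name and identity are assumptions (the talk only says the images come from Nick's "v55" repo and that his Gmail account is the OIDC identity):

```yaml
# Hypothetical sketch of the verifyImages ClusterPolicy described above
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-joke-images
spec:
  validationFailureAction: Enforce      # reject, don't just audit
  rules:
    - name: require-signed-images
      match:
        any:
          - resources:
              kinds: ["Pod"]
      verifyImages:
        - imageReferences:
            - "docker.io/v55/*"                 # assumed repository path
          attestors:
            - entries:
                - keyless:
                    subject: "nick@example.com" # assumed signer identity
                    issuer: "https://accounts.google.com"
```

Any pod whose image matches the reference but lacks a valid Sigstore signature from that identity is rejected at admission time.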
So basically what we've said is: make sure any images that are supposed to come from this repo are signed. All right, let's sign an image. If you remember, I was talking about Cosign. We have that image locally, and we're now going to put our stamp on it saying that it's good. And this is keyless — there's also… I don't know where that opened up, let me try that again. Okay, so you could optionally sign using something like a PGP key, but there's this nice keyless option, which just uses Google OIDC, that we're relying on here. It opened up my browser, and now I'm authenticating as Nick through Gmail. It says, okay, we're going to create a signature and upload it to what's called the Rekor transparency log, which we can use later to validate. So now the signature is available and our image is signed. We can verify that using cosign verify, which tells us: yes, what we just did worked — and we get this big JSON blob that I'm not going to unpack, but essentially it means that the image has our stamp on it as expected, and now we can use that cluster policy to block anything else. So I created the cluster policy, as you saw a few minutes ago, and now I'll try to run an image tagged with the "unsigned" tag — a different tag from the one I just created the signature for — and it gets blocked. I'll just show you in the Kyverno admission controller logs: it failed, "manifest unknown." So that's an example of how we can protect our clusters from man-in-the-middle attacks. All right, so now, just to tidy up, I'll delete the cluster policy and then show you how this whole time everything was running through DevSpace: I'll quit DevSpace and run devspace purge, and just like that, everything we had up and running in Kubernetes gets torn down and cleaned up. Okay, so I guess that concludes our demo. We did everything live and it worked, so it's a happy day.
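The keyless sign-and-verify flow demoed here is two commands. A sketch assuming Cosign 2.x flag syntax, with the image name and identity as assumptions:

```shell
# Keyless sign: opens a browser for OIDC auth (e.g. a Google account) and
# uploads the signature to the Rekor transparency log
cosign sign docker.io/v55/joke-worker:latest

# Verify against the expected identity and OIDC issuer; prints the JSON blob
# of verified signature claims on success
cosign verify \
  --certificate-identity nick@example.com \
  --certificate-oidc-issuer https://accounts.google.com \
  docker.io/v55/joke-worker:latest
```

No long-lived private key is stored anywhere — the short-lived signing certificate is bound to the OIDC identity, and the Rekor entry is what makes the signature auditable later.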
We had a video just in case, but we didn't have to use it, so I'm very happy about that. Just a few key takeaways for today. We've seen that developing cloud native applications is not necessarily an easy task, but we now have tools that we as developers can use to abstract away some of the Kubernetes complexity — making things as easy as Docker Compose was back in the day. Because I do believe that for full adoption of Kubernetes, developers need to use Kubernetes as the target when developing their applications; otherwise it will never reach full adoption. We always say Kubernetes has to be boring so that adoption crosses the chasm — but without a full developer experience, with the right tool sets, we won't reach that state. And I think today we do have the tools to make it a reality. So if you want to bring these kinds of tools into your developer environments, the first thing is: don't overdo it. Don't do what we did — if you need to build a dad jokes generator, maybe don't use microservices, right? Start small, in increments, and build things that are really useful for your business. Try to shift left as much as possible, especially things like security, compliance, storage and networking: bring all of that into your local cluster as early as possible, so that by the time you reach the enterprise CI/CD cycle, with all its pipelines, almost 80% of the job is already done. And finally, when you're developing with those remote containers, make everything — as Tyler mentioned — shareable, portable, shippable, because sharing is caring. I added some cat pictures there, because a presentation without cat pictures is not really a cool presentation. Now, we're both from Spectro Cloud, and we haven't really said anything about Spectro Cloud this whole time, which you might be wondering about. Basically, Spectro Cloud offers a day-zero-through-day-two Kubernetes management platform.
Once you're at prod, maintaining that cluster might be more of a challenge than you anticipated. What we can do is orchestrate Kubernetes environments in your private data center, on public cloud, or at the edge. And it's not only the infrastructure, the OS and Kubernetes — it's also the application stack. We have what we call the Palette Developer Experience, which allows you to model your app in what we call an application profile. What you see here is a screenshot where we've taken that whole app and modeled it within the Palette Developer Experience as these different tiers. The tiers can share configuration information with one another, just like in DevSpace, or like we showed you in our demo — but this is repeatable: take that application profile and stick it into whatever cluster you want, in any of the environments I just mentioned. And we have a free credit program if you want to check this out — feel free to follow the link. Just a quick review of all the tools we used today; a bunch of them are up there. We didn't explicitly call out K9s, but that's maybe one of my favorites — it's the Kubernetes terminal UI we were using to navigate Kubernetes. It's a lightsaber. If you use it too much you might forget how to use kubectl, though, so be careful. But yeah, that's a brief overview of everything we did. Check out the link to see what Palette has to offer. And the final thing is Cluster API — that's kind of how we do this, the orchestration engine I was mentioning. Cluster API provides continuous reconciliation of Kubernetes: every two minutes it detects what you have, matches it against the desired state, and uses that to maintain your Kubernetes environment. We'd love for you to check that out.