Hello, thank you all for coming. We're going to talk about going from the desktop to production: building applications everywhere. Hi, my name is Matt, Matt Farina, and this is Ray. We work at SUSE on some fun tools that you might all want to use. The first thing I want to talk about here is that development happens on the desktop. With the advent of so many different kinds of technologies, you see that developers are able to get on their workstations and code away. Same thing with DevOps and other technologies like this. This slide shows some results from the most recent Stack Overflow Developer Survey. You'll see I zoomed in on professional developers alongside all respondents. You can see IDEs are everywhere, text editors are everywhere. I'm still shocked at the number of people who use VS Code, but you can see there's a long list. If you add up the percentages, it's well more than 100% because people use multiple tools. But you can see the desktop is where a lot of stuff happens. And with the new computers coming out and the amount of power in them, what you used to not be able to do in your local desktop environment because you didn't have the resources, you get more of that now with the latest systems, whether you're on a Mac or on Intel and AMD chips with the latest processors. And yes, there are online editors. This is a snapshot of what you get on GitHub: if you're logged in, you can just hit the period key and get right into an online editor. And there are a number of these out there where you can code online. You can even use an iPad with a keyboard and do certain things there. But the reality is that most of the work still happens on desktop machines. And as there's more power in the desktop, it's easier and cheaper than ever to do that.
In addition to editors, the survey also asks what other tools people are using. And right at the top, you see Docker. And this is just Docker itself, never mind everyone getting into containers with containerd and nerdctl, or using Podman. There's a huge swath of people working with containers now. And you also see Kubernetes in the top 10. And a lot of the people who are using Kubernetes don't interact directly with it, because they've got some kind of platform built on top of it where they don't need to think about Kubernetes. And so you see this containerization mixed into your desktop environment, but also needing to be in the cloud. It's everywhere. So developers are using containers. And this shouldn't be a surprise if I go back to 2016, over seven years ago, when Docker's desktop apps showed up. This is the earliest screenshot of their website I could find, using the Wayback Machine to go back in time. Back in the summer of 2016, Docker shipped a native Mac app and a native Windows app, and developers were able to start doing things right in their native environments at that point in time. On Linux you already had it; you were using the terminal, things like this. But this brought containers to desktop environments that didn't natively have Linux. So it's been around for a while. But it's not just containers that have been around for a while. Desktop applications that let you run Kubernetes have been around for a while, too. And here I picked one that was around back in the day and isn't anymore: Kube Solo. Kube Solo was a Mac desktop application that brought Kubernetes to your desktop. And people used it for a while, and it was well loved.
In fact, they even had a counterpart: where Kube Solo gave you a single Kubernetes cluster, there was one that let you have more than one Kubernetes cluster on your desktop, so you could test out multi-cluster architectures. And that was popular for a while, but it ended up being deprecated, and they sent you to other tools to use. And people like desktop applications a lot. But we entered this phase in Kubernetes desktop history where we went to terminal applications, and people worked in terminals a whole lot. And we'll explore that in a minute, but we kind of backed off the desktop applications that developers like. And if you remember from a few slides ago, you had VS Code at the top, right? I'm sorry to say it wasn't Vim or Emacs or Pico or something like that. People like those desktop editors. And Visual Studio was right behind it, and you get things like Sublime Text and Atom. Those visual editors are the kinds of desktop applications people like. And yet we went into the terminal-based apps. You had things like Minikube. Minikube is a Kubernetes sub-project, and it has been actively developed for a long time. And the primary way it works is by using Docker under the hood; it expects a Docker socket, it works with a Docker socket. It can use other tools: on Mac, it can use HyperKit to create a virtual machine. But that gets onto shaky ground these days, because HyperKit hasn't had a commit in well over a year, and that's the interface on macOS to create those virtual machines. And on Windows, if you're going to use it there, you have to have some outside tooling, like VirtualBox, in order to get that virtual machine up.
Or you have to be using Windows Pro and its built-in VM capabilities in order for it to work. And so there are all these outside expectations just to get up and running, which is where I need to sidebar on Windows for a minute. And I'm curious: does anybody in this room use Windows as a development platform? I don't see a lot of hands here, but the reality is a lot of developers use Windows. It's still the majority, and you can see that here. And they're not even using the Windows Subsystem for Linux when they do that. So when we go out and we want to create local development environments for developers, we can't just think about Linux desktops or macOS environments. You have to think about Windows as well, especially in companies where maybe you have a mixed environment. Some companies let people choose between Windows, Mac, and Linux as their desktop. And how do they all develop together? And then there are tools like kind. People in here may be using kind as their development environment, but it was actually designed for testing Kubernetes itself, and it requires Docker again. And if you're using Docker Desktop these days, you know about the licensing and things like that; it's not free and open source software. And of course there's K3s and k3d. Again, more terminal tools, and again requiring a Docker socket in order to work. To have that Docker socket on Mac or Windows, you have to have a Linux VM set up, you've got to have all those things. So what about a Kubernetes desktop app? What tools are out there that get out of that terminal world? Most people probably know Docker Desktop; I hope you all know it. This is an older screenshot from their blog, and one of the things you're gonna see is that it's very basic in what you can do for Kubernetes.
It gives you Kubernetes, usually around the latest version, and you run with their selected version of it. And you get a few settings around that, but it does provide a Kubernetes environment where you can work. Of course, it also has licensing considerations if you're at a company. But if I think about Kubernetes, you're probably not using the latest version. If I look at the public cloud providers, the newest version they offer is often around the oldest community-supported version. Companies are usually behind by a few minor versions of Kubernetes. This is a screenshot from the Kubernetes blog a few years ago that highlights the version distribution back then. And at the time, you can see two minor versions back was peak usage, and this isn't unusual. And so you take a Kubernetes desktop application pinned to the latest Kubernetes, while you're running an older version, and you may run into differences. In fact, I've actually hit bugs that showed up in the Kubernetes version I had in production that I didn't experience in my desktop environment. And so everything worked great in dev, and then you moved it to production and something broke, because of a bug or an API addition that had happened that I didn't know about. And so this is where you can run into some issues: you want to match your development environment, and your QA environment, with production. And a lot of this is what led us to create Rancher Desktop. And this is one of those things I started a few years ago. The idea behind Rancher Desktop is to have a Kubernetes-native experience: if you're gonna work with Kubernetes, develop for Kubernetes, do container work, things like that. In fact, it started with Kubernetes first.
And so it's got container management capabilities we're gonna touch on in a minute, but it started with the Kubernetes part more than anything else. And here you can see on the screen that you can pick your version of Kubernetes; you can choose pretty much any version of K3s that's out there and switch between them. And so you can match your development environment with production, or wherever you're gonna deploy, because creating that native experience is one of the primary things I wanted as a developer, so I could get past those bumps in the road. Now, the way it works: you've got a desktop application that runs on Mac, Linux, and Windows. On Mac and Linux it uses the Lima project, which uses QEMU under the hood, although they're starting to work with the native Mac virtualization framework. And you've got either containerd or dockerd, with dockerd provided by the Moby project, and then you've got K3s layered on top, where K3s provides whichever Kubernetes version you want to work with. And then on Windows, instead of wrapping a VM itself, it uses the Windows Subsystem for Linux, where Windows manages the VM for you, and you can get in there, tinker around, things like that. But it also means you don't have to have a version of Windows with extra virtual machine management on it. You can use it on Windows Home, in the basic environments, anywhere. And then we bring in K3s, and we provide a handful of tools and other things that I'm gonna show off here in just a moment. So let's do some demo stuff, because I like live demos, and let's see how this goes. We have Rancher Desktop here running, and there's a handful of things you can do. You can see containers, you can do port forwarding, you can manage images. You can take snapshots of your environment to restore from. Snapshots are useful if you wanna go make a bunch of changes and then see what's going on.
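As a rough sketch of scripting the same thing outside the UI: Rancher Desktop ships a CLI called `rdctl`, and commands roughly like these drive the version and engine selection shown on screen (flag names are from memory of the rdctl docs, so check `rdctl set --help` on your install):

```shell
# Inspect the current settings, including the active Kubernetes version
rdctl list-settings

# Pin the cluster to the K3s version your production environment runs
rdctl set --kubernetes-version 1.26.9

# Switch the container engine between containerd and dockerd (moby)
rdctl set --container-engine docker
```

Changing the version this way triggers the same backend restart you see when switching versions in the preferences UI.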
But let's have a little bit of fun here and tie things together. So I've got a demo code base here, and I'm gonna open it in VS Code. And let's see, can you see that? One of the things it prompts me with is whether I wanna reopen it in a container, and that's because of something called Dev Containers in VS Code. And Dev Containers let you work inside of a containerized environment. And this is one of those things I find really helpful, because I work on teams where I'm jumping between Windows, Mac, and Linux, and you wanna have the same development tools, the same development environment. You can containerize that and work inside a container. You can supply those tools without muddying up your system. If you've got a specific version of Node.js or Python you wanna pin to, it lets you couple those things together and then share that environment. So let's go ahead and fire this up and see what happens. This is a Go application, so I'm gonna run it. And it detects, hey, it's opened up a socket: do I wanna open this in a browser? And I can. Hello, world. It just works and ties in with the local system. And here it uses a Docker socket, because I set Rancher Desktop up to use dockerd provided by Moby with the Docker CLI, so it's the same tools you're used to. Now I can come back here, make a change, save it, run it again, and refresh, and it changes right here. So I get a local environment where I can develop, I can see things using my native desktop tools, and tie these things together, but it's a containerized environment running in there, and everything's tied together. And this is useful, because if I take the same environment over to a Windows machine and pull it up, I'll have the same tool set, same everything, right? And this is powered by containers.
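As a minimal sketch of what configures that behavior (file layout per the Dev Containers spec; the project name, port, and extension list here are illustrative, not from the demo repo), a `.devcontainer/devcontainer.json` might look like:

```json
{
  "name": "go-demo",
  "build": { "dockerfile": "Dockerfile" },
  "forwardPorts": [8080],
  "customizations": {
    "vscode": {
      "extensions": ["golang.go"]
    }
  }
}
```

VS Code sees this file in the repo, offers the "Reopen in Container" prompt, builds the referenced Dockerfile against whatever Docker socket is available (here, the one Rancher Desktop provides), and runs the editor backend inside that container.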
Now, with Dev Containers, I can specify the container. In this case, I specified it with a Dockerfile; you can also point at a container image you wanna share. And for the Dockerfile, I am using the SLE BCI, the base container images that we use at SUSE. And I just tell it to install a few tools, and then everything else just works. And so this is a slim, small base container image, and it can create that environment. Now let's take this over to Kubernetes. So I have a Helm chart here, and I've got that Kubernetes cluster running in the background through Rancher Desktop, and I can install this. First, I'm actually gonna go build this into a container image, sorry. So I'm gonna go ahead and build a container image here; this takes a base container image and builds on it, and it's done quickly because I pre-downloaded the images, and so it built my application into an image. And so I've got a Helm chart here for the application I've been working on, right? And I haven't pushed this container image anywhere, but I wanna work with it in the local environment. So I'm gonna go ahead and install this, and I'm telling it to use the latest image tag, which is how I just built it. So this is gonna install the chart with Helm as a demo, and it's going to be over here. Now I wanna go ahead and work with that. So what I can do is come to port forwarding, and I see my applications here. The application I just started is here, and I can forward it. It'll give me a port, or I can select one myself, and then I can take that port and work with this. Now, since this is all in the local environment, if I go make changes and I kill the pod, when it restarts it'll actually grab the latest container image.
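A sketch of that inner loop in commands (the image name, chart path, release name, and chart value names here are made up for illustration; your chart's values will differ):

```shell
# Build the app image against the local Docker socket Rancher Desktop exposes
docker build -t demo-app:latest .

# Install the chart, pointing it at the locally built image. The pull policy
# matters because this image was never pushed to any registry -- it only
# exists inside the local container runtime
helm install demo ./chart \
  --set image.repository=demo-app \
  --set image.tag=latest \
  --set image.pullPolicy=IfNotPresent

# After a rebuild, delete the pod so the Deployment recreates it and
# picks up the fresh local image
kubectl delete pod -l app=demo
```

This only works because the cluster and the image builds share the same container runtime; against a remote cluster you'd have to push the image somewhere first.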
So I can do my development, I can kill the pod, I can refresh, and everything's right there, all tied together in my local environment, and this works this way on Windows, Mac, and Linux. Now, I don't just have to do this; I can go ahead and do some other things that are really useful. Like, one of the things with Kubernetes is I always want to test: how does my application handle a Kubernetes upgrade? When I change versions, what's going on? With Rancher Desktop, I'm running one version; let's go to another version and see what happens with my application. So if I change to a newer version of Kubernetes, it'll upgrade in place, because Kubernetes can upgrade; you can't go backwards that way, though. So let me go ahead and do this. Now, sorry, this is warning me that VS Code can't connect to the Docker daemon right now, and that's because this is restarting. So it shuts down Kubernetes, and it goes ahead and upgrades it. Now, we don't ship every version of Kubernetes with Rancher Desktop. What it'll do is, if it doesn't have a version cached locally, it'll download it on the fly. It'll be pretty quick here, because I didn't want to do this over conference Wi-Fi, so I pre-cached it, but any of those versions are available because it downloads them. So here, it's starting up the virtual machine, it's going to bring up the new version of Kubernetes, and those same applications are going to be right there on top of it, so you can see what that environment is like. And so it's starting the backend up, and that's where it'll bring up something like dockerd. Now, while it's doing this, I'll actually show you around some of the settings here. So you've got things like Kubernetes, and you can also set your container engine, choosing between containerd and dockerd provided by Moby. dockerd lets you use things that require the Docker socket, like VS Code and those other things, and the Docker CLI.
If you're using containerd, you use nerdctl as the CLI to pull off the same kinds of things. You can also change things like, on Mac and Linux, your resources: you can configure the memory and CPU count you need based on your system. On Windows, that's actually handled for you. And you can do things like specify your volumes, networking, and other characteristics here. So the out-of-the-box settings work well, and of course you can come here and actually see the containers that you have running and act on them. You can see exited ones, you can see the containers that are here, and you can stop them, remove them, do stuff like that. So if you don't wanna jump into the Docker CLI, some of these tools are there for you. So everything is back up now, and we can come back to port forwarding. Now, when you restart, you lose the port forwards. We'll go ahead and bring that up again, and I can go see what my application looks like on the other side, and everything came up. And this is one of those things where you can test those upgrades yourself, upgrading Kubernetes underneath, whether it's a minor version or even a patch release, to see how your environment handles that setup. And one of the other things I touched on here was changing the application and then seeing it in Kubernetes. So let's go take a look at that. Here we'll just retry, and it can connect back to that container environment. We'll set this back, since we didn't like the change, and then I'll come build it, and it builds, and then we'll come in here and go to the containers. Actually, you know what? I'll do it right in Kubernetes. So we'll delete this pod, and the pod deleted, and we'll see a new pod came right up.
Oh, yeah, so I build it, and the pod came up, and I can come over here to that same port forward and just refresh it, because when it restarted, the image was available locally. Now, if I'm happy with everything, I can use docker push, all of those things, to push it to a registry, but this lets me have that local development environment around my application and containers, and there's a lot of neat ways you can use that. Now, one of the things that I touched on here was container images: what do you use for your base container image, and where does that come from? Because I think base container images make a huge difference. I mean, some of them are riddled with hundreds or more CVEs; there are problems with them from a security aspect. In this case, the ones I want to touch on here are what you'll see: I've got a Go image, and another BCI, a base container image, called micro. And I think your base container image makes a difference in your experience, whether you're creating a secure environment, which tools you're gonna have, those kinds of things. So let me come back to the presentation and dig a little bit into base container images. A base container image, and I grabbed this from the Docker glossary, is an image that has no parent image. It's the foundation for what you're going to use for your images. It's built from scratch, so to speak. And there are various types of base container images you can use. There's scratch, which has nothing in it, and that's great for some applications: if you're building something in Go and you don't need another tool in there, that's a great way to go. There's distroless, which has kind of a minimal operating system userland, not tied to a full distro, with certain things in it; you'll find that even a lot of the distroless ones have security vulnerabilities and things like that. And then there are ones based on an operating system minus the kernel, and you'll see a lot of those: Debian ones, Ubuntu ones.
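For a Go binary, the scratch option just mentioned looks roughly like this as a multi-stage Dockerfile (the Go image tag and paths are illustrative):

```dockerfile
# Build stage: any Go toolchain image works; this tag is illustrative
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
# Build a static binary so it needs no libc in the final image
RUN CGO_ENABLED=0 go build -o /out/app .

# Final stage: scratch has nothing in it -- no shell, no package manager,
# no CA certificates unless you copy them in yourself
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

The trade-off is exactly what the talk describes: there's nothing to scan for CVEs, but also nothing to help you debug inside the running container.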
So we have something that we use called the SLE BCI, and it's a terrible acronym soup: it's the SUSE Linux Enterprise Base Container Images. They're the ones we use as the foundation for things, and they're based on SLES 15, 15.5 now. And what's really unique about these is the build system that's used to build them. It's designed to deal with Common Criteria EAL4+, and if you deal with stuff in Europe, you know that that is a demanding requirement system as far as build security, keeping CVEs out, that kind of thing. It's the only Linux distro that has this capability for secure supply chain, and that meant things like SLSA Level 3 just worked. In fact, under the old SLSA system, Level 4 also just worked for anything that was build-system related, because it has to go so far beyond that. And CVEs are fixed like that, because you need that in these kinds of environments. And so I run these images through Trivy all the time and see no CVEs show up, even though Trivy knows how to look at them. And so we have a bunch of base container images, and this shows a little bit about their size and what's in them, because we've got different ones. I'll use things like the BusyBox one or micro, because they take out things I don't need; I don't need things like a shell and some of the other stuff. But these are the base container images we use as the foundation for what we're building, and anybody can use them. You don't have to be a SUSE customer; they're open for anybody on the internet to use as their foundation, and they're secure.
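If you want to check that claim yourself, a scan along these lines works (the image paths are the BCI paths as I recall them from registry.suse.com; adjust the tags to whatever is current):

```shell
# Scan the SLE BCI base image for known CVEs; Trivy understands
# SLE-based images and their package metadata
trivy image registry.suse.com/bci/bci-base:15.5

# Scan the smaller variants mentioned in the talk
trivy image registry.suse.com/bci/bci-micro:15.5
trivy image registry.suse.com/bci/bci-busybox:15.5
```

The same commands work against any other base image you're considering, which makes it easy to compare CVE counts before picking a foundation.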
And we don't just have base images; we also have images for the different programming languages, and updated ones come out anytime a CVE is fixed in any of these things, including the base images. And so we can build our Node.js stuff, or use it as a runtime, and we can build in Rust and Go, which we do, and our foundation is always CVE-free and built with a secure supply chain, and it's the same thing you all can do, too. And you can get this stuff over at registry.suse.com. But I think that base image is part of the developer experience, and having one that is secure by default and handles these things by default is a really useful tool, whether for a development environment, for build, or for anything else. And so with that, I'm gonna pass it over to Ray. Hello everyone, I'm here to talk about platform as a service with Epinio, but before we talk about Epinio, I want to set the stage. Kubernetes is hard. For a developer to deploy their source code onto a Kubernetes cluster, they have to first build a container image using something like docker build or Podman. Then they have to create the Kubernetes manifests for their application, and they can deploy a pod for their container. They might want to scale their pods, or control the rate of upgrades or scaling, so they might have to add a Deployment. They might want to reach their application from outside of the cluster, so they wanna add a Service, and also add an Ingress, so you get that URL that routes to the Service, which routes to the pod, which routes to the container. Or if your application is persistent, you need persistent resources: with a CSI driver or plugin, you need a StorageClass, then you have to create the PVC, and that will create the PersistentVolume. The whole point of this is: Kubernetes is hard.
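To make that point concrete, here's roughly the pile of YAML a developer ends up hand-writing for that one container (names, labels, and ports here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 2
  selector:
    matchLabels: { app: demo }
  template:
    metadata:
      labels: { app: demo }
    spec:
      containers:
        - name: demo
          image: demo-app:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector: { app: demo }
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port: { number: 80 }
```

And that's before any PersistentVolumeClaims, ConfigMaps, Secrets, or TLS configuration, each of which adds more objects to keep in sync.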
It's so hard that the Linux Foundation actually has a certification called the Certified Kubernetes Application Developer. If you were at this morning's keynote, one of the panelists actually said the key to success for their developers' experience was that they did not have to know about the infrastructure; they just knew that it worked. And here are a few tweets about PaaS on Kubernetes from Kelsey Hightower. One is: I'm convinced that most people just want a PaaS; the only requirement is that they have to build it themselves. The other one is: Kubernetes is a platform for building platforms; it's a better place to start, not the endgame. So we have this open source project called Epinio, and the goal of Epinio is to simplify application development. We want to make it easier for developers to push their source code to a Kubernetes cluster and have it just work. So we say: from code to URL in one push. We'll go into the details of Epinio in the next few slides. The reason it works is that Epinio adds the needed abstractions and tools that actually allow developers to use Kubernetes as a PaaS. So, a little bit more about Epinio: it's an application platform on Kubernetes. It's a single-step push, and we'll go into detail on that single-step push. It also adds in self-service provisioning if you want additional services. There's a CLI, which you install on your developers' machines, and that CLI is used to help deploy the application. There's also an Epinio server that you deploy on Kubernetes, and there's also a UI to help manage the applications that you deploy onto the cluster. With Epinio, we say it's batteries included: it has all the necessary tools required to do a single-step push from your source code to the Kubernetes cluster. There are some prereqs. First is cert-manager, which is used for TLS certs for the various ingresses.
Helm: we use Helm to install Epinio on the Kubernetes cluster. The first tool that comes with Epinio is Kubed. Kubed is a config syncer; it's used to sync ConfigMaps and Secrets across namespaces, and also across clusters. Epinio uses Kubed to sync secrets for the container registry that Epinio installs, which we'll talk about in a little bit. MinIO: Epinio needs an object store, because Epinio will take your application, create a tarball or an archive of it, and store it in your Kubernetes cluster. You can use MinIO, or any S3-compatible object store you want. So if you are using a project called Longhorn for storage, you could put an S3 gateway in front of that as well. Next, Epinio actually comes with a container registry to store the images that Epinio creates. So after the staging phase, which we'll talk about, it creates the container image and stores it within that internal registry in the cluster. Next is Dex, so you can actually integrate with an OpenID Connect provider. And Epinio leverages buildpacks. Buildpacks are how it takes source code, finds the dependencies your source code requires, and builds container images. So all a developer needs to do is run epinio push and give it an app name. But behind that epinio push, there are actually lots of steps that Epinio does. The first step is that Epinio creates an archive of your source code: tar, zip, there are other formats as well. It sends it to the Epinio server running on Kubernetes, which copies it and stores it into S3 or MinIO. Then it goes through a staging phase, which is actually three phases: first it fetches the archive from MinIO or S3, then it unpacks it, then it sends it off to Paketo Buildpacks. The buildpacks find the dependencies of your source code and create that container image.
Epinio will then push that container image to its internal registry, and from there, Epinio will deploy the necessary Kubernetes resources. So it will create the Deployment and, within that, the pod that runs your container; it will create the Service; it'll create the Ingress; and Epinio will actually give you a URL where you can access your application. So here's a little demo of Epinio. It's running. Okay, so I'm just showing off a sample React application, going through the JS files here. On the right side, a little obfuscated, is Epinio running on a Kubernetes cluster. Then here we run epinio push. So this is the process where it will create an archive, a tarball, of your source code, and then we'll watch the staging process, everything that is behind epinio push. It'll create the necessary resources, send it to Paketo Buildpacks, and create that container image. Then it'll store it in the container registry. Then it will deploy the required Kubernetes resources to run your application on the cluster. And at the end of this demo, which will take a few minutes, it will give you a simple URL where you can access your application. Then, as this is staging, you can go into the Epinio UI as well, and you can manipulate the application as much as you want. You can scale it up or down, and you can add additional services here if you want. We have three minutes left, so we're gonna let this demo run, but in the meantime, are there any questions for us that we could answer while the demo is running? Yeah, I can also say everything we've shown so far is free and open source software. You can just get it, you can contribute to it; it's all up on GitHub. You have a question? Yes, all the slides will be up on the schedule. Sorry about that. Yeah, the demo's still going, still running the buildpack, the staging process.
So we're still generating that container image. Well, at the end of this, we'll get a nice URL where we can take a look at our application. For Epinio, Paketo Buildpacks is under the covers, so they support a set number of languages for building those container images; we support what Paketo Buildpacks supports. Are there any other questions? Yeah, that's a great question. A lot of this comes down to the resources in your local environment. Some of the Kubernetes distributions will take four, six, eight gigs of memory even running on a local machine, and a lot of developers don't have that. I mean, if you've got a machine with 16 gigs of RAM, and Kubernetes by itself is taking up six gigs, and then you've got Slack and a browser open, you don't have much left for your own applications or environments. And the Kubernetes API is gonna be the same; the controllers will be the same. It's some of the things that usually don't affect the application itself that are different. If you really get into some of those things, then you might even be worried about the host operating system and kernel modules, and if you need something like that, you're gonna have to go a step beyond with your environment. But that's why K3s: it fits in a tiny environment, but it still has the exact same controllers and other things that affect your app. Any other questions? What I'm showing off here: you can take a look at your application and how it's deployed using the Epinio CLI. You can take a look at logs for your application through Epinio, and you can also check out the logs from the staging process. And the last step is deleting the application from the cluster, and a check through the Epinio CLI to make sure that the application is deleted. I wanna thank everyone very much. It looks like we are at the end of our session.
And if you have any other questions, feel free to stop by the SUSE booth and ask about any of these technologies. Somebody there can answer more questions, or maybe even show you a demo, that kind of thing, and go further. Thank you.