So welcome, my friends, to the "Start Your Journey to Cloud Native with SUSE Manager" session. I am Stacey Miller. I'm based in the U.S., out of Austin, Texas. And with me is Miguel Pérez Colino, who is the director of SUSE Manager, out of Madrid. So we're gonna cover today, real briefly: I'm gonna tell you what SUSE Manager is and why you should be using it, and how you can containerize your apps. And then Miguel will take you on SUMA's own journey to containerization, starting with the proxy and ending with the server. So what is SUSE Manager and why should you care? That's a question I get asked often as the product marketing manager. SUSE Manager is really true open source infrastructure management. It's a solution that manages not just SUSE's own SLES, SUSE Linux Enterprise Server, but more than 16 Linux distributions, all from a single console. And it doesn't matter where those distributions are. They can be in the cloud, they can be on premises, they can be in multi-clouds, they can be a combination. You can see your entire environment from a single console with SUSE Manager. And we kind of bucket the reasons why you need SUSE Manager into three different buckets. The first one being security, which we know is a big pain point for many companies. The cost of security is enormous, and 60% of cyber attacks occur because a system is unpatched. So SUSE Manager handles security by providing CVE patches for all those distributions and allowing you to schedule those CVE patches through the internal calendaring tool. You can set up SCAP profiles for your different systems, and then you can monitor those systems using OpenSCAP to make sure that your systems have the configuration you want them to have. We also integrate really well with Prometheus and Grafana for real-time monitoring and dashboarding.
And then if you're in a situation where you really cannot afford downtime due to servers being down, we integrate with live patching, which can give you really up to a year of patching without taking down your servers. The next reason is really simplicity. We know that patching is really the bane of every administrator's existence. They don't like to patch. Manual patching is dangerous; it's error-prone. So we use automation with Salt, and we also allow you to import your Ansible Playbooks into SUSE Manager. You can schedule those patches using the internal scheduling. And then we have something within SUSE Manager called Content Lifecycle Management, which means that you don't ever have to put a patch into production without testing it first. And then finally, we talk about scalability. You guys are scaling your systems up, you're scaling them out. SUSE Manager works with all the SLES variations, from SLE Micro to IBM Z systems. And with the hub architecture, SUSE Manager allows you to manage — as we say in marketing — up to a million instances, and we do have a customer who's managing 90,000 endpoints in production today. So really, the scalability problem is solved with SUSE Manager. And I talked a little bit about content staging, but content staging is really important because you don't wanna grab a patch and put it directly into production. SUSE Manager allows you to take that patch, put it into your dev environment, do some testing on it, take it through QA, make sure your applications and workloads are working, and then move it into production. It's a very easy, very nice way of making sure that the patches you're applying will not inadvertently bring down your systems for some unexpected reason. And we talked about managing anywhere with SUSE Manager. In fact, we like to say SUSE Manager manages any Linux, anywhere, at any scale.
That means if you have workloads on premises, in the cloud, if you have hybrid clouds, multi-clouds, private clouds, SUSE Manager works the same way everywhere. If SUSE Manager can see the workload, SUSE Manager can manage the workload. And finally, again, I wanna say that SUSE Manager does not just manage SLES. It manages SLE Micro, it manages all the RHEL variations from 7 to 9 and all the RHEL derivatives — Oracle Linux, CentOS, Liberty Linux, AlmaLinux, Rocky Linux. If you're a Red Hat shop and you know that your developers are using Debian, you can manage the Debian servers and the RHEL servers with SUSE Manager. You don't even need to have SLES. So that, I think, brings me to the end of the marketing section. I'm gonna turn it over to Miguel now. Miguel's gonna talk about how you can containerize your apps and what we're doing within the SUSE Manager team on our journey to containerization. — Thank you, Stacey. Okay, so you may be wondering, what does this have to do with cloud native? It has a lot to do with cloud native, but first we have to put it in context, okay? We talked about SUSE Manager; let me introduce you to Uyuni. Uyuni is the project — you know, in truly open source fashion, we have an open source project with open source governance that is open to contributions. It's open to external inputs, patches, fixes, et cetera. You see Astra Linux there? This was not even a customer. It was someone who was using Astra Linux and needed something to manage it. This was an open source contribution — it went into the project, and then from the project it went into the product. So we're really open to contributions. That's one of the reasons we are managing so many distributions, okay? Let me remind you of this sentence by Linus Torvalds. The Linus philosophy is "laugh in the face of danger" — oops, wrong one. "Do it yourself," okay? So in this case it's do it yourself.
How do we modernize this project, Uyuni? How do we modernize SUSE Manager? We're doing it ourselves, and we looked inside our own company, SUSE: what do we have to modernize with? And we realized we have plenty of tools, okay? From the SLE Micro operating system, with Podman to run containers, to K3s to run small, very lightweight Kubernetes clusters, to RKE2 that runs large Kubernetes clusters, to Rancher to manage all these clusters. So it's like, okay, let's do it ourselves. Normally you say you eat your own dog food; we rather think that we are drinking our own champagne. So what is the strategy here? First, you're running applications on a traditional Linux host. Let's say this is, for example, Debian 11, or SUSE Linux Enterprise 12 or 15. And you have the SUMA agent that keeps the system up to date, that keeps the configuration the way you want it, and you're running the application on top of that. What would be the first step, okay? This is something many people wonder: what do I do now with this application? Our suggestion is: first, put the application in a container. A big, chunky container, okay? Running on your traditional operating system. You can still SSH into the operating system and see the container, you can still use systemd to start and stop the container, you can connect all the folders the container needs, and you can connect the ports that need to go into it. So you're using it in a mixed way. It's not completely containerized, and it's not, of course, cloud native, but it's almost there. You start learning how to build your container images, you start learning about Podman, you start learning all the basics that you're going to need to be completely cloud native.
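That first step — one big container managed by systemd on a traditional host — can be sketched with a unit file like the following. This is a minimal illustration, not the actual SUSE Manager setup; the unit name, image name, volume paths, and ports are all placeholders.

```ini
# /etc/systemd/system/myapp-container.service — hypothetical example
[Unit]
Description=My application in one big container
After=network-online.target
Wants=network-online.target

[Service]
# Run the (placeholder) image with Podman; folders and ports
# are connected exactly as on the old bare host.
ExecStartPre=-/usr/bin/podman pull registry.example.com/myapp:latest
ExecStart=/usr/bin/podman run --rm --name myapp \
    -v /srv/myapp/data:/var/lib/myapp \
    -p 8080:8080 \
    registry.example.com/myapp:latest
ExecStop=/usr/bin/podman stop myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With this in place you still `systemctl start` and SSH in as before — the mixed mode Miguel describes. Podman can also generate such unit files for an existing container with `podman generate systemd`.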
In the second stage, of course, you can have your SUSE Manager agent keep the system up to date and update whatever is in there. Next stage: let's change the operating system, which carries a huge overhead, to reduce that overhead to the minimum. Think about reducing the 3-10% overhead in every workload that you have on cloud — that is a lot of money in savings. So right now we have a micro OS in SUSE called SLE Micro. There's the openSUSE version of it, of course, if you want to go with the project. Both of them are completely compatible; you can go with one or the other and you're good. So you have this micro OS that is completely designed to run containers, and you start putting the containers there. You reduce the footprint, you reduce the attack surface, and you make it a lot easier to update. Of course, you can keep using SUSE Manager there. Then you add Kubernetes. How do you add Kubernetes? K3s is the best way to start with it. You can deploy K3s from SUSE Manager, maintain K3s from SUSE Manager, maintain the operating system from SUSE Manager, and start moving those containers into cloud native containers — that is, dividing the application into chunks, and those chunks can run nicely on Kubernetes. And then you go all in with cloud native: Kubernetes on a micro OS, with all the cloud native containers running on top of Kubernetes. This is the path that we are following, and we are more or less here. Okay, so what was our journey? We're sharing it with you so you can say, okay, let's follow it. Let's engage in the upstream community with Uyuni and let's follow it. So first, where do we come from? Does this apply to me? Many customers, many people are going to think: does this apply to me? Maybe you think this application was ready to be cloud native from the start.
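The "then you add Kubernetes" step really is small with K3s. A rough sketch of a single-node start, using the official K3s quick-start script (run as root on a test machine; the deployment image name is just a placeholder, not anything from the talk):

```shell
# Install single-node K3s via the official quick-start script.
curl -sfL https://get.k3s.io | sh -

# K3s bundles kubectl; check the node came up.
sudo k3s kubectl get nodes

# Start moving a container onto the cluster, e.g. as a deployment
# (image name is illustrative only).
sudo k3s kubectl create deployment myapp \
    --image=registry.example.com/myapp:latest
```

From there, the same containers you were running under systemd can be migrated chunk by chunk into cloud native workloads.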
Let me tell you, this application started in 2008. This was the upstream for Satellite 5, and for SUSE Manager before 3.2: Spacewalk. And that project was already shut down in 2020. And code from it is still there, okay? So it's not a new application. This is brownfield transformation, okay? So what is the difference between Uyuni and Spacewalk? At some point in time, SUSE wanted to put Salt — which is an automation mechanism, an automation tool — together with the patching tool. And this is where Uyuni appeared, okay? We added Salt to Spacewalk, we merged them very nicely, and this is what Uyuni is, okay? What is Salt? Salt is an automation tool that is as easy to use as Ansible, if I may, but it has many of the good features of Puppet — like, for example, an agent with certificates, and the scalability you get from having an agent, you know? Some people think, oh, I don't want to have agents; but then when you want to scale, it's more difficult. Other people say, no, I have an agent, I have my inventory completely up to date every single day, and if I need to run something, everything runs in a more scalable way, okay? Salt is built with reactors — it has a way of writing reactors — and Python 3, very common, okay? So again, why does it matter? This is the typical application: Java, Tomcat, a PostgreSQL database, Python, JavaScript. This is a typical application you can find. I've been working a lot with finance, in banks, and the applications there, of course, are a lot larger than this one, but if you X-ray them, these are the components inside. So we are doing something that you can do on your own with these same tools, okay? So, the challenge. First, what are our goals? Our goal, of course, is to put these tools into containers and then split things into components where it makes sense. What is under the covers in Uyuni and SUSE Manager? We have these components.
You see Apache, Cobbler, the Salt master, scripts, Tomcat, Taskomatic, PostgreSQL. So, Apache — right now we still need it, okay? But if we think about a Kubernetes environment, we will not need Apache, because it's just there to redirect connections to Tomcat and to manage the certificates, and Kubernetes can do that. So at some point we may get rid of Apache when we go Kubernetes, but meanwhile we keep it for the environments that do not have Kubernetes. We have Cobbler, written in Python, which manages operating system images and PXE booting, okay? We have the Salt master, which is the component all the systems are going to connect to — also written in Python. Then Tomcat, with the Java application. Taskomatic is a kind of cron written in Java, with an API into which you can inject tasks, and it will run and schedule them. And of course, PostgreSQL. So at some point in time, several of these components are going to be their own container. But the starting point is putting everything into one big container — the kind Kubernetes normally complains about — and we're going to run that container directly on the operating system, okay? So, our goals. Make it easy for users that are not used to containers: most of our customers, most of our users, SSH into the system and start doing things, so we have to make it easy for them, okay? Be independent from the host operating system: right now Uyuni only runs on openSUSE — or SLES, of course, you know — but we want customers who are running Debian... I mean, if you're running Debian on your laptop, we want you to be able to run this, okay? If you're running, I don't know, Rocky Linux, we want you to be able to run this. We want to make it more modular — this is like phase two. And we want easier dependency management. When you have containers, all the dependencies are in the container, so you can manage the dependencies independently of the operating system.
We want to make it easier to maintain: you download a new image, you stop the previous image, start the next image, and you're good to go. To roll back, do it the other way around — unless you have made changes to the database, in which case you have to take that into account and have some script to take care of it. We want faster innovation, to modernize everything in the application and align with the DevOps strategies that we have in the company, okay? So: know your weaknesses. First, what is unfriendly to containers? There's this open source project that I know quite well, called Windup. It's a tool that scans Java code and finds things that are not container friendly. For example, if you're storing user sessions in a file on the operating system, that is not container friendly. If you have fixed IPs or fixed URLs put in your code, it will detect and flag them. So, some of our pain points: files shared by components; calls to external tools not done properly, like going through a file, you know? We need a fully qualified domain name for the tool to run correctly, and we need a specific time zone for the tool to run correctly. So we are fixing these, and we are starting the journey with this. First step, as I said: create a single container. We are there. In SUSE we have the SLES operating system, SUSE Linux Enterprise Server, and the openSUSE Leap operating system. For both of them you can get what we call the base container image — a container image you can use as the base of your own Dockerfiles. There's one base container image that comes with systemd inside the container image. This is perfect for this case, okay? You put everything in there, systemd starts all the services within the container, and you don't have to change things.
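As an illustration of that "everything in one container with systemd" approach, a Containerfile might look roughly like this. It builds on `bci-init`, the SUSE base container image variant that ships systemd; the packages and services here are placeholders matching the component list from the talk, not the real Uyuni image recipe.

```dockerfile
# Hypothetical sketch — not the actual Uyuni server image.
# bci-init is the SUSE base container image that includes systemd.
FROM registry.suse.com/bci/bci-init:15.5

# Install the application stack inside the one big container
# (package names are illustrative).
RUN zypper --non-interactive install \
        apache2 tomcat postgresql-server && \
    zypper clean --all

# Let systemd (PID 1 in bci-init) bring the services up at start,
# just as it would on a full host.
RUN systemctl enable apache2 tomcat postgresql

# Expose the ports the host used to serve directly.
EXPOSE 80 443 8080
```

Running such an image with `podman run -d` gives you systemd as PID 1 starting all services — which is exactly why no application changes are needed at this stage.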
Then, whenever you have components that you can take out of it, there's another container image — a minimal container image — that you can use to put a component outside and start connecting to it. So we have all the tools, as you see, operating-system-wise and Kubernetes-wise, to be able to do this. Next step: extracting components. It's very likely that our first component to extract will be the database, PostgreSQL. Why? Because there are already a lot of images with PostgreSQL that we can consume and do not have to maintain. We just consume them and put the data in them — of course, we have to do our duties and check that everything is okay with that image, that we know the sources and so on, but we will not have to maintain it. So that's something we want to add. And then we keep containerizing things. Of course, we need to know when to stop, okay? If you split everything into too many small pieces, then you start losing all the benefits of putting things into containers. So you have to be very careful and check: does this make sense, before going there? So we started with a small project, and we did it with the proxy. The proxy for Uyuni and SUSE Manager is a Squid proxy with some add-ons to manage it remotely, and it can connect to SUSE Manager to be managed, okay? This was small, very tiny, so we put it in containers, and now you have the proxy completely supported in container form, ready to be used by you. So we started first with a small project to learn the basics, and then we moved to larger projects. Again: the SUSE Manager proxy can be used in containers, okay? So, as I said, when you cut it into pieces: first, find an easy one. Then, is the piece already container-ready? This one was not; we had to adapt it. Are you going to use only configuration, or are you going to add other things to it?
Because if you need to inject some files into it, it's going to be more difficult. How many changes does it require? What value does it bring, okay? Can I scale with this? In this container image, we are taking the steps to be able to scale it out with Kubernetes. We're not there yet, but we're working on it. One of the beauties of Kubernetes is that if you have a container image that can do something, you can create new instances of the workload, okay? Can we do better modularization — can we build the modules in a better way, and will it be easier to maintain this way? If the answer to that one is no, then stop, okay? We don't want to add this. What are we using? Initially, we're using Podman, of course. Podman comes with openSUSE Leap, comes with openSUSE Tumbleweed, comes with SUSE Linux Enterprise, and comes with almost any distribution nowadays. Podman is pervasive, it's everywhere, so you can use it very easily. Then, of course, we tried RKE2 — we did a prototype. We had to use the NGINX ingress controller instead of Apache, and we realized it was a lot easier in RKE2 to manage all the certificates than what we're doing right now with Apache, okay? And then we tried the lighter version with K3s, and we used Traefik to redirect the network traffic to the container, and it worked really well. It was very lightweight, it added very little overhead, and the application ran very nicely. So right now we have a very large container — the kind that supposedly isn't handled nicely by Kubernetes — and when we run it with RKE2 and K3s, it runs, and it runs quite well, okay? However, it cannot scale out; that's the drawback. We still need to improve that container to make it horizontally scalable, okay? So again, one step at a time: base container image with systemd — you have it right there; please, if you want to try it, take a picture of it. Mount volumes.
Of course, check where you are storing data, so you mount the volumes correctly. Tune the setup, start trying things. SSL: if you can let others do it, it's a lot better — especially in Kubernetes, that part is super well implemented. You can do the termination at the ingress level, which means the ingress component of Kubernetes takes care of the certificates, and then you have encrypted traffic between the ingress layer and the container. And we are working on the initial Helm chart — still not available, and the image is still not available. So stay tuned: go to the Uyuni project from time to time and check, because whenever we have a public image — and it's going to take weeks — it's going to be there for you to try. And test, test, test, and then test again, okay? We also thought: what about the people who are already using Uyuni or SUSE Manager? Do we do the migration in place? How do we do it? The idea is that no, we're not going to do in-place migration. You're going to have a new machine, and we're going to provide the tools to run uyuniadm migrate. It will get the information from the first machine and inject it into the new machine. You power down the first one, you change the DNS, and then all the systems will start connecting there. Something went wrong? You power down this VM and power up the previous VM. All good, okay? So we are going to include that. How are we doing it? We learned from Kubernetes. Kubernetes is a wonderful tool: you have kubectl to connect to Kubernetes and do things with containers, and kubeadm to deploy it. We have created uyuniadm to deploy, manage, migrate and upgrade Uyuni and SUSE Manager, and we created uyunictl — this is in completely early stages, so if you want to hack on it, now is a very good moment, okay? — to connect to the API, get information, get yourself into the container, and run scripts there, okay?
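TLS termination at the ingress level, as described above, can be sketched in standard Kubernetes terms like this. It's a generic illustration with placeholder names (`myapp-ingress`, `myapp-tls`, `myapp-service`), not the upcoming Uyuni Helm chart:

```yaml
# Hypothetical Ingress doing TLS termination in front of the app.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  tls:
    - hosts:
        - myapp.example.com
      # The certificate lives in a Secret; the ingress controller
      # (Traefik on K3s, NGINX on RKE2) terminates TLS here.
      secretName: myapp-tls
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 8080
```

The application container behind the Service then no longer needs Apache just for certificate handling — exactly the simplification mentioned for the Kubernetes case.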
So these are the utility tools — and of course documentation, which we need to update. This is it. Questions, come on. Go ahead, shoot. — What did you do with Postgres in your implementation? — So the question is: what did we do with Postgres? Right now Postgres is part of the big, large container. I know it sounds like, oh my God, really? Yes, really, okay? It's about taking no risks: putting everything into one container so people understand this is exactly the same as they had before, in a container, and then we will start splitting it out. We already started the initial tests to do that, but first we want to finish phase one, which is having one large container that runs nicely in production, and then we'll go to phase two. Thanks for the question, by the way. More questions? If you don't ask questions, I will do a demo. I'm warning you. Go. — Okay, so the question is: what do we say to a user that doesn't want to use Kubernetes? I mean, you have to be ready to use Kubernetes, okay? It's like boxing: you have to train first and then you get in the ring; if not, you can end up badly injured, okay? So you have to train before getting in the ring, and for users who don't want to run Kubernetes, we can still manage containers, okay? You can run a container on the operating system, and for that we have two things. One, right now, today: SLE Micro, SUSE Linux Enterprise Micro. You can run containers on top of it with systemd — you start the container, you manage it as it is; if you need to patch it, you create a new image and swap the image. Pretty straightforward. In the future, we're going with ALP, the Adaptable Linux Platform. This is going to go even deeper with the containerization of the operating system. We have the product manager here, so if I say anything wrong, please correct me.
So we will try to make it as containerized as possible, so you can handle the whole software lifecycle with containers, okay? That will be among the first deliverables of ALP. So you can run containers on the operating system in a way that is container native — not cloud native, without Kubernetes — and very lightweight. For people who are just starting to adopt containers, or who want something extremely lightweight for very heavy containers, that could be the option, okay? Of course, we want you to be cloud native and make the most of being able to scale out and to standardize everything with Kubernetes. The interfaces of Kubernetes are super good, very standard, and they help you use it almost the same way anywhere. So this is really a good benefit, but we understand that some people or some workloads are not ready for that. So we offer this stage in which you can run containers on top of the operating system, and you go with it, okay? However, the plan is: please become cloud native. It has so many benefits that it's very interesting. — Moving on to the Kubernetes connection: does that also mean that you want to run it with Rancher? — Well, you can go Rancher, or you can use any other tool. Of course, if you ask me, I will tell you: try Rancher. I'm not telling you to use it, I'm telling you to try it. Okay. — Is there an overlap between what you've been talking about and Rancher? — Rancher is a Kubernetes cluster manager, correct? So if you're going to put a workload on top of Kubernetes, I suggest you use Rancher, okay? Underneath, you can use K3s, you can use RKE2 that we provide, but you can use any other Kubernetes provider — for example AKS, the Azure Kubernetes Service, or EKS, the Elastic Kubernetes Service from AWS, or GKE from Google — and then connect with Rancher and be able to go multi-cloud very easily. But that's your choice.
I mean to say, we want you to have choice. We want you to be able to choose what you want to use to run your workloads, and we try to offer the best ones. Did I reply to your question? Okay, so that's it, yes. More questions? I'm warning you, I will do a demo if you don't ask questions. Yes. — [Audience question about going the VM route instead of containers.] — Okay, so the question is: why not go the VM way instead of the container way? That is a tricky question, okay? It's a difficult one. Many customers that are on-prem are already using VMs. Many customers that are on cloud — well, not all, because there are also bare-metal instances, but many — are using VMs already, okay? So I think we are already there in the virtualization part. If you have a component that is not easy to containerize — and let me go back to this slide — of course, you can put it in a VM. You have KubeVirt, which can run VMs on top of Kubernetes using all the Kubernetes native tooling. So using a VM could work, okay? But the thing is, this keeps you back on your way to containers. If you put everything in VMs, of course you will have some benefits, but then you are not starting to use container images. You're not starting to build the images and create your pipelines — in, for example, Jenkins or whatever you're using — to build those images in pipelines. You're not starting to automate all of that, and then you're getting stuck in the previous paradigm. You're not moving. — So you're telling me it is a way to instruct customers to go that way... — Well, I think customers normally instruct us.
So in some way, I think customers will get here on their own. I don't think we have to instruct them. They're working with it every day, and that provides a point of view that is very valuable. And therefore we listen to customers, and we arrived at this point by listening to customers saying: hey, if you go into a VM, this is going to take even longer to get it out of there. So going the one-big-container way could really unlock your situation. More questions? Okay — I was promising it, so let me show you this, okay? So, this is my laptop, of course, openSUSE Leap 15.5. I'm SSHing into another VM, you know? You know this Virtual Machine Manager? It is so cool, you know? You can create VMs, you don't have to install anything weird on your laptop, it comes with the operating system. So nice, so clean. Okay, so we have this machine, okay? You see here we have uyuniadm and uyunictl. These come from a repository called uyuni-tools. Oh my God — uyuni-tools, okay? This is a pull request. This is uyuni-tools, okay? So, you download this, and it has a bash script to run. You need to have Podman there, and it will compile everything for you and create the uyuniadm and uyunictl binaries. It's very, very easy, very straightforward. Let me show you. So, I have the code here; it will download the image, it will compile everything, it will create the binaries in the bin folder. Okay, maybe I'm asking too much of the laptop. Okay — ls bin. You see? Created right now, fresh out of the oven, okay? So, you copy these two to your operating system. I have them here; I copied them to /usr/bin to make them easier to use. So now, if you run this command line on any system that has Podman — but we recommend openSUSE Leap 15.5 — it will install and configure all the ports, configure all the mount points for your container, and create the systemd unit and so on.
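For reference, the configuration file fed to uyuniadm is a YAML document. The field names below are illustrative guesses based only on what is described in the demo (a database password, a certificate password, an image URL) — check the uyuni-tools repository for the real schema before using it:

```yaml
# Hypothetical sketch of an uyuniadm config — field names are
# assumptions; see the uyuni-tools repository for the actual format.
db:
  password: "changeme"      # password for the PostgreSQL database
cert:
  password: "changeme"      # password for the generated certificate
image: registry.example.org/uyuni/server:latest   # placeholder image URL
```

The point of the file is simply that one short YAML plus one uyuniadm invocation replaces the whole manual install.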
What is inside this uyuniadm YAML? It's a very long YAML file, okay? So, I created this file: you add your password for the database, you add your password for the certificate that's going to be generated on the image. You can take a picture of this — this URL is valid, you can use it, okay? So if you want to give Uyuni a go, the latest version — it's not completely published, but it's available — go and try it, okay? So, running this takes me to this situation in which I can run, as any admin, systemctl status uyuni-server.service, and it will say: oh, everything is running, you know? You have this container running. If I run podman ps, it shows me the container image that is running and, of course, all the ports that have been redirected — taking into account that we're doing TFTP for PXE booting, that we need ports 4505 and 4506 for Salt, and 443, 80 and 8080 for proxying and so on. So if I go here, this is what I get, okay? The first time you log in, it will say: please create an organization and give me your user and password. After that it will be immediate: you go admin, and the password I provided, and here you are. This is Uyuni running in a container on top of openSUSE Leap 15.5, okay? I really encourage you to go to the Uyuni project, try it, contribute to it — and if you want to manage tens of thousands of Linux systems, I really recommend you check out SUSE Manager. And I think that's it for me. One more point, about the one million clients: we tried with fake clients hammering the servers, and we managed to get to one million without breaking it — so that's the theoretical limit we have tested. With real production servers, with more than one server and more than one proxy, we are managing, as of today, 90,000 devices in production.
So, one last question, going once, going twice, gone, thank you very much for coming, and it's a pleasure being here. Thank you.