All right, thank you and welcome. This talk is more on the application level and is about how to manage all the systems, all the cloud and especially container infrastructure we have. A short slide about me: my name is Klaus Kämpf, I'm a product owner working for SUSE. You will find me on GitHub or Twitter if you want to contact me; that's probably the easiest way.

So let's look at the problem statement. In today's world, you have a huge variety of systems: physical systems, your cloud infrastructure, your VMware, your containers, and so on. And you want to keep them up to date. You want to deliver the right software at the right time to these systems, so you don't want development packages on a production system, and maybe vice versa. You also want to detect drift: this is the set of packages, and this is the configuration that should be on the system. Is it really? Did anyone change it? And of course, moving to the cloud and container world, you want to build your VM images from the right package repositories that you defined: OK, these are the packages that should be used to build the VM image or to build the container. And once you have built the container, you upload it into a registry, and maybe you pick it up with Kubernetes to actually run it. You still need to know what is going on here. You want to stay in control.

And there is a solution for all of this, and this is a new project called Uyuni. We have a website where you can learn more about it, a Twitter account, and we are on IRC. And here is the link to the announcement list subscription address: just send an empty mail to this address and you will be added to the announcement list.

So you may say: Uyuni, what's that? Actually, Salar de Uyuni is the world's largest salt flat. So what has it all to do with software and Salt? Let's have a look at the origins of this software. This is actually a friendly fork of Red Hat's Spacewalk. Spacewalk is the open source upstream project for Red Hat Satellite 5, which should be well known to those of you who administer systems. And from this, SUSE also derived their competing product called SUSE Manager. However, Satellite 5 is end of life; it's in maintenance mode. Red Hat switched to Satellite 6 with a completely new code base. The same is true for the upstream Spacewalk: Spacewalk is in maintenance mode. We at SUSE contributed a lot to upstream Spacewalk, and we still have a lot of open pull requests there, but nobody is left in the Spacewalk project to actually review and merge them. Currently, the official end of life for Spacewalk is in 2022. This, of course, raised questions, not only in the Spacewalk community but also for our customers: what is the future here? What about an upstream project which is truly open source? So we talked to the Spacewalk people and tried to actually take over Spacewalk and give it a new life. This did not work out, so we forked it, gave it a new name and a new symbol. And from this fork, the next official product, called SUSE Manager 4, will be derived sometime next year.

So what is SUSE Manager? Why do we have SUSE Manager? SUSE Manager is SUSE's answer to Satellite and an opinionated fork of Spacewalk. We focus on simplicity of installation; Spacewalk is known for a bit of a complex setup. We moved to Salt for configuration management. We added container and Kubernetes integration. And we also improved the web UI.
A couple of years ago, we contributed the switch to Twitter Bootstrap for Spacewalk to give it a fresher look. And now we are in the process of moving more and more pages from an old Java Spark library to a more modern React-based web UI. A lot of these changes we tried to upstream, but they were not merged into Spacewalk, and they were rotting in our closed repository. We didn't want this to happen; we want to be open.

Let's take a brief look at the architecture, which is mostly derived from Spacewalk. Let's start from the left. You have physical systems, virtual systems, you have cloud, you have Kubernetes, and maybe VMware clusters. If you look at today's production systems at customer sites, VMware is everywhere; that's just a given. What we did now is to use SaltStack's Salt for configuration management. Spacewalk actually had a small Python-based agent, but it was kind of limited: it could obtain basic hardware and software profiles, it could manage packages, and it could manage config files. But it was not easy to extend, and it would have still put the burden on us. So we took the decision: everything we do on the client in terms of software or config file management is configuration management. And this is a problem which is solved today; we have Puppet, Chef, Ansible, CFEngine, and whatnot. So we looked for the tool which best matches our needs, and we came to SaltStack and integrated it.

There is a Postgres database which acts as a CMDB. It has all the information; it knows everything about this infrastructure. We have a user interface: a web UI, an API, and a command line interface. And of course, we have a connection to the outside internet, to package repositories. In terms of package management, Uyuni knows about the packages on these systems. It can query Kubernetes: OK, what kind of containers are you running? What do I know about these containers? It's the same for VMware: what kind of VMs are running there? What do I know about these? And at the same time, it can connect to the internet, or to your local server, serving package repositories and especially all the updates from the vendors. And since it has all the information in the database, it can do database operations: it can compare what is available to what is actually installed and running, and can inform you: hey, there's a container running on your Kubernetes cluster which urgently needs an update, because there was a security fix published by your vendor. And this is the core function of Uyuni.

A quick word about Salt. A question?

Can I ask a quick question about the previous slide? I wonder, does it run as a daemon on every machine? When does it run to check that something is out of sync, and when does it apply updates? Is there some policy?

Yeah, let's switch to Salt, because this is all done by Salt. Salt started as a remote execution framework, but actually it's a very scalable configuration management tool. In its base functionality it is comparable to Puppet or Chef, and it has an agent called a minion, which is a Python program running on the system. But it also has aspects of Ansible, because you can also run it agentless via SSH. And the nice thing about Salt is that it's event-driven: as soon as something happens on the client, an event is triggered, and this can be intercepted, and the next action or something else can be done. This also makes it very fast.
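To make this concrete, here is a minimal sketch using Salt's Python client API on the master, assuming minions are already accepted; test.ping and pkg.list_pkgs are standard Salt execution functions, while the minion id "web01" and the desired profile are made up for illustration. Comparing the reported packages against a desired profile is, in essence, the drift check described next.

```python
# Minimal sketch: remote execution from a Salt master via Salt's Python API.
# Assumes it runs on the master host with minions already accepted.
import salt.client

local = salt.client.LocalClient()

# Classic smoke test: which minions respond?
print(local.cmd("*", "test.ping"))

# Obtain a software profile per minion, roughly what Uyuni stores in its CMDB.
# "web01" is a hypothetical minion id.
reported = local.cmd("web01", "pkg.list_pkgs").get("web01", {})

# Hypothetical desired profile for this minion, e.g. taken from a database.
desired = {"openssl": "1.1.1d", "nginx": "1.16.1"}

# Drift check: missing or wrong-version packages.
for pkg, version in desired.items():
    if reported.get(pkg) != version:
        print(f"drift on web01: {pkg} is {reported.get(pkg)}, want {version}")
```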
It runs everywhere. It's written in Python, so you can find Salt on Linux distributions, of course, and it runs on most of the Unix derivatives. There is even a Salt minion for Windows, which in turn gives Uyuni the opportunity to manage more systems than just Linux. And how is it done? The client queries the RPM database or checks config files and reports the state back to the database. Then a job is run to compare what should be on the system with what actually is on the system, and you get informed about config drift. Of course, this is not possible if you run agentless; in that case, there is a timer in Uyuni which opens an SSH connection, obtains a current software profile, checks the config files it knows about, and compares them to what should be on the system.

So let's look at the functionality that Uyuni gives you. From a physical perspective, it can provide PXE boot to a physical host which you just turn on. It can then obtain a hardware profile (what kind of CPU is in there, how much memory, what kind of disks, what other devices are there), upload it to the CMDB, and then shut the machine down via IPMI. We have customers that really get fresh hardware, unbox it, just plug in power and network, and turn it on; then a small image is automatically downloaded by Uyuni which obtains all the information, uploads it, and shuts the machine down. Then it's in the UI, and from the UI you can say: ah, OK, yes, this will be my new database server, or my new web server. And then, again via IPMI, you can power it up again. Now it's known, now it can get the correct package repositories and the correct AutoYaST or Kickstart description, and it can install the correct software. All the architectures listed here, from x86 32-bit and 64-bit and POWER up to s390 mainframes, are supported. On those in bold, we have the actual server running; all other architectures are currently only supported as clients. But porting is simple.

In terms of deployment management, it supports AutoYaST for SUSE Linux and Kickstart for Fedora or RHEL. Then, of course, it obtains a software profile and a configuration profile and uploads them to the CMDB. It assigns the correct repositories, called channels. And since these channels define which software should be there, you can watch the upstream channels, filter them, and say: oh, there are so many updates, but only these three updates should actually be available to this client. And then you can install these updates.

In terms of virtualization and cloud, we can meanwhile build VM images. We do this with a SUSE tool called Kiwi. And this works similarly, because Uyuni controls which package repositories are available to the build host, the build environment. So only packages from these package repositories will be used to build the VM image. Once the VM image is built, a software profile is obtained and uploaded to the CMDB. So for this image, you know what is inside, and when you deploy it somewhere, we can identify it; we don't have to run rpm -qa again, we already know what is in there. Virtualization management is there, with an update pending, and we have OpenStack and public cloud connectors.

The same goes for containers. This solves the problem of how to build the right container with only the packages that should be in it. This could be a production container, so make sure that only those repositories which are tagged as production are available during the build. Then again, we obtain the software profile.
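As a hedged sketch of what "obtaining a software profile" at build time can look like: query the RPM database inside the image build root. The build-root path and the helper name are hypothetical; rpm's --root, -qa, and --queryformat options are standard.

```python
# Sketch: read the software profile of an image build root, roughly the step
# that follows a Kiwi build. The build-root path below is hypothetical.
import subprocess

def software_profile(build_root):
    """Return {name: version-release} for all RPMs installed under build_root."""
    out = subprocess.run(
        ["rpm", "--root", build_root, "-qa",
         "--queryformat", "%{NAME} %{VERSION}-%{RELEASE}\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    return dict(line.split(None, 1) for line in out.splitlines() if line)

# e.g. software_profile("/var/lib/containers/buildroot")  # hypothetical path
```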
We can push it to a registry, so Uyuni knows about this container image. As soon as it's deployed, for example to Kubernetes, we know what is inside this container. And we know: oh, there's an update available, you have an outdated container running there. You can trigger a rebuild, push it again to the registry, go to Kubernetes, and start your updated and hopefully more secure container.

Where are we currently? We have a GitHub repository with all the source code. We have a wiki; you can create issues and, of course, pull requests. We are in the process of building RPM packages. Actually, we currently do have RPM packages, but they are not bug-free; we are still testing, so there is no 1.0 release for Uyuni yet. We also plan to open up our CI infrastructure; the complete test suite was made open source years ago for Spacewalk. And very recently, we got a devel mailing list. Of course, we welcome community developers.

Further outlook: we are actively working on the first release, based on openSUSE Leap 42. We are also actively working to make Ubuntu and Debian fully supported clients; we know that there are various community patches floating around in the Spacewalk community which were never fully integrated into Spacewalk. We just finished a single sign-on implementation based on SAML. The biggest next step is the Python 3 and Java 11 port that would be needed to move to the latest openSUSE Leap release, called Leap 15. This could be a bigger step. Of course, Uyuni will be the upstream for SUSE Manager in the future, and so for SUSE Manager releases, all the development will be done in the open. And there's the possibility of non-Linux clients; I did a Windows adapter years ago for Spacewalk, so things like this are possible. It's up to the community, whatever they want. And that's it; we are at the questions and answers now.

Thank you for the talk. I know there is the Open Build Service by SUSE that builds packages and maybe some other artifacts. Does it somehow replace this project, or is it a parallel project?

No. The openSUSE Build Service is just the means for us to publish packages for Uyuni. And we already have a community contributor who picked up the client-side packages and built them, for example, for Debian. So this will be our package repository, to make it easy for everyone to download and install Uyuni.

You mentioned that Uyuni can detect config drift and send alerts when it detects it. But can you also specify that it should take some action, like install the latest version, wait for the service to finish processing, restart, and stuff like that?

Config drift is currently only being detected, because our main focus is still on commercial customers, and they all ask: hey, I want to be informed first before you change anything. But of course, thanks to Salt, you can just say: OK, as soon as I see a config drift event, I trigger config remediation and fix this drift. This is possible.

So how do you figure out what kind of versions are running in containers in a Kubernetes cluster? Is there some sort of metadata agent that is also running in the containers? Or is there something else?

There are two ways to find out what is inside a container. First, if the container is built by Uyuni, we basically run rpm -qa during build time and find out. But we now also have the means to scan a registry and say: OK, I'm interested in this container. Then we would start this container, but not with the actual container application; instead, a small script runs which basically also runs rpm -qa and by this obtains the software profile. Then we know: OK, this is what is inside the container. Of course, this does not work if you have unpacked a proprietary tar file or something like that, or directly copied in a Java jar. But these are fixable problems. And it is not done during container runtime; we leave the running container alone.
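A hedged sketch of that registry-scan idea, assuming a Docker-compatible runtime on the scanning host and an RPM-based image: run the image once with the entrypoint overridden so that rpm -qa runs instead of the application. The image name is hypothetical, and Uyuni itself may implement this differently.

```python
# Sketch: inspect a container image's package list without running its
# application, by overriding the entrypoint with rpm. Assumes a docker CLI
# is available and the image contains an RPM database; image name is made up.
import subprocess

def container_profile(image):
    """Start the image once, run rpm -qa instead of the app, return the list."""
    out = subprocess.run(
        ["docker", "run", "--rm", "--entrypoint", "rpm", image,
         "-qa", "--queryformat", "%{NAME}-%{VERSION}-%{RELEASE}\n"],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.splitlines()

# e.g. container_profile("registry.example.com/prod/web:1.2")  # hypothetical
```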
So thank you, Klaus, and give another round of applause. Thank you.