All right, hopefully this is still working. This is the Fedora Classroom on Silverblue. My name is Micah Abbott; I'm a Principal Quality Engineer at Red Hat, currently working on the Red Hat CoreOS project. I'm going to give you an overview of the Silverblue project and some of the technologies behind it. Some ground rules: please keep your audio and video off during the presentation, and hold all your questions until the end. I'll be in #fedora-classroom on Freenode after the presentation, and I'll answer any questions there to the best of my ability.

So, what is Fedora Silverblue? First of all, Fedora Silverblue is Fedora. It is a project supported by the Council, it has the same principles as all the other Fedora projects, and it uses the same RPMs built in the Fedora infrastructure; we just deliver them with a different mechanism, using OSTree, which I'll talk about in a few slides. Fedora Silverblue also uses containers and container technology: we ship by default a set of container tools that is heavily used in the Fedora space. We also ship by default the ability to run Flatpaks, which are basically containers for your GUI applications, and I will talk about those in later slides. Silverblue can also be defined as an immutable host; I have a definition for that coming up. And lastly, I think Fedora Silverblue is very awesome: it's an excellent way to containerize your workflow and get atomic upgrade capabilities, and I'm going to get into all of that as we go along.
But first, I want to talk about the term "immutable host". You may have heard it before, along with "immutable infrastructure". The way I define an immutable host is one where the OS is delivered in such a way that it's difficult or impossible to modify. This allows for the idea of "pets versus cattle": pet hosts are uniquely crafted, and you don't want to lose their configuration or the ability to run them, whereas cattle are more disposable; you can afford to lose those hosts without too much fear of hurting your infrastructure. Immutable hosts also provide a foundation for repeatable deployments. These are sometimes referred to as phoenix servers: ones where you can take down the host and then reprovision it nearly instantaneously, or at least very quickly, with the exact same configuration the host had previously. Typically these immutable hosts are delivered as an image or an image-like artifact. For previous examples: in Fedora and Red Hat we had Atomic Host, which was RPM-based just like Silverblue; there was Container Linux, made by the CoreOS company before Red Hat acquired it, which was a Gentoo-based system; and there is Endless OS, made by the Endless computing company, which uses Debian as its basis.

So let's compare Silverblue to a traditional Fedora Workstation. They have some similarities: they both use the same RPMs from the Fedora ecosystem, and they both support package installation, so you can do the equivalent of a dnf install on Silverblue, although the mechanism is slightly different. They both have the ability to run containers and Flatpaks; whereas on Workstation you have to install those tools after the fact, on Silverblue we ship them out of the box.

Some of the differences between the two come down to the filesystem. On Silverblue you can only write data to /var and /etc. This lets you store your data under subdirectories of /var (your home directory is actually also mounted under /var), and you can configure your system as you typically would on any other Fedora system by modifying the config files under /etc. Where this is powerful is that it prevents malicious RPMs, or mistakes in RPMs, from destroying your host. This is a screenshot from, I believe, an RPM that installed some of the NVIDIA drivers, where a single typo caused the RPM to delete all of /usr. This is not possible on Silverblue, because /usr is not writable by RPMs or by the user.

Silverblue and Workstation also have different upgrade mechanisms. We use an atomic, transactional update: during the update your running system is not touched, and the downloads actually happen in the background. Because of this, you can pull the power on a Silverblue host during an upgrade and your host will reboot back into the previous deployment. The trade-off is that you have to reboot your host to get into your upgrade. And because the upgrade does not touch your running system, it prevents the kind of problem shown in this screengrab from one of our QE engineers, where updating a running system caused a crash of the desktop.

(I hear some echoes; I'm going to bounce over to Bluejeans and the other chats and mute some folks who have their microphones on. Okay, all right, back to the classroom.)

So, yes, Silverblue and Workstation also have different delivery mechanisms. Silverblue delivers the OS as an OSTree commit. The OSTree commit is actually built from RPMs.
So we do use RPMs, in a way, and both Silverblue and Workstation can install RPMs after the fact. Like I said before, on Workstation you would do a dnf install; on Silverblue it would be rpm-ostree install, and I'll show you some examples of using rpm-ostree in the next couple of slides.

So let's talk about OSTree and rpm-ostree, which are the technology stack behind Silverblue. (Sorry, let me bounce over to Bluejeans again to make sure you can hear me; I just don't want to get any echo on this recording.)

OSTree, like I said, is the core of the technology stack behind Silverblue. It's a library, and it's also a command-line utility. We can generalize and simplify it as "git for an operating system": the individual files of your operating system are checksummed and then tracked in a content-addressed object store. The files on the host are deduplicated via hardlinks, and OSTree is also able to handle your bootloader configuration and the management of /etc. This little example I have here shows you how you can use OSTree to track individual files.
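A rough sketch of that session, assuming the ostree CLI is available (directory names, branch name, and repo mode here are just illustrative):

```shell
# Initialize an OSTree repo to hold commits.
mkdir repo && ostree --repo=repo init --mode=archive

# Create some directories and files to track.
mkdir tree && echo "hello" > tree/greeting.txt

# Commit that file tree to the "master" branch of the repo.
ostree --repo=repo commit --branch=master tree/

# Check the master branch back out into a second directory;
# this lays out the files exactly as originally committed.
mkdir checkout
ostree --repo=repo checkout master checkout/master
```
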
I've initialized an OSTree repo, made some directories and files, and then committed that file tree as a commit on the master branch of the repo. Afterwards I've created a second directory and used OSTree to check out the master branch, which lays out the files that I had originally committed. This is the same thing we do with RPMs when we build the OSTree commit for the entire operating system; we just do it at a much larger scale, obviously.

So rpm-ostree is described as a hybrid image/package system. It uses libostree as the base image format: like I said before, we compose the OSTree commit from the RPMs on the server side, and then on the client side we use RPMs as well, using libdnf to handle package installs. rpm-ostree is also the primary entry point for managing your OS, and we'll get into that right now.

When you manage your Silverblue OS, you're going to be using the rpm-ostree CLI primarily. There is planned support in GNOME Software to do some of this, and the initial implementation is there; it's just not as robust as we would like yet.

All right, I'm going to switch over. This slide is an rpm-ostree status output, but you can't really read it because it's cut off, so I'm going to pop over to my own terminal. This is on my own host right now, and I'll give you the same output I was trying to show there and point out a couple of things in it. On the very first line we have the state of rpm-ostree; there are no operations happening right now,
so it's idle. We have the ability to configure automatic updates; the timer for those, as you can see here, ran 15 hours ago. Then we have a list of deployments. The deployments are basically the versions of the OS that you are either booted into, have the ability to roll back into, or that will be booted into on the next reboot. In this example, the deployment with the dot, which I've highlighted here, is the deployment I'm currently booted into. It has an OSTree URL which points at the Fedora Workstation remote and the fedora/29/x86_64/silverblue branch. We've got the version information, the commit hash, GPG signature validation, and then any layered packages that are on my system.

The deployment that's first in the list is the pending deployment; that is the upgrade deployment. As you can see, the version number is date-based, so this one is newer than the one I'm in right now. And as I said, when you do an upgrade it doesn't touch your running system, so this is basically just staged in the background, waiting for me to reboot into it. The deployment third in the list is the previous deployment I was booted into before I was running the current one. This gives us the ability to roll back: we can stay booted into the current one right now, but as soon as we reboot, the deployment that's first in the list will always be the one you boot into, unless you instruct it otherwise. We also have the ability to roll back to a previous deployment if we find that the current deployment does not suit us.

Scrolling further down, I have a fourth deployment that I pinned. This is a deployment from Fedora 28 that I had pinned to my host before I did the rebase to Fedora 29; I could probably garbage-collect it at this point. And then finally, the piece of output I wanted to show you is the available update. This matches the pending deployment that was first in the list I originally showed, but I want to point out that we can actually show you which CVEs, which security advisories, have been fixed in the next pending deployment, which I think is an important piece of information to have when you're considering when to reboot into your upgrade.

So that's status. When we do an upgrade, we use rpm-ostree upgrade. What that looks like: it has to be done privileged; it pulls the objects from the OSTree repo, and it will print out the packages that have been upgraded, any packages that have been removed in the new deployment, and any packages that have been added. It will also print any packages that have been downgraded as you change deployments. We don't usually see downgraded packages as part of the upgrade process, but it's not out of the question: rpm-ostree does not really care about version numbers in terms of what's newer or older; it just sees a set of files that it writes to the disk. At the end of the upgrade it prints out a message saying that we have to reboot before we can enter the upgrade deployment. When we reboot, as I showed you (this output looks similar to, if a little trimmed down from, the one on my own host), you can see that the first deployment in the list is now booted into, as identified by the little dot here, and the deployment we were previously in is available to roll back into.

So I keep saying "rollback"; that's a function of rpm-ostree.
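The upgrade-and-rollback cycle might be sketched like this (output abbreviated; the commands must run privileged):

```shell
rpm-ostree status            # shows the booted deployment (dot) and any pending/rollback ones
sudo rpm-ostree upgrade      # pulls new OSTree objects and stages a new deployment,
                             # printing upgraded/added/removed packages; the running
                             # system is not touched
systemctl reboot             # boot into the staged upgrade deployment
sudo rpm-ostree rollback     # didn't like it? swap the bootloader entries back
systemctl reboot             # boot back into the previous deployment
```
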
So if we did have a situation where the upgrade deployment wasn't working the way we wanted, if there was a problem with a particular package or whatnot, and we wanted to go back to the previous one, we can use the rpm-ostree rollback command. In this situation we have the new deployment, and we use the rollback command. Because both deployments live on the disk, all it's really doing is swapping the bootloader entries to boot into the previous deployment. As part of the rollback it prints out the package changes, just like you would see during the upgrade; because we're going backwards in time, we see that packages have been downgraded, packages have been removed, and packages have been added, and again it prints the note about rebooting into the new deployment. If we do a status again after the rollback, we see that the older deployment is now first in the list. We're still currently booted into the upgraded deployment, but the older deployment is now staged as the next deployment we will reboot into.

When Fedora releases a new major version, we use the rpm-ostree rebase command. This allows us to go from, say, 28 to 29, or 29 to 30. It also allows us to go backwards in time if we want: if you're running Fedora 29 right now and want to use Fedora 28, you could use rebase to do that. The way we do this in the major upgrade scenario, going from say 29 to 30 or 28 to 29, is to add an OSTree remote that points at the other release. Actually, in this case I'm going backwards: I'm on Fedora 29 and going to 28, and I've added a new remote here. The first line points at the Fedora 28 GPG key; I've named the remote "silverblue28" and provided the URL to the OSTree repo. Then I use rpm-ostree rebase, give it the remote name and the branch name, and it does a similar operation to what we've seen before, pulling in the new objects. You can see that some of the packages have been "upgraded", which is odd since we're going backwards to the older version, but you'll see that the packages are going from fc29 to fc28. When we inspect the status, we see a new deployment that matches the silverblue28 remote and the Fedora 28 branch, so when we reboot our host we would be in the new (or I should say old) Fedora 28 deployment.

It's also possible to switch the entire OS. Because rpm-ostree and OSTree just treat files as files, there's no real concept of "switching between OSes", I guess you'd say; it's hard to describe, so I'm just going to demonstrate it. In this example, I've added an OSTree remote that points at the CentOS Atomic Host OSTree repo (it points at the mirrors on centos.org), and I use the rebase command again, giving it the CentOS remote name and the CentOS branch. It does the same thing: it pulls the files down from the remote and stages the deployment, and you can see that the next time we reboot, we will actually boot into a CentOS host. Now, the utility of this is not really great; you probably don't want to be switching between an Atomic Host base system and the Silverblue base system. But I think it's a very interesting party trick; you can impress your friends with it.

So, moving on: package layering. I talked earlier about how Fedora Workstation has dnf install for installing packages; on Silverblue we have a slightly different command, and we call it package layering. Package layering is the way to install
additional packages to the host that weren't included as part of the base layer. In my opinion, the paradigm we should be following is to containerize as many applications, as many RPMs, as possible, and use package layering as a last resort. However, that's not always feasible. Package layering is useful for what we call host extensions, like libvirt, or pcsc-lite, which is used for smart-card readers. When you perform a package layering operation, we actually create a new OSTree commit that includes the packages layered on top of the base OS. You also have the ability to override your base package set, with commands like rpm-ostree override remove and rpm-ostree override replace. And these package layers are all tracked with the base OS: if you have layered a new package onto your base OS (say I did an rpm-ostree install of strace; I'll show you an example), then the next time there's an upgrade available for the host OS and strace is upgraded as part of it, you will get the upgraded version of strace when you upgrade your host.

So let's see an example. The most common operation is rpm-ostree install (or uninstall). In this example, I've done an install of the utility jq. rpm-ostree install supports RPMs that are local to the disk, so you can build your own RPM and then install it onto your Silverblue host, or pull it from a repo; it understands DNF repos as well. You still have /etc/yum.repos.d, and all the Fedora repos are still there on Silverblue, so rpm-ostree will query those repos and look for the packages to install when you request them. In this case I've just requested jq, and that's in the repos.
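That layering operation might be sketched like this (output abbreviated; package names as in the example):

```shell
sudo rpm-ostree install jq   # resolves jq from the enabled repos and layers it
                             # (plus its dependency, oniguruma) into a new deployment
rpm-ostree status            # the new deployment lists "LayeredPackages: jq"
systemctl reboot             # reboot to start using the layered package
jq --version                 # now available on the booted host
```
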
So it goes to the Fedora repos, pulls down the necessary metadata and the packages, and right at the bottom here it says it's added these two packages: jq, the one I requested, and oniguruma, a dependency of jq. If we do an rpm-ostree status, you can see we have a new deployment created, with a new commit hash, and a new piece of info showing the layered packages that were requested: jq. It doesn't list the dependencies of that package, but those are tracked, so if you were to rpm-ostree uninstall that package, all the dependencies would be removed as well. And just as before, to get into our new deployment we need to reboot: if I run jq right now, the command is not there, but after I reboot into the new deployment the command will be available.

I talked about replacing packages in the base set. In this case I'm going to replace podman, which is the preferred tool for running and managing containers on Silverblue. We have a certain version of podman, and I've given the command rpm-ostree override replace with a URL. rpm-ostree has downloaded the RPM, and you can see we've downgraded podman from 0.12.1 to 0.10.1; that's reflected here in the status as a replaced base package. Because we're doing an override replace, if a new version of podman comes as part of the base OS, it won't be applied by default: we will maintain the replaced package, this override, during upgrades, unless you decide to remove the override.

As I said, we can also remove packages from the base OS. In this case I chose the VirtualBox guest additions, which is an RPM we ship by default in Silverblue. My point here is just to show that certain binaries are shipped as part of that RPM, and then to show how we would remove it: rpm-ostree override remove. After I reboot my host, I can see my new deployment is booted into (with the dot), we've got a new commit hash, and the status shows the removed base package: the VirtualBox guest additions. And if I try to list one of those binaries that existed previously, it's no longer there.

So, containers. Containers are Linux; they're just a Linux process on the host, put into their own cgroups and namespaces. As I say here, containers are enabled through cgroups plus user namespaces, network namespaces, and PID namespaces; there's a list of, I think, eight different namespaces that are used. You've probably heard of Docker: they were the company, and the tool, that made running containers easy and popular, and that kind of drew the adoption of microservices in the industry. As I said, it's usually a single process per container, but it's possible to run more than one process within a container.

In Fedora, we try to push the new set of tooling that has come out of the container runtimes group. We have four tools: buildah, podman, skopeo, and fedora-toolbox. Buildah is the tool for building container images. Podman is the tool to run and manage containers. Skopeo is a tool we can use to inspect remote container registries and also copy container images between different registries and container storage types. And finally we have fedora-toolbox, which is a new utility developed for Silverblue that allows us to create "pet" containers where we can install development tools and libraries.

Let's see some examples. So, buildah. Like I said, buildah is the tool we use to build container images. It supports building images from Dockerfiles.
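For instance, building from an existing Dockerfile might look like this (the image tag here is just illustrative):

```shell
# "bud" is short for build-using-dockerfile: point buildah at a
# directory containing a Dockerfile and give the result a tag.
buildah bud -t myapp:latest .

# List the images in buildah's local container storage.
buildah images
```
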
So if you have a set of Dockerfiles already, you can migrate to buildah pretty easily by just passing those Dockerfiles to buildah as arguments. You can also mount a working container and do container image creation that way. It supports the OCI image format by default, but also the Docker image format. In this example, I'm taking the working-container approach: I've done a buildah from scratch, so I've got a scratch container, and then I mount it so that the container's filesystem is mounted onto my host and I can write into it. For example, I can do a dnf install; here I've done a dnf install of jq, and the trick is to specify the install root as the mount point of the working container. Then I commit the container, naming it "jq", unmount the working container, and remove it. When I use buildah images to list the images on my host, I've got the jq image I just created with buildah, available to use.

So now let's use it, with podman. Podman, like I said, is the tool to run and manage containers in Fedora. It's intended as a drop-in replacement for most of the Docker CLI, and just like buildah, it supports the same image formats as Docker. It doesn't require a daemon running on the host like Docker does. It allows you to manage the full container lifecycle, and you can also run containers as an unprivileged user, so you don't need root access. That's still somewhat experimental, but it marks a new way of running containers on Silverblue and in Fedora at large.

So here I've got buildah images shown, with the jq image I built; podman shares the same container storage as buildah, so podman is also able to list the same image. And running the container is as simple as a podman run.
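A sketch of that invocation (assuming the jq image built above; the jq filter shown is illustrative):

```shell
# Pipe rpm-ostree's JSON status into jq running inside the
# locally built container, selecting each deployment's checksum.
rpm-ostree status --json \
  | sudo podman run -i jq jq '.deployments[].checksum'
```
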
I'm running it as a privileged user with sudo in this case, and I'm piping the output of rpm-ostree status in JSON format into jq, just selecting the checksums of the available deployments, and you can see it returns the checksums exactly as I expected. Simple as that.

Skopeo is not necessarily a tool you would use day-to-day; it's helpful for inspecting remote registries. If you're doing a lot of work with remote registries, you can use it to inspect, for example, the tags on an image on a remote registry. It does allow you to copy images between registries and between storage mechanisms, but most commonly I use it just for inspecting registries. In this example, I've done a skopeo inspect, given it a docker:// URL pointing at the Fedora registry, and inspected the contents of the fedora image. It spits out a lot of information about the container image, including all the tags associated with it, things like its architecture, when it was created, some of the labels, and whatnot. It's handy for debugging problems with remote registries, but you may not use it as often as the other tools.

Finally, there's fedora-toolbox. This is a tool that creates what we call pet containers; this is where you would install your development utilities and libraries. It operates as a rootless container, so you don't need any privileges to run it. You can layer it: it's built as an RPM, so you can use package layering to install it as a package on your host. It has not yet been included in the default OS of Silverblue, but I'm sure it will be. Or you can run it directly as a script; it's just a bash script.
So you can go to the upstream GitHub repo and get it from there. One of the neat tricks, what makes it so great to use, is that it automatically mounts your home directory into the container, so you have access to the same data you would have on your host.

So this is how you would use it. In the first couple of lines, you can see me on my host machine. You start with fedora-toolbox create, which will pull down the container image and create the container, and then you enter the container, and you can do all your work in there. In this short example, I've done an install of strace; since it's a Fedora container, it just uses dnf, as you would on any other Fedora system. After it's installed, strace is available, and you can see we have version 4.26. And just to show you how the container persists: I've exited the container here and then re-entered it (the output is cut off a little bit here, I apologize), and you can see that strace is still there, because the container state has been maintained. You can continue to install additional packages and just keep exiting and entering whenever you like. Very handy for your pet development containers and your development workflows.

So, Flatpaks. Flatpaks are basically containers for GUI apps. If you've ever tried to run some sort of GUI app in a container by itself, it can be really tricky; there are people that have done it.
However, I recommend using Flatpaks. Flatpak uses libostree to store on disk the runtimes required for running GUI applications, as well as the applications themselves. It uses bubblewrap to allow unprivileged users to set up and run these containers, along with D-Bus, systemd, and some AppStream metadata. The apps can be distributed in OCI image format or via OSTree repos, and this allows for distribution of apps on any flavor of Linux. The flatpak utility is available for a number of distributions, so once you package an application as a Flatpak and distribute it through a Flatpak repo, any user who has flatpak installed on their host, whether it's Fedora or Arch or Ubuntu or Debian (well, I don't want to promise Debian, because it might not be packaged there), can run it. The idea is: package your applications once as Flatpaks, and then run them wherever you can run Flatpaks.

So what does it look like to use Flatpaks? I've done a lot of it on the command line initially, just for the purposes of this demonstration. I've added the Flathub Flatpak repo, which is probably the most popular repo available right now. It contains free and non-free applications, so it's up to you to choose which ones you want to use. And once the remote has been added, I can search for an application.
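That command-line flow might look roughly like this (using the Flathub remote URL and the Spotify client's Flathub app ID):

```shell
# Add the Flathub remote, then search for and install an application.
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak search spotify
flatpak install flathub com.spotify.Client

# List installed apps and the runtimes they pulled in.
flatpak list
```
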
So in this case I've searched for Spotify, and we see we have a hit there for Spotify. Then we do a flatpak install of the Spotify client. It prompts you: do you want to install it, here are the runtimes that are required for it, here are the permissions and accesses it's asking for, and you can confirm or deny those requests. Then, similar to what we see during an rpm-ostree operation, it pulls down the files from the repo and writes them to disk. When we list everything out after it's done, we can see the two runtimes that were required for Spotify, and the application itself. Once that's completed, you can actually access Spotify through the GNOME desktop as you normally would. I'm going to try that right now, because I have it on my host: if I type in "Spotify", you can see we have a Spotify client that looks a lot like any other Spotify client you'd install from an RPM, but this is actually running as a Flatpak, which is pretty great. Flathub, like I said, has hundreds of applications packaged; I'd suggest that if you're going to use Flatpaks, you go over there and check them out.

So that covers most of the technology of Fedora Silverblue. Now, this is my spiel about where we want to go with Silverblue in the future. There are still a lot of rough edges to smooth out: if you are going to use Silverblue, you are probably going to want to be ready to handle the occasional bug or problem, and to be responsive in the community in reporting those kinds of problems so we can get them sorted out.

Ultimately we want to enable automatic upgrades of the OS. As I showed you in one of the early examples, rpm-ostree has the ability to do automatic upgrades in the background: when you enable that, it will periodically check for upgrades and then download those files in the background into a new deployment, again without touching your running system, and then you are able to choose when you want to reboot your host to get into that upgrade. Right now these automatic upgrades are optional, but we think it's very valuable to have the ability to download updates in the background for all users.

Right now we don't install any Flatpaks out of the box, because the main source of Flatpaks has been Flathub, and they ship a combination of free and non-free software; we didn't feel it was prudent to enable that by default. However, work has been done recently to stand up a Flatpak repo in the Fedora infrastructure, so there are plans to create more Flatpaks based on Fedora RPMs and distribute them that way. Hopefully by the next major release we'll see more and more of that.

And then, very long-term, we also want to make Silverblue the default Workstation choice. I don't know when this will happen, but we have visions of a user deciding to install Fedora Workstation and getting the Silverblue experience by default. I think we're a bit of a ways away from that happening, but it's a goal for us.

And finally, we want to improve our existing documentation and grow the community at large. So if you have any questions, I'm going to hop into #fedora-classroom on Freenode. You can also come hang out in #silverblue on Freenode; there are a number of users there who hang out and are willing to answer questions. You can join the forums on the Fedora Discourse site, file issues that you find with Silverblue on Pagure, and engage with us on Twitter at @teamsilverblue. You can reach me on Twitter as well, and I'll be happy to point you in the right direction if I don't know the answer to whatever question you have.

And with that, I am done. I'm going to stop the recording.