I hope you guys have installed it, and if not, right now is actually a really good time to do that. It's really easy to try out and use; there shouldn't be any problems there. I'd love to go through it step by step and look at all the different features, but we don't have time for that today.

Cockpit, as you know, is a Linux admin interface. It lets you remotely access Linux, load it in a browser, interactively configure it and discover it, and it makes Linux usable. Over the last year or so, this has gone a lot further than where we were here at DevConf last year. There we go, that's kind of what it looks like. There are a bunch of different topics and things you can get into on the side. You log in, you feel like you're on the server, and you are on the server. And you can, of course, do things like networking, storage, all the configuration, looking at the journal logs, and so on.

Let's take a quick peek, before we go into deeper details, at what the goals of Cockpit are, and these transcend the Cockpit project itself. The first goal is making Linux discoverable for a broader audience: people who before would have given up on Linux, given up because the first thing they're required to do is deal with the root bash prompt, which is pure power, by the way, and really awesome, but it does prevent people from getting started. Cockpit takes away that barrier and lets people, Windows admins, for example, or people who would have gotten scared away, start with Linux. The second goal is to take complex Linux features that we can all deal with, but may not want to spend time on, and make them usable and discoverable. Things like setting up a network bond, where you have to read a bunch of blog posts, figure out exactly how you want to accomplish it, read the manual pages, and then figure out how to make it work after boot. Those kinds of things become trivial to accomplish, and once you've done them, you can move on to the task you really wanted to do.

A lot of this plays out in the user interface, the admin interface, the browser UI. But these same goals, and that's something we want to talk to you about today, when we fix them in the UI, have ramifications on the command line, or on another tool, another script, another way of interacting with the system, making those tools discoverable, complete, and actually usable.

There's just so much we could go into, but we've picked some topics, and I hope they're compelling, that they give you ideas, and they call out kudos to the people who have gotten involved and contributed. For any of these things, if you want more information, talk to us. There's Dominic and Peter up here, who joined us last year. Up front here are Andreas and Marius, the guys who started Cockpit, and I think Lars is hiding somewhere up there; he's going to be joining us soon. There he is. So you're welcome, and tomorrow there's a hackfest in the morning too, very early unfortunately. If you're interested in some aspect of this, we're going to be talking about containers, about testing and continuous integration, about how it works with Atomic hosts, and all of these things.
So for any of those aspects, figure out how to nail us down and ask for more details, or how we can go forward and make something happen in the future. And we've tried hard to remove the excuses: Cockpit is zero footprint. We want it to be installable by default; it starts on demand, exits after use, doesn't have a big fat intermediate stack, it's pure UI. So when you look for the excuses, I hope you don't find them. When you're dealing with this stuff, don't assume; look and see how, at each step, we've worked really hard to take away the reasons not to use this, not to integrate with this. With that, I'm going to hand it over to Peter.

Alright, hi everyone. So, OSTree and Cockpit. I won't get too far into the details, there were some really good presentations on it yesterday, but basically rpm-ostree allows you to have an immutable operating system that's based on RPMs. This last year we worked on getting support for that into Cockpit.

This is what it looks like. You can see what packages are currently installed on your system and the version you're on; you can see what was previously there, what the changes are from that, and what you're running now. You have the option of rolling back. You can check for updates, see what's there, and inspect the exact details before you install: exactly what changed, what's added, what's updated, sometimes there are downgrades or whatever. And of course you can go ahead and install those updates. The nice thing is that if this gets interrupted or canceled in any way, your operating system is not affected at all; you boot back exactly where you were before. So, let's skip ahead, because it takes a bit: your computer has to reboot, and then you can see that the new updates are installed and you're on the new version.

So, that's the UI. Getting that working wasn't quite as fast and simple as it looked. There were some issues with rpm-ostree that we had to work through to be able to deliver on the promise that the UI makes. We actually see this quite a bit working on different projects from Cockpit: when you come at it from a top-down level and ask, okay, what's the user actually trying to accomplish, what do we then need the tool to do, it helps complete the picture for something like rpm-ostree. It takes it from being a cool tool to something that really just works for the end user.

One of the first problems we had was that you couldn't really predict what was going to happen when you did an upgrade. You could look and see, hey, what's new? It would tell you, and then you could install it. But if something changed on the upstream server you're pulling from in the meantime, then you just installed something that you didn't get to see. That was a problem. So, together with the Atomic guys, we worked on adding a new verb, deploy. You can look and see what's new, and it gives you the version, it gives you the hash, and you can say: okay, I like these changes, give me this exact version. That way it's predictable. You can run this on multiple machines or one machine or whatever, but you know exactly what you're going to get.
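As a rough sketch, not the exact code in Cockpit, driving that deploy verb from JavaScript with cockpit.js could look like this; the version string is a placeholder for whatever the update preview showed you:

    // Sketch: deploy the exact version we previously inspected, so the
    // result is predictable even if the upstream server changed meanwhile.
    // "7.1.2" is a placeholder version string, not a real release.
    var proc = cockpit.spawn(["rpm-ostree", "deploy", "7.1.2"],
                             { superuser: "require", err: "message" });
    proc.done(function () {
        console.log("deployed; reboot to boot into the new tree");
    });
    proc.fail(function (ex) {
        console.log("deploy failed: " + ex.message);
    });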
Another issue was with the command itself. If multiple users try to run certain commands at the same time, they might conflict with each other, and you might end up messing up your operating system. That wasn't good. So, Colin added some OSTree-level locking that helps make sure these commands are safe; it just won't let you proceed if something else is in progress. And lastly, we added a D-Bus API to the service. That helps with the multi-user thing, so you can see what else is going on, and it also allows Cockpit to provide the nice interface, all the details about the packages, the versions, all that kind of stuff, without having to do weird screen scraping or other things we'd prefer not to do.

So, yeah, in the end I think what came out was pretty good. I'm missing a slide here, but there's definitely more still to come, more that we can do: things like allowing rebases, better support for multiple operating systems, picking the upstream servers you want to pull from and verifying them with keys, and all that. Hopefully we'll see more of that in this coming year.

So, next: containers in Cockpit. This last year we added a Kubernetes UI to Cockpit. It's focused around what system administrators will want to do, more than developer use cases. Here it allows you to deploy; we're deploying just a simple mock service. You can see your pods; pods are your collections of containers. You can see the details of a container that's running, check out the logs, shell into the containers and type commands and do whatever you need to do. Replication controllers are what control the pods, making sure the right number of them are running and all that; again, you have control and visibility into what's happening. Services are how you interact with pods, since pods come and go, so there's a nice UI for interacting with those and seeing what's exposed. There's also this graph that helps clarify what's actually going on in the system and how your objects are related to each other, and you can make adjustments. There we just added another pod, and it springs up and you can see how things are related. There's a lot more there, of course; this is just a quick overview.

But what I wanted to talk about more is this: what you've seen is kind of the standard Cockpit, where you've got your machines and your dashboard and all the other Cockpit pieces present. What we wanted to do is get this running as a pod in Kubernetes itself. You know, eating our own dog food, making it work the way we're telling people they should be running their applications. One of the things is that the downward API actually gives us a lot of really good stuff for making our container work, but there are a few things missing. In this case, we have to tell the container what the public URL for our Kubernetes master is, as well as the URL we want to run our container on; I'll get into that more later. But basically, we can use the OpenShift template feature to generate our objects, and then we can just pipe that to oc create (sketched below), and we get our Cockpit container running.

So, now I've hooked this one up to GitHub as the OAuth provider; it's using OpenShift. You can see we log in with GitHub, because that's how OpenShift is configured, and now we're in. And you can see this is totally different from the other one. There's nothing else available; all that's here is our Kubernetes OpenShift UI. That's because this is actually running inside a container. It's isolated from the host system; it doesn't have access, and it doesn't make sense to do any of that other stuff.
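Stepping back to the deployment step for a second: the flow Peter described, processing an OpenShift template and piping it to oc create, could be sketched like this. The template file name and the parameter name are illustrative assumptions, not the project's exact ones:

    // Sketch: generate Kubernetes/OpenShift objects from a template and
    // pipe them to "oc create". File and parameter names are made up.
    cockpit.spawn(["oc", "process", "-f", "cockpit-template.json",
                   "-v", "COCKPIT_KUBE_URL=https://master.example.com:8443"])
        .done(function (objects) {
            var create = cockpit.spawn(["oc", "create", "-f", "-"]);
            create.input(objects);
            create.done(function () {
                console.log("cockpit pod objects created");
            });
        });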
In order to make this work, we had to refactor the way we do authentication in Cockpit and make it a little more pluggable, and we're hoping that's going to lead to some more interesting uses of Cockpit, where different pieces of the Cockpit UI can be separated out and run differently like this.

So, just to show that we are in fact running in a totally real pod: here we kill our replication controller, and new ones come up; then we kill the pod, and Kubernetes takes a minute or two to kill it off. But once it does, we lose our connection and we can't load it back. So, we killed ourselves from inside.

This works, and you can actually use it and run it if you pull from the GitHub page. It's not as smooth as we'd like, though. Like I mentioned, not a lot of this stuff runs as pods yet, especially around the Kubernetes UI; the OpenShift web console doesn't run as a pod, and the OpenShift registry does run as a pod, but it still requires some special-cased commands and things to really get it going. In trying to do this, one of the biggest issues you run into is dealing with external URLs. These are things like, in my example, where is my Kubernetes master publicly available? Where is my Cockpit instance going to be publicly available? Communicating about those things within Kubernetes and OpenShift is not very easy right now, so that would be something nice to get fixed.

Another thing is that some of the authentication defaults, whether you're using Kubernetes or OpenShift, are a little not-super-user-friendly. OpenShift by default will accept any username and password; that's the default OAuth configuration, and you probably never want to actually deploy that. Stef has a great PR for making system users a supported OAuth backend; something like that might make a good default. And with Kubernetes, to make this work you've got to enable basic auth, because otherwise it's just open by default. So it might be nice to spend a little time making that more usable out of the box, with the end goal that you could just bring up this container as a pod right away, no configuration needed, and it would just work. There's lots more to do here, of course. We're working on an OpenShift registry UI, for managing a Docker registry, and better support for projects and users and things like that. Hopefully there'll be a lot more coming in the next year. And I'm going to hand over to Dominic to talk about tuned.

Okay, thank you, Peter. So, one of the things that landed in Cockpit recently was support for tuned. I'm not sure if you know it, but tuned lets you set performance profiles for a machine. If you have IO-heavy or CPU-heavy use, you can set a profile to tune the machine without having to worry about all the little details. And this was a nice event for us; it's what you like to see. We opened the GitHub page, and there was a pull request for a new feature: tuned. The tuned people did the main work. They changed their API and said, let's get this into Cockpit. What we did was iterate a bit with them. We figured out where to place it, how the user should access it, the whole context of where a user actually wants to use it. We did some Cockpit-specific stuff, and then it got in.
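That API is D-Bus, and from Cockpit's side the call is small. Here's a sketch; I'm writing the tuned bus, path and interface names from memory (com.redhat.tuned, /Tuned, com.redhat.tuned.control), so treat them as assumptions to verify against your tuned version:

    // Sketch: switch the performance profile over tuned's D-Bus API.
    // Bus, path and interface names are from memory; verify them.
    var client = cockpit.dbus("com.redhat.tuned", { superuser: "try" });
    var tuned = client.proxy("com.redhat.tuned.control", "/Tuned");
    tuned.wait(function () {
        tuned.call("switch_profile", ["balanced"])
            .done(function (result) {
                console.log("profile switched:", result);
            })
            .fail(function (ex) {
                console.log("switching failed: " + ex.message);
            });
    });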
And this is a success story for us, because this is how it should be. You figure out what the user wants, you change the tool to let the user work with it in a bigger context, you make it available in the UI, you change your API. That's how it should be. If you look at where it's placed right now, prominently on the system page: even if you haven't heard about tuned before, you can access it and say, what's this button? Performance profile. Let's click on it. You can read the comments, choose something, say, let's try out the desktop profile, and activate it. The UI will tell you: this is active, it's a custom profile, maybe you want to change back. So go for a balanced profile, and that's it.

So the question for other tools is: what do you think should be here? We can make it happen. You don't want the user to think about which tools are there to solve their problem. The user looks at a system and says: how can I do what I want? This is what I want to do, show me what I can do. That's how we need to look at things.

One of the next things we're looking at currently is troubleshooting, and especially SELinux troubleshooting. If you look at a system, of course you need to configure it at some point and it needs to run, and that's a big part of what Cockpit does right now, but Cockpit is more than that. Once you've set up the system, and this may have happened to one or two of you, sometimes it runs into trouble. Something breaks, something doesn't work as it should, and then you need to go in, figure out what happened, and fix it.

SELinux is a case study for this. It's a good technology. It's developed, it's matured, you can tune security aspects in a fine-grained way. You need to consider it during development, and you can do a lot of things with it, but it has some acceptance issues, because when something goes wrong, it quickly degenerates into staring at logs. You have these complex log messages, and you're trying to figure out what really went wrong. So what do you do? Yeah, you turn it off. But obviously that's not the best solution if you want a secure environment. So how can we make that better? We have the great tool setroubleshootd that does exactly this: it helps you look at what happened and helps you fix it. And of course it's a command-line tool, it's very flexible; what really helps is having a UI for it. So the setroubleshoot people changed their API, the D-Bus API; they made it accessible (sketched below). This is at the design stage for us right now. What you want is to log in and see: this is what happened, so I can see what went wrong and maybe even fix it. Maybe you need to change your rules, maybe you need to change something else, and you can do that directly in the UI. The thing here is, of course, that this is not something you can only do in Cockpit. The underlying tool changed its API and just made it more available, so you can do all of this without Cockpit; it's just more comfortable to look at it in a UI. And along that troubleshooting line, one of the other things we're looking at is container image scanning with OpenSCAP; there are some designs for that right now, and we're going to move forward to get that in as well.
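To give an idea of what that design would sit on, here's a sketch of reading alerts from setroubleshootd over D-Bus; the bus, path and method names below are my recollection of that API, so take them as assumptions rather than gospel:

    // Sketch: list current SELinux alerts from setroubleshootd.
    // Names below are assumptions about the setroubleshoot D-Bus API;
    // the "" argument stands for "alerts since the beginning".
    var client = cockpit.dbus("org.fedoraproject.Setroubleshootd");
    client.call("/org/fedoraproject/Setroubleshootd",
                "org.fedoraproject.SetroubleshootdIface",
                "get_all_alerts", [""])
        .done(function (alerts) {
            console.log("current alerts:", alerts);
        })
        .fail(function (ex) {
            console.log("could not fetch alerts: " + ex.message);
        });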
So next, I'll hand it back to Stef. Thanks. So, let's talk a little bit more about the troubleshooting stuff, and specifically, let's focus on Atomic Host. Cockpit is integrated with Atomic Host, and I want to answer the question why, and how.

Atomic Host is targeted at the cloud. And if you're doing it right, if you're doing the cloud right, if you're doing containers right, you're deploying these hosts as immutable infrastructure. You're deploying them in a pre-configured way. You're deploying them as cattle; that's the going phrase. But Cockpit likes to make things discoverable and usable, so it's a great way to discover how to use Atomic Host, how to drive it with Cockpit. But going beyond that, what does it mean to have Cockpit integrated with Atomic Host? When a cattle rancher has thousands of animals and one of them gets sick, he doesn't just take it out and shoot it. What he wants to figure out is why it got sick, and: will all my other cattle have the same issue? Will they all have the same problem? Troubleshooting. Making troubleshooting discoverable and usable makes it trivial to take the knowledge that you learned, for example, that this thing failed over here because this volume that was mounted into my containers didn't respect the proper SELinux context, bring that back into your deployment infrastructure, and deploy the rest, deploy better next time, with that issue fixed.

Now, although we say Cockpit is integrated into Atomic Host, if you actually try to connect to it with your browser, it won't work right away: browser access is not enabled. Cloud instances are typically accessible over SSH, plus whatever service they're serving and the containers they're serving. So how do we make your browser talk to the cloud instance via SSH? This got a whole lot better and actually works now for cloud instances. You connect to one Cockpit instance using your browser. Unfortunately, browsers don't support connecting to something over SSH; it would be awesome if they did. But you connect to one Cockpit instance, and you can add others, and it will connect to them via SSH. So you can troubleshoot your cloud instances, your Atomic Host machines, and log into them without having to enable browser access, open firewalls, or even, as you'll see, enable password login.

So here I'm logging into a Fedora Server instance. It's called Falcon, and you can see it's kind of a boring server. But let me click on the dashboard. There's one server listed here. I'm going to add another one and type its IP address; this first one is a laptop. Look, there's an SSH fingerprint: we're actually connecting to that other instance over SSH. And boom, there it is. If I click on it, or select it from the menu up there, I can look at the details, and obviously I can configure it and, in the future, troubleshoot it. And you can see it's even a different operating system; it has different options.

Let's add an Atomic Host instance. Again, there's that instance's SSH fingerprint. Oh, I can't use the credentials that I logged into the first Cockpit instance with to log in to that Atomic Host instance; in fact, passwords are not even supported. But you can see Cockpit now has an SSH agent UI that lets you load keys, the keys that are accessible on that first instance, before connecting to the other. I've just loaded a key, and it turns out that Atomic Host has pretty opinionated ideas about what the username should be. But there we go: we connect via SSH to the other instances. And you'll see we're actually logged in to these servers as different users; to the other one as stef, and so on.
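Under the hood that's a plain SSH connection, and from JavaScript the secondary machine just shows up as a channel option. A sketch, assuming a "host" channel option, with a placeholder user and address:

    // Sketch: run a command on a secondary server over SSH from the
    // primary Cockpit session. The "host" option is an assumption about
    // the channel API, and the user/address is a placeholder.
    cockpit.spawn(["uname", "-r"], { host: "stef@10.111.112.102" })
        .done(function (output) {
            console.log("remote kernel: " + output);
        })
        .fail(function (ex) {
            console.log("could not reach that host: " + ex.message);
        });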
So, you can see that we're building on the real tech that's there, that we've all worked on together, and making it discoverable and usable; in this case SSH, in this case cloud instances. That's the whole story going forward with troubleshooting and other stuff too: we have built these tools, and there are a bunch of tools, and now we're taking them and turning them into stuff that just works for the user.

So let's talk about the next topic, and this is a big one. Today we've had to pick and choose the things we've talked about. We've talked about communicating with a whole bunch of different system services, a whole bunch of different APIs: tuned, Kubernetes, the Docker stuff, the SSH stuff; the list goes on. There are probably about a hundred things we talk to, and how in the world can we do that? In theory you'd have a complete explosion of combinations of issues and problems. The reason we manage to do this is the continuous integration and testing that we do. I want to highlight that a bit, because it has helped us accomplish what we're trying to pull off here, and it has found tons of regressions and issues, and driven bug fixes, in other projects as well.

When you open a pull request against Cockpit, and I encourage you to, you'll see something like this. These are a bunch of different test suites running before that code gets merged, and you can see a bunch of different operating systems included. Two different kinds of Atomic are getting booted. Debian is there, different browsers, and so on. Even certain Kubernetes images, the one Peter was talking about, are getting booted and tried out. When you open a pull request and change it, things get booted hundreds of times on different operating systems. Some of these test suites boot a hundred different VMs, so on a given day, a busy day, we'll easily boot 10,000 VMs, testing code before it's merged into master upstream. This took work, but it's not impossible; it's not the kind of stuff you should shy away from. That's why I want to talk about it here: how we can make this happen more.

The testing instances are staged in Docker containers, and the Docker containers start VMs inside the containers. Weird stuff, but this kind of stuff is possible; this kind of stuff works. And it's distributed; that's one of the things we added this year. Before, it was centralized, and we were very careful about the machines that were running this, because if they stopped working, the whole project stopped working, and they did stop working. Now all sorts of different verify instances can run the tests we're talking about. Here you see, for example, one verify machine hiding behind the Red Hat firewall; here you see two more public machines, and a developer running some tests. All of this contributes towards those 10,000 instances, the bunch of tests across every pull request and every change that you saw earlier.
They each ask the GitHub REST API what needs to be done (there's a rough sketch of this further down): what's the next task, what's the thing that needs to be tested. And they have a way of choosing between themselves: oh, I'm going to work on this. We try to avoid collisions, but if there's a collision, there's collision detection; one of them wins, one of them loses. Then they post the results, the status, the images, the attachments of what failed, all of that, to publicly accessible URLs. We have a small tool called sink, and it runs in different places; people have provided infrastructure, Fedora has given us some space, and other places too. And then those things update the actual pull request via the API, so that we can track what happened. This happens on pull requests, this happens on master. And if you get my drift, the number of machines here can scale up and down: if we lose a bunch of these machines, well, things will happen slower, but they'll still happen; and if we throw a couple more machines into the mix, they'll start doing testing appropriately. That really makes a big difference. So Fedora has given us some OpenStack instances for doing this, and they're nested: there are OpenStack instances with VMs that run Docker containers that run VMs; you're really talking nested there. But we've also found other hardware, maybe some of yours, I don't know; people have given machines and stuff to run some of this, so we can actually make it happen.

One of the things you need to do to drive this, if you're following along thinking, oh, how would I do this, one of the first things that actually happened is pushing the packaging upstream, because you want to package the results of what you're testing the same way users would experience it. So the packaging, the spec files, Debian's rules files, all of those go upstream, and you end up testing something very close to what your eventual delivery of those packages ends up being.

The other thing that's really cool is that QE has worked with us to get their tests upstream. QE accepts Cockpit, then tests it and does delivery on it, long after the code changes land upstream; but their tests run before merging. So the tests they would normally run months later run before merging, and you can find those failures early. This has been a big deal, not only because they then don't have to deal with these things late, but also because the tests get so much better when they run hundreds of times a day. They change from acceptance tests or regression tests into real integration tests that need to succeed for the project to move forward. So this has been a big bonus and boon for everyone. That's another thing: QE testing goes upstream.

And the reality of this is that when you're doing CI properly, it's upstream. You can get some other benefits by doing CI downstream, but the real benefits happen upstream, including continuous delivery. Every time we sign a git tag in Cockpit, it automatically creates a release. It automatically pushes out tarballs, does Koji scratch builds, pushes into Fedora dist-git and Bodhi updates, does COPR builds and Debian packaging, pushes containers to the Docker Hub, uploads documentation. All of this happens without human intervention. I mean, there is some human intervention when stuff goes bump in the night, which it does; some of these things are not perfectly reliable.
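Going back to how those verify machines pick up work: mechanically it's the ordinary GitHub REST API. A sketch, with the repository real but the claiming protocol simplified to a comment:

    // Sketch: list open pull requests on the real Cockpit repository,
    // which is what a verify machine scans for testable revisions.
    fetch("https://api.github.com/repos/cockpit-project/cockpit/pulls")
        .then(function (response) { return response.json(); })
        .then(function (pulls) {
            pulls.forEach(function (pull) {
                console.log("candidate:", pull.number, pull.head.sha);
                // a verify machine would then claim a task by posting a
                // "pending" commit status for its context on that sha,
                // and post results and logs back the same way
            });
        });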
But the idea is that if we sign a git tag upstream that's been continuously integrated with all these operating systems and tests, we can push those contributions and changes out instantly, within an hour, without anyone getting involved or penalizing their schedule or time. So in the last year we've done fifty-something releases. Each week we sign a tag and it becomes a release. That means someone who contributed something one week has users using it the next week. We push into the Fedora branches, for example, we push into COPR builds, and all of that. We want to go further with this: we've done it with Fedora, we're doing it with Debian, and we want to see some of this happen with RHEL too. So again, the real magic of continuous happens upstream, and happens before you merge. You can get benefits elsewhere, you can get some of the cool stuff elsewhere, but if you really want the magic, the oh-it's-so-awesome stuff, that happens upstream, before you merge.

And we're running out of time, so we won't talk much about the system APIs, though I'd love to talk about how easy it is to call system APIs: how to spawn processes, call D-Bus APIs, change stuff from Cockpit. The contributions that you heard Dominic and Peter talk about earlier, where people jumped in and did a pull request, those were easy, those were straightforward, and I encourage everyone to try, everyone to get involved. Here we're accessing a D-Bus API just from JavaScript, with the output on the command line; we're calling various functions, we're accessing properties; here we're changing the host name. So essentially the JavaScript running in the browser is part of your system login. There are examples where you can spawn a process trivially, with a single line of code, from the browser on the server, and handle the output. There are examples where you can read files, and all sorts of things; I wish we could talk about this more (a sketch of a few of these calls follows below). And there's even cool stuff like this: in this example code we can even load a GTK app into the browser, because of the Broadway stuff that Alex worked on. We have a real session, a real login that starts, and you have WebSocket support in GTK, so it's easy to bring that output into the browser and interact with it.

So I guess the point is: there are so many possibilities, we've taken away all the excuses, and we've all contributed all sorts of different things so far. What's going to happen next year? We have some ideas: the troubleshooting stuff, the container stuff is going to continue, making all that better. But a lot of the ideas and a lot of the driving happens when working with other projects, helping make those projects better, helping make discoverability better. So I hope that gave you some food for thought, some compelling ideas, interesting stuff. And we're open for questions.

Yeah, so the question is where you can go to see details of the release system. It's all at this GitHub URL. It's versioned together with the rest of Cockpit, because the tests change in sync with it. And that's an example of something where I'd like to figure out what people are interested in and document those parts better. So contact us on IRC or at this URL, and we'll do that.

How many engineers work on this project? So, the question is how many engineers work on this project. There's a Red Hat team; I think there are five of us. One of us is a designer, which is good, because we do design-driven development: all of these things happen design-first. And then there are four engineers. Lars is starting to help, so that's going to expand it.
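Since we skipped the live tour, here's the flavor of those system API calls as a sketch; the host name bus names are the standard systemd-hostnamed ones, and "myhost" is a placeholder:

    // Sketch: three of the cockpit.js system APIs mentioned above.

    // 1. D-Bus: change the static host name via systemd-hostnamed.
    var client = cockpit.dbus("org.freedesktop.hostname1",
                              { superuser: "try" });
    client.call("/org/freedesktop/hostname1", "org.freedesktop.hostname1",
                "SetStaticHostname", ["myhost", false]);

    // 2. Spawn a process on the server and handle the output.
    cockpit.spawn(["date"]).done(function (output) {
        console.log("server time: " + output);
    });

    // 3. Read a file from the server.
    cockpit.file("/etc/hostname").read().done(function (content) {
        console.log("hostname file says: " + content);
    });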
But this goes beyond just what the Cockpit team does or has done. These concepts are things that other people contribute to, and that's the kind of stuff that really excites us: when other people come and help, not just on Cockpit things, but with the goals of discoverability and usability, making the APIs better, starting to build APIs, working on the kind of stuff we've talked about.

Yes? The question is whether the SSH to another machine works with GSSAPI and those kinds of things. Cockpit has support for GSSAPI, and the SSH stuff has support for GSSAPI. What doesn't work yet, and what people are working on, is that you can't administer your system, in the default configuration, using a GSSAPI login. There are problems with, for example, escalating privileges with polkit, with systemd, with sudo. There are ways to get around all of these things by manually configuring them, but we really want to see it work by default. So a lot of the stuff is there, there's support for GSSAPI, but getting the story finished is not done.

Is there a way to integrate Cockpit views into my application, and how does that work? So, the question is about integrating Cockpit views into other applications. Much of the stuff you've seen here, in the various sections, are independent components already. We eat our own dog food in this area, and those components are integratable into other projects, and we have examples for doing that. You can do it trivially via iframe embedding (a rough sketch follows at the end), or you can get more sophisticated: certain of these things are Angular components, so you can actually share the code via Bower and other stuff, implement some of the connectivity for connecting to the API, and make that part work. The Kubernetes stuff, or the image registry stuff, goes all the way through what I'm talking about here: you can actually take those pieces, and we have put those pieces into other projects as code. But at the very highest level, every single part of Cockpit that you see here can be embedded in another application. So if you want a configuration management system that has a web UI, and you want to put an interactive tool in it for the network, or a terminal, or whatever, it's trivial to load that piece and render it inside the other app.

So, the question is whether any other projects are doing this. Projects are doing the second part, the part where we integrate code. And there have been examples of other projects integrating views, such as IPA; that was last year, actually, the IPA UI bringing in a terminal. But I don't know that anyone is actually embedding the full views yet. The point is that we do it ourselves, so we know it works, and I encourage anyone who wants to do that to do it. We even have an API to discover whether that's available on a target server or not.

Yes, there's a report-bug button in the SELinux wireframe, and the question is how it works. It's similar to the one in the GTK-based GUI: when you think something is broken in the SELinux policy, rather than just working around it on your own servers, you also report a bug. I believe that's how it works. You had a question? Sorry about that.
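And as a footnote on the embedding question: the trivial iframe case looks roughly like this. The component path here is a hypothetical example, since the exact URL depends on the Cockpit version and package:

    // Sketch: embed one Cockpit component inside another web app.
    // The component path is hypothetical; "admin-panel" is a placeholder
    // element in the embedding application.
    var frame = document.createElement("iframe");
    frame.src = "https://server.example.com:9090/cockpit/@localhost/system/terminal.html";
    document.getElementById("admin-panel").appendChild(frame);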