Oh, is everyone here? If not, trickle in. So next, we have a talk about whatever's on the screen, not technical. Sorry about that. It's done by Katie Miller and Steve, who's down there. He seems pretty cool. He'll pop in and out. And we'll just let them get to it. So big round of applause for these guys.

Thank you. Good afternoon, everyone. So I'm Katie Miller, and this is Steve Pousty. Steve, give everyone a wave. We both work at Red Hat. And about this time last year, we were both madly working on the content for this book that we co-authored, which is about Red Hat's open source platform as a service, OpenShift. Fast forward just one year to today, and there's a team of engineers madly working to make this into a coaster. The OpenShift engineering team is completely re-architecting the project. And we're going to talk a bit about that today and the different open source building blocks that are part of the new solution, namely Docker, Kubernetes, and Atomic.

We do have a fair bit to get through, so can I just ask that you please hold any questions till the end? We'll try to get to them if we can. Otherwise, we're willing to stay back and answer questions after this session. And we might even give away a free copy of this limited edition book, if you're lucky.

This has been a pretty challenging talk to prepare, because all of the things that we're going to be talking about, OpenShift itself and all of the components that are part of the new OpenShift, are still under development. So they're changing all the time. Just be aware that what you see today might not be exactly the way things look when there's finally a production-ready solution at the other end. The new version of OpenShift is currently an alpha. We're expecting a beta in the next few months.

Can I just get a show of hands? How many people are familiar with OpenShift, the current version, v2, version 2? OK, a smattering of people. That shouldn't matter too much for today if you're not. And how many people here are developers? OK, it looks like most folks. And what about sysadmins or other similar roles? Oh, maybe about half and half. All right, so there should be some stuff in here to interest both groups.

Before we dive in and look at OpenShift and the new components, though, I wanted to start off by just going back and thinking about some of the use cases, why this project exists in the first place. So imagine someone at your company has this idea for an application, and they happen to be able to code. So they write the code to actually make it so. Have a little bit of a think about what it takes to get an app like that actually running somewhere accessible. So what's the deployment process? What's the deployment style for that kind of thing in your company?

Here are some styles you might have seen before. We have the wait and see. You request your deployment, and then there's a lot of waiting while maybe some racking and stacking goes on. Maybe some binaries need to be installed on the server. And then you're finally able to do the deployment. And then you figure out there's all these environment issues. You go back and forth, and there's more waiting before finally, hooray, you're in production.
Or perhaps you might have seen the resource loop, where you request your server, and eventually you get it, and you do your deployment, you're in production, and then you figure out that you didn't really have sufficient resources for what you're trying to do, your app's failing under load, and then you go back to the start of the process again, back to the start of the loop. Or perhaps the procurement queue; you might have seen this in your organization, though they're probably not willing to admit it, where someone gets frustrated attempting to navigate the horrible IT service desk software, or they just get frustrated with all this waiting, and they just acquire a server and put it under a desk somewhere, and the app ends up in production, but it's not a project they're probably willing to talk to your ops people about. And finally, maybe you've seen something like this, where an app gets into production, but there's an epic fail, and you've got to call in your Chuck Norris type, whoever that is, to actually save the day and get it all running properly.

So, you know, while we all wish that our deployment processes were as smooth as a swift roundhouse kick or whatnot, there are all of these things that we tend to encounter. There's timeline issues, communication issues, efficiency, scalability, all of these things that can potentially go wrong in this process.

So what does a solution to some of these issues look like? Well, I think there's a few things that we're looking for. We want something that gives us the freedom as developers using the system to be able to choose the best tech stack for the job, whatever that job at hand may be, without any configuration headaches going along with that. We want deployments that are fast to do, that are easy to do, and that are reproducible across different environments. And we want the ability to do these deployments and have them be made up of lots of small, interconnected components that are wired together somehow. In other words, a microservices architecture. But of course, we also want the ability to deploy monoliths as well. And if we have this kind of architecture, the microservices style architecture, we want to be able to scale all of those pieces of the solution independently to meet changing demand. We don't want any more battles between dev and ops, so we want automation; we want a great deployment pipeline with our continuous integration and our continuous deployment in place. And finally, of course, we want our app to be secure in production. And when there are issues, so for example, when a Heartbleed comes along and there are things in our stack that we need to replace, we want to be able to respond to that quickly and rebuild all the stacks that depend on whatever it is that's now got a vulnerability.

So platform as a service is a way that we can address these things. What do I mean by platform as a service? Can I get a show of hands? How many people are familiar with this term? Okay, so most, not everyone. So just very quickly, so we're all on the same page with what we're talking about: this diagram is showing three different types of cloud services, infrastructure, platform, and software as a service. Everything in orange is the thing that the service provider is managing for you. And the things in blue are things that you're managing as a consumer of the service. So with software as a service, you get everything.
With infrastructure as a service, you get some things, but there's a lot you've got to maintain and keep up to date yourself: you've got to install your operating system, keep it patched on time, and whatnot. With platform as a service, the one in the middle, developers are still managing their code and deploying that, but we want a bunch of these other things in the stack to be abstracted away. As developers, we don't want to be having to worry about configuring whatever language runtime or app server. We want these things to just work out of the box and be tweaked when the need arises. So that's what we're aiming for.

So OpenShift is an example of one of these platforms as a service, but the term actually refers to three different things. The OpenShift we're going to be talking about today is the open source project, which is called OpenShift Origin. It's on GitHub, Apache 2 licensed. It feeds into two other things: we've got OpenShift Online, which is what you get when you go to OpenShift.com, and we've also got an enterprise version there.

So the current version of OpenShift we call version two, v2. Well, when we're talking about Origin specifically, it's milestone 4, or M4. And we actually are able to meet a lot of those things we talked about at the start with the current version. So at the moment, to get an app running in production, say you want Ruby and MySQL, and you want the DNS set up, you want to be able to SSH in, all of that: that's a one-line terminal command and maybe a one-minute wait. So we already have something that works pretty well, which begs the question, why are we rebuilding it?

So there's a couple of reasons for that. Firstly, Red Hat's been doing this for quite a while now. It's been more than three years, and OpenShift.com, the hosted PaaS, has more than 2 million apps deployed. So in that time, there's been a lot of lessons learned. So this is an opportunity to take those pain points that we've experienced and build something better. And secondly, as I've already alluded to, the technology in this space is moving really fast. So in that time, lots of new technologies have arisen, including obviously Docker and the whole ecosystem associated with that. So this is a chance to build upon those new, best-of-breed technologies to create a new, better PaaS experience.

So the new stack, or at least the piece of it we're going to talk about today, looks something like this. We've got Atomic or RHEL for our host, and then Docker laid on top, then Kubernetes, and OpenShift, which brings all of those together and also layers things on top. So today we're going to have a little look at each of the things in the stack, starting at the bottom. So I'm going to hand over to Steve now to kick us off with Atomic.

No, that just turned it off. Now it's green. Can everybody hear me? Okay. So okay, this is a Linux show. How many of you have heard of Atomic? Okay, that's actually pretty good, thank you. I've been at other Linux shows where only, like, two or three hands went up, and then I was very sad. The idea with Project Atomic is that Red Hat has actually gone back and touched the OS, like redone the way we package it. This is not a new Fedora; it's not Fedora or RHEL. This is actually rethinking the way we do RHEL, Fedora, and CentOS, and it's available for all three of those OSes. And the idea is change has happened. Think about how long RHEL has been out, or how long the way we use the Linux OS, the way we think of it, has been around.
It's been a pretty long time, and a lot has changed during that time period. So what's happened? First virtualization and then cloud. And cloud is a totally ambiguous term, so what I actually mean by cloud is lots of little machines running everywhere; there's no longer the big hulking server where you're gonna install one version of RHEL and then everything's gonna go on that server, right, that irreplaceable server. So that's changed quite a bit. And then there's been a lot of experience. First we got containers, and then there's been the growth of containers. And that's kind of changed the way people think about packaging up applications and delivering them. And so that kind of makes you think about how to redo the OS.

So one of the first things that's happened, and this is the least important to OpenShift but I wanna bring it up because I think it's actually pretty cool about Atomic, is rpm-ostree. Do people know about this one? This is actually for Linux sysadmins. This is actually probably one of the reasons why you would love Atomic in and of itself. Basically you build an image of the packages you want to install on your machine on another server, and then you apply them wholesale. And it's like a transaction. And so if something goes wrong, you can roll back that entire update with one rollback, much like Git, right? So you apply the update and then you can roll it back. And basically you're doing an atomic update of your OS, as opposed to a whole bunch of RPMs that can fail in the middle and leave you with a horked system.

But the real reason we like it is because it's a minimal system, right? It's got one of the best supported kernels in the world, and in container space, kernels matter a lot. I'm not saying, well, I said best when I wrote it up here, but I'll say it is one of the best supported kernels, and kernels actually matter. And then all the system utilities you need and nothing else. So it includes things, although maybe controversial to some still, like systemd and journald, right? It's got all the basic Linux functionality that you want: the kernel, and just how to run the system and how to boot it and do that kind of stuff. But nothing else. So there's no Postfix. When you're running an Atomic host, there's no Postfix. There's no DNS server. There's nothing, no services outside of the core services that are running on it. And the way you bring all that stuff in is you bring it through containers, right? So if you actually want to put that stuff onto your machine, you're doing it by bringing in a Postfix container, right? Or if you want to run Postgres, it's not installed already; you're bringing in a Postgres container. I just muted the screen. It's not fun.

So Atomic is built with containers in mind. There's a whole bunch of utilities in the base OS that it has there. The other thing about this is, and start getting your head around this now, this is the new way of delivering packages, right? One of the senior Linux engineers at Red Hat, when I was working with him, when I was speaking at DockerCon, he's like, I don't care about Docker, except for the very reason that we can get rid of RPM and have a better package management system, right? So when you want Postgres now, you won't do yum install postgres. You'll just bring a Postgres container to your machine. And it won't care about what version of glibc is running, because it'll bring its own, right?
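As a concrete sketch of both halves of that story, the transactional OS update and a service delivered as a container: the rpm-ostree commands below are the real tool names, while the postgres image and its POSTGRES_PASSWORD variable are just the usual illustration, not something from the talk.

    # Atomic host updates are image-based and transactional
    rpm-ostree status      # show the booted tree and any pending one
    rpm-ostree upgrade     # stage the new OS image; it takes effect on reboot
    rpm-ostree rollback    # flip back to the previous tree if something breaks

    # Services arrive as containers instead of RPMs, e.g. Postgres
    docker run -d --name db -e POSTGRES_PASSWORD=secret postgres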
And in this new Atomic, there's also a console for management of containers. It manages Atomic hosts, but you can also change things like the cgroup parameters on the fly from the web console and watch resource utilization change.

Okay, so that's Atomic. So what does that get us when we're trying to build a PaaS? You get a fast boot, right? Because we're only booting up, like, I don't know, I think it's about 700 RPMs total. That's it. You get container management and security right out of the box. So Red Hat has been one of the groups working on adding SELinux into Docker and making it really tightly managed inside of there. Oh, I used the, is it "speciality"? Can anybody name what animation that's from? "Speciality". I have to say it with kind of a British accent, though. And you imagine a sheep and a dog. Shaun, but it's not Shaun the Sheep; it's actually from A Close Shave, I think. "Windows are our speciality." And then it's got a great kernel. And I'm gonna tell you why kernels are really important coming up. So we've got a great kernel, a fast boot, and containers right out of the box with our OS.

How many of you have heard of CoreOS? Yeah, so this is Red Hat's kind of answer to CoreOS. And it's two different ways of approaching the same problem. Of course, you know which one I think is better, but I don't know that that's necessarily objectively better. But if you know CoreOS, this is the same idea.

So the next piece that's coming is containers. And this is Docker. And we've actually been doing containers for a long time; everybody knows that, right? There was lots of different container technology well before Docker. But what Docker really brought, and this is the part that is important, is an image. Docker actually gave a nice way of specifying, these are all the binaries I want to come up with this container and run inside of this container, in a very standardized way, and they got a lot of people to buy into it. It's based on Linux containers. It now also has things like SELinux and other things inside of it. It combines the file system layers into a union file system. So this file system is almost more like Git commits, and the only one that's writable is the top layer. And that's all I'm gonna say about it, because this is not an in-depth Docker talk. And it includes all the components necessary to run the process, store persistent data, or both.

Does everybody know Docker already? How many of you have actually tried and played around with Docker and read stuff about it? So I should go fast during this part, right? The Docker guy raised his hand. That's a good thing. I'm glad he raised his hand. But, so I can go relatively fast, who didn't? Well, I don't want to embarrass people. I'm just gonna go relatively fast, and you can have more questions to ask me later.

Okay, containers versus VMs, everybody gets the idea. The point that I want to make here, though, is this is why the kernel matters, right? Because everybody's sharing the kernel. So everybody likes to say the OS is irrelevant now. That's not true. What's irrelevant is all the packages on top of the kernel and the core operating system. But this core piece with containers is what matters quite a bit, because everybody's gonna be tapping into that kernel. And you say, oh, that's easy for the Red Hat guy to say. But it's actually quite true, and it's actually quite true as well if you think about how containers work, okay? And everybody gets why these are quicker, better, faster.
Anybody not get that? VMs basically are booting a whole bunch of different operating systems on top of the operating system.

So, running Docker operations. The first one, I'm gonna run docker. The image I'm gonna use is centos, and I want it to run /bin/bash. It runs. I could have done a whole bunch of commands, but I typed exit, and so the Docker container ends, right? I can watch it; I can do docker ps and it'll show me this is the last one that ran and what it ran, and so I can get information about what my containers are doing. And then if I did not ask it for a terminal, see how that's -t, that said bring up a terminal. If I didn't ask it to start a terminal, if I just said start this image, this container, it starts, it's running, but the only way I can get back to it is to attach to it, and then I can say exit and then it's done. Everybody's cool on that? We understand that stuff.

So one of the great things, though, is now we can actually diff containers. So in this first one, I'm gonna start up a container, basically the same one, centos, and I'm gonna give it a name rather than that auto-generated grave_newton. I'm actually gonna give it a name, and then I'm gonna yum install wget within my /bin/bash, and then I'm gonna exit. And now I can say docker diff add_wget and you get all the file changes, either the adds or the changes, which is actually pretty fricking cool. At an OS level, you get to see all the different files that were changed by whatever operations you did. Try doing that with a VM.

So containers can also run as daemons. Containers continue running unless whatever's inside exits or you tell them to stop: docker stop, the container name. And containers can be linked. So on a single host, you can actually link containers together. They can only see each other within the host, and it's done through environment variables and changes to /etc/hosts. But then you can actually have, like, this could be your typical web scenario, which is your app talking to your DB, and they talk back and forth, and then the cloud talks to it. So that's all possible with Docker, and they did a nice way of doing that as well.

So pros and cons. Extreme application portability: basically, as long as your kernel is up to date, you can take your Docker container and move it anywhere you want. Very easy to create and work with derivative images: you can take the centos one and then just layer stuff on top. Fast boot. Cons: it's host-centric. It's not aware of other machines in the network. There's no higher-level provisioning, at least in core Docker, right? You just provision individual containers. And there's no usage tracking or reporting.

So the wins for us with this, though, are efficient resource usage, right? But in OpenShift in the old coaster version, which is still the production version, we were using containers already. So this is not really a big win for us; we already knew what containers would give us. What actually is great is the bring-your-own-bits part, right? Each Docker container brings its own bits, so we don't have to worry that our version of glibc is different than the one that the person wants to use, or whatever dependencies they have. They just bring their Docker container and it'll just run. So that's actually great for us. It's a standard way for people to make container images. There's been containers for a long time, like I said, and there's been no standard way, so everybody kind of made containers differently.
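For reference, a rough reconstruction of the commands in that walkthrough; the names add_wget, web, db, and the app image myapp are stand-ins rather than the exact ones from the demo.

    # Run an interactive shell in a CentOS image; when the shell exits, so does the container
    docker run -i -t centos /bin/bash
    docker ps -a                       # list containers, including ones that have exited

    # Name the container, change something inside it, then diff the filesystem
    docker run -i -t --name add_wget centos /bin/bash
    #   (inside the container) yum install -y wget, then exit
    docker diff add_wget               # every file added or changed by those operations

    # Start without a terminal: it keeps running until you attach and exit, or stop it
    docker run -d -i --name web centos /bin/bash
    docker attach web                  # get back to it; typing exit ends it
    docker stop web                    # or just stop it by name

    # Link containers on one host: env vars and /etc/hosts entries wire them together
    docker run -d --name db postgres
    docker run -d --name app --link db:db myapp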
With Docker, we say just make a Docker container and bring it to OpenShift and it'll work, or bring it to our PaaS and it'll work. And because they've done such a nice job of doing this spec, they've got a huge ecosystem. And so the win that happens for us now is, yeah, we can ship our own Postgres container and we can ship our own Apache container and our own Nginx, but if you have one that you think is a better one, no problem. Or your IT department, when you use the enterprise version and you install this internally, your IT department can say, this is the blessed Apache stack for Docker, that's the only container you'll use, and everybody can agree on that. And you can run it locally and then you can run it on the server. So with that, I'm done with those wins, and now I'm gonna turn it over to Katie to talk about boats.

Thanks, Steve. So the next piece of the puzzle we're gonna look at is Kubernetes, which is a Greek word for pilot or helmsman or shipmaster. So this is a reference to the Docker analogy of the whale ship that holds all these nice standard-sized shipping containers, except now we're going one level up and we're looking at how we can deal with multiple whale ships, or multiple hosts in other words, especially when we wanna create applications that are comprised of multiple Docker containers. So the way that it's described in the docs is that it's a system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications. It has a declarative model. So you tell it the state that you want, so I want this many copies of this application or whatnot, and it goes away and makes it happen. It decides where those containers should be placed, which hosts; it does that replication; and it does any restarting or removal of containers as needed to give you that state that you've asked for.

So it's an open source project, of course, by Google, still a pre-production beta at the moment. It's on GitHub, as you'd expect. And if we have a look at some of the top contributors over the past seven months or so, there's a lot of Googlers there, as you'd expect, but also a couple of Red Hatters even among the top few. So this is also a project that Red Hat is actively very involved in. Red Hat has been pushing code upstream to support all the things that are needed in OpenShift. So they're doing a lot of work in this space.

So I'm not gonna have time today to go into any great detail or the finer points of Kubernetes and how it works, but I wanna briefly just cover some of the main concepts and components that I think are important. So first up, some of the concepts. A big one we have is the pod. So in the Docker world, we've just got these single containers. Kubernetes gives us the ability to co-locate containers. So we can now have a group of Docker containers which are all gonna be on the same host. They share an IP, and they also share storage volumes. So that's what we've got illustrated over here. A pod can just run a single container, but we may have multiple containers in there as well. The second important concept, one level up in the layers of abstraction, is a service. So this is a way of providing a single stable name and way of accessing a set of pods, and it also acts as a basic load balancer for those pods. So that's what's illustrated down the bottom here. So I've got two pods here that are both running JBoss containers.
We've got their IPs there, but we don't really care. All we care about is how we can find this particular service, which has been named web, and we can hit that and it can worry about where to route us from there, which pod is actually gonna service our request. The next important one is replication controllers. In the last section, I talked about Kubernetes having a declarative model. So the replication controller is the piece that actually manages the state of those pods and makes sure that we do have the number of replicas that we've requested at any point in time. And finally, we have labels, which are used to group these other components. So for example, a service will know which pods belong to it because they'll all have the same label, and it's the same sort of deal with replication controllers.

So, components. At the kind of top level, we've got a cluster; this is just all the compute resources that we're actually using to build all these other things on top of. And in our cluster, we're gonna have a number of nodes. These were previously known as minions in the Kubernetes project, but the name has since been changed to nodes, or Kubernetes nodes. And these are just the machines that are running Docker. So we may have many of these. They're running Docker, but they're also running the Kubernetes daemon, called the kubelet, and also the proxy service. So when we do hit one of these nodes and we're requesting a particular service, it's that proxy layer that knows how to route us to whichever pod is gonna be able to meet that request. We also have in our cluster one or potentially several masters. So this gives us all the management pieces for the cluster: the API server, the scheduler, a manager for all those replication controllers, and a few other things that are on here. And finally, etcd is also an important part of the system. It's a distributed key-value store, for anyone unfamiliar with it. There have been talks on it at this conference, so go watch some of those if you're interested to know more. And this is where the actual system state of Kubernetes is persisted.

So what are the wins out of Kubernetes for us in trying to build a PaaS? This gives us runtime and operational management of our containers. So now we're looking at a bigger unit, where we're able to co-locate containers and manage those as a unit. And we're also able to communicate across hosts. So with Docker, which you all seem to be very familiar with, you know we can do a Docker link. In the Kubernetes world, services are exposed with a given name, which is an environment variable in all our pods. So all our pods are able to contact whichever service they like, which means we now have communication going across hosts in this system. And all of this gives us a system that's available and scalable, and we've got all this automated deployment and monitoring going on. So we know that we can declare a system state that we want and it's going to eventually converge to that state. Again, across hosts now, rather than just dealing with a single host.

So we've looked a little bit at Atomic and Docker and Kubernetes. Now I want to talk about how we bring all of those together in OpenShift and what it brings on top. So basically what we're aiming at is Kubernetes being the container runtime. So this provides all the components we need to operationally manage all these containers. And on top of that, OpenShift is adding what we need to do our DevOps and our team environment.
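Before getting into what OpenShift layers on top, here's the pods-services-labels idea pinned down in a small sketch. Kubernetes was pre-1.0 at the time of this talk and its API shapes have churned since, so this uses the later v1 style, and the names are illustrative.

    # web-service.yaml: a stable name in front of whatever pods carry the label app=web
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web          # any pod labeled app=web backs this service
      ports:
      - port: 80          # the stable service port
        targetPort: 8080  # the container port inside the pods

    kubectl create -f web-service.yaml
    kubectl get pods -l app=web    # the pods the service will route to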
So it's a whole bunch of high-level tools to support particular common workflows that you're going to want, for example, pushing code through a pipeline from dev to production. The newer version of OpenShift is generally referred to as OpenShift 3, or v3, but if we're talking about Origin specifically, you might see it referenced as M5, or milestone 5.

So it brings a few more concepts to the table to support those things that I've just talked about. So we now have an idea of an application. Kubernetes allowed us to group containers together; now we're able to link these services, which talk to these pods, together to create an app. So for example, we might have an app that has a front-end service, which talks to one or several back-end services, and that might talk to a database service. So we're linking all these things together now, but we're still able to manage them as a single unit while simultaneously being able to scale them independently. So they're loosely coupled.

We also have the idea of a config. So this is how we actually deploy one of these applications. This is a collection of objects that are describing the pods, the services, the replication controllers that are going to be part of this app, and also things like environment variables. So that might be something like a shared key that you want exposed in all of the containers that become part of your app. It enables you to do that kind of thing, all described in one object. And so this is a whole application, all in one thing.

We also have a template. So this is something that can be turned into a config, but it's parameterized. So there might be certain things, again, perhaps a shared key, that you want to be generated as part of a template and then fed into your config. That's something you can do using this. So it's a way of creating patterns for how you create these applications, which can then be shared, and you can repeat this.

We have the idea of a build config. So this is an object that defines the source code that we'll use for our build; how we're going to do our change notification, so for example, the authentication for something like webhooks, so maybe every time we push code on GitHub, we want this to be rebuilt, and we would define that here; and also the build type we're going to use. And in OpenShift at the moment, there are two different build types we have access to: source-to-image, or STI, and Docker build. The one I'll be showing today is the Docker build. This references some source code that is expected to be the source of a Docker image, so it should have a Dockerfile that explains how to build it. So when that is built, it will just go ahead and follow the Dockerfile instructions to actually do that build. Source-to-image is something quite different. This is something a bit more like the OpenShift 2 cartridges model that we had previously. So what this enables you to do is to take some arbitrary source code that you've got, maybe your Git repo or whatnot, and combine it with a preexisting Docker image that's been created for that purpose, and it'll spit out at the end a new Docker image with those things put together. So for example, we might have a Docker image that's been prebuilt, where someone said, this is the image we go to, the blessed image, as Steve said, for Ruby or whatnot. So we use that, combine it with our source code, and out the end pops our actual Docker image that we're going to deploy.
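As a rough sketch of what that source-to-image flow looks like with the standalone sti tool: the repo URL and output tag here are invented, openshift/ruby-20-centos7 was one of the sample builder images, and the exact CLI may have changed since.

    # Combine arbitrary source with a prebuilt builder image; out pops a runnable image
    sti build https://github.com/example/my-ruby-app openshift/ruby-20-centos7 my-ruby-app

    # The result is an ordinary Docker image you can run anywhere
    docker run -p 8080:8080 my-ruby-app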
And finally, we have the concept of a deployment now. So this is both the image and the settings that go along with it, so the replication controller, and how that deployment is triggered: whether it's triggered when we do a manual promotion of an artifact, or whether it comes from a change in the image, if it detects a change in the image that we're using as the basis of it, or perhaps a change in the application code if we're doing an STI build. And then also we've got our deployment strategy, which at the moment, I think there's just one very basic one, which is recreate: so destroy what was there before, replace it with the new.

This is probably a little bit old now, but just to give you some idea of how all these pieces might come together, this is a diagram someone's done up of the OpenShift all-in-one. So if you go on your machine, docker run openshift/origin, that will suck down a container that has everything included, no config needed, to run an instance of OpenShift. And that includes Kubernetes. So we actually fork Kubernetes and it's bundled in, although we keep it pretty close to the upstream. So we've got here things that should now start to look a little bit familiar. We've got a node here; it's got a proxy, and we've got our kubelet and Docker and different pods. That's then talking to the Kubernetes master, but now we've also got an OpenShift master in the picture as well. So there's an API, of course, and controllers for the different builds and deployments. And of course, we have a command line tool, and there's also a web console for that.

So what does this give us? What are the features we're getting? This is now giving us the ability to build, manage, and deliver application descriptions, so those declarative sort of descriptions that I've talked about, and I'll show you one in a minute that will hopefully make that clearer. At scale, we can very easily do this in a repeatable way. We're able to turn source code into new deployable components. So this is a brand new thing. It's not just an existing Docker image anymore; we can actually just use a repo. We don't actually have to care about Docker at all when using the system, if we don't want to. And thirdly, the other big thing is support for these common workflows for supporting an application lifecycle, so going through dev and QA and production and whatnot, and also management for teams and all the different features you need to support that. So this is some of the things: for example, integration of CI/CD into Kubernetes; the ability to trigger builds intelligently from a lot of different sources, so either manually, or based on changes in the code or an image, like I was talking about earlier; and also support for the concept of a project. Actually, that's some code that's just dropped in the last week, to now actually have an object for a project, which gives you namespacing around a lot of these other objects, enabling multiple users who obviously can't see other teams' things. And there's also a networking piece, with default network isolation for that as well. This is still being worked on at this time, so it's not completed just yet.

So now it's time to actually show you all of this, though actually you've been looking at it all along, because these slides are actually running on OpenShift. So if I go out of this funny image and go to my console, this is just the log output from OpenShift that's running locally there.
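For anyone wanting to reproduce a setup like the one in the demo, roughly how that local all-in-one gets started and queried; these flags are from memory of the alpha-era instructions and may well have changed.

    # Pull and start the all-in-one Origin container
    docker run -d --name origin --privileged --net=host \
        -v /var/run/docker.sock:/var/run/docker.sock \
        openshift/origin start

    # Then inspect the running system with the osc client, as in the demo
    osc get services
    osc get pods
    osc get builds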
So the app that's running is, and I'll just kick this build off, I'll explain what that was in a second. The app that's running is Reveal.js within a little application our colleague wrote called gist-reveal.it. So it's just hosted Reveal.js, but pulling the slide source from Gists. So that's the app that we're building. What I've just done there, the curl command, is just to actually trigger a build. So this is what you could have happening through webhooks when you push code to GitHub. I earlier pushed some code to GitHub, and I'm now just manually triggering that build.

But I'll show you some of the pieces of the system. So osc is the command line tool we can use to have a look at these things. So if I osc get services, we see we currently have a service called GistRevealService, and it's running on an IP address that ends here in 156. If I flip back over to my slides, you can see that's what we're referencing here. I can also osc get pods to see what pods are being pointed to. We've got a lot of wrapping here, so it's probably a bit tricky to see, but you can see we've got two things that are running. Actually multiple, that's interesting. So we've caught the system in a state where we've got, oh, the build, okay. We've got the build running, plus two instances of our gist-reveal app service here. So you can see that the build is running, and I can also osc get builds. Okay, so we've got one here that's running, and that's its UID. So I can also grab that and actually have a look at what's going on. Well, this is a Node build, so we've got a lot of npm going on. If I scroll up to the top, you'll see some of the Docker part. So this is a Docker build, remember? So it's building a Docker image from a Dockerfile, using the cache for a lot of it. But for the source code, we're actually rebuilding, pulling down all those npm dependencies.

So I'll get out of that for a sec and just flip over to the web console. Well, this is all very new, so a lot of work still to be done here, but we can see again our service that's being pointed at, and the fact that we've got two pods that are being supported by that service, and that there is a build, I think it should tell us, running at the moment. Oh, here we go. So the build is running, and we're gonna get a new deployment triggered automatically at the end of that.

And to show you why that is so, actually, one more thing I wanted to show quickly was just that Docker is still running here and is behind all of this. So if I do my docker ps, there's a lot of stuff there. But we can go up to the top, and this is the image that's actually doing the build that's running right now. And we've also got this gist-reveal image; these are the pods that are running. So you've still got all these Docker containers running here behind the scenes. It's not that Docker's gone away; we've just got these other things built on top.

So I was gonna show the config that I actually used to kick all this off. Starting at the top, this is a pretty long JSON file, so I'll just go through a few little bits. So this whole thing is an example of a template. We've got a whole bunch of parameters. So here's an example of creating a secret; that's something that's being generated and passed into the rest. If I go down a little bit further, we can see the configuration of this service object: importantly, the port that it's on, and the container port inside the containers that it's gonna be referencing. We also have, I'm gonna scroll down, a build config.
So this is where we're defining that we can have our webhook from GitHub, and also just a generic URL, which is what I hit to get this build going; the actual source that's gonna be used for that; and the strategy of the build, the fact that we're expecting to find source code with a Dockerfile at that address. And we also have in here some deployment config. So I've got the type of triggers for our deployments; in this case, when the image changes, it's gonna automatically do the deploy, that's what we're configuring this to do; and the strategy for what it wants to do with those containers, recreate them; and how many replicas we want; and some different things about the pods. There's a bunch of different things in this JSON. At the moment, with OpenShift, to get this kind of thing running, you do have to actually write this JSON, obviously based on an example. By the time OpenShift comes out, of course, we wanna abstract all of that away, so it would be a one-line terminal command to generate this kind of JSON file.

So hopefully, let's see how this is going. Complete, okay, so our build is finished, and hopefully we'll see that on here as well. It's complete, but the IPs are still being allocated on the pods, so we won't be able to see it just yet. So the code change that I pushed to GitHub that'll be in this build was removing this image, which is part of the core template that's actually part of the app code. So if I refresh now, that should disappear. Hopefully; I'm relying on the internet here. I might refresh again, because that always helps, doesn't it? When it actually loads, it is in fact gone. So it worked, yay.

So I mean, that is all still a little bit rough right now. As you can see, you don't really wanna be manually making these kinds of config files, but you can see kind of where we're going in the future. We're gonna be able to abstract away a lot of those things.

So what have we got from all of this? What are the big wins? If you think back to what we talked about at the start and those different aims, I think you'll see how a lot of this correlates. We now have the ability to build a single artifact which contains a whole dependency chain. We've got a Docker image, and also all the environment variables and other config that go along with that. So we've got a fully reproducible deployment. We've got the ability to share these common technology stacks through Docker images, but also the patterns for rolling out these changes. So with our templating system, we can very efficiently manage lots and lots of applications. I didn't really talk about this, but autoscaling is also something that's part of this picture. So I mentioned with Kubernetes that there's some simple load balancing that the service does between the pods. OpenShift is also adding an autoscaling piece at a higher level than that: load balancing between different services with HAProxy, or you can plug in something else as well, but the one we'll be looking at first is HAProxy. So we've got autoscaling on top of that as well, like we do in the current version of OpenShift, and managing these things independently, of course. So we can scale up just one type of pod, for example. Also, now, the ability to update en masse. So when there is some kind of problem in an image, we can just rebuild the whole lot. It makes it very easy to provision resources at scale and to subdivide them among teams. We've got that network isolation and the other things I mentioned. And we've now got a system that is change aware.
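To pin down what that change awareness looks like in practice, here is roughly the shape of a generic build trigger and the call that fires it; the object name, secret, and the alpha-era URL path are all illustrative, not canonical.

    # In the template's buildConfig, a generic webhook trigger looks something like:
    #   "triggers": [
    #     { "type": "generic", "generic": { "secret": "secret101" } }
    #   ]
    # Hitting the matching hook URL kicks off a build; a successful build updates the
    # image, which the deployment config then notices and redeploys
    curl -X POST http://localhost:8080/osapi/v1beta1/buildConfigHooks/gist-reveal/secret101/generic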
So we saw all those different triggers, changes in the image and the source code and whatnot. So we can automatically have things being rebuilt, spun up, updated, without having to actually manually go in and do any of that, unless we should want to. So this gives us a very repeatable, automated process for our builds and deployments. And of course, all the open source goodness, and all helping us to get more synergy between dev and ops. So I think we've met our goals, and we're all happy. Chuck Norris approves. With that, I'll hand it over to Steve to sum up for us.

Not sure why we're not getting a proper full screen. So I'm imagining most of your heads are full right now. Is that a pretty good assumption? No? Then come work for Red Hat; that's awesome that you could follow all of that. I mean, it was really hard for me to get my head around this entire technology stack and to understand it all here today. So our goal wasn't that you are now Kubernetes, Docker, OpenShift experts, just to give you more of a vision of where we're going, to get you to think about how this can help your process. Like, so Katie fired off that build. How many people were involved with that build and deploy? This is not a trick question. You saw it all yourself. No one snuck up here. Other than, so Katie sends a URL, and then how many people were involved in that process? Zero. Thank you. You just won the book. Who said, who's that that said zero? Oh, there's three who are claiming it. I think in Australia, you gotta do that, what's that called, the Maori war dance, and see who, like, you have to do a face-off to see who wins. You guys can talk about it. He was the guy in the front. You guys have a book. All right.

And I don't want anyone to think that when PaaS comes out, because not all of you had heard about it, sysadmins go away, right? Some people have been saying, oh, PaaS comes and sysadmins go away. And that's absolutely not true. Not. Because who actually installed all this stuff, set up the policies, and, like, set up all the software? Who did that? What? I'm bad with translating. I know you're saying something that everybody else here understands, but my American filter is not getting it. No, no, Red Hat, no, no, no, no, no, no. You need a sysadmin to actually install all this software, right? And you're gonna need a sysadmin to keep these machines going. What this actually ends up doing is getting people out of each other's hair, right? And that's the key here.

So, where's my clicker? In conclusion, we covered a lot. I already covered that one. The other part is, for us, and by us now I don't just mean us here in the audience, I'm talking about Red Hat. We've got five minutes, plenty of time. For us, it's the Linux story all over again, right? Red Hat and Linux distributions, that's not just one company owning it, even the kernel, right? No one owns the kernel, except for, does Linus actually own the copyright on the whole thing? You can see how much I care about copyright stuff. But so the idea is, what Red Hat is doing here again is what we did with the distro before, which is, we're taking, oh, that's a great solution for this, and that's a great solution for this. We don't have to build the whole thing. Oh, but not only is it a great solution, we're going to put our money where our mouth is, and we're actually going to get engineers to contribute to that project. It's not like, oh, let's just leech off the community and build the best thing possible, taking all your hard work.
We actually give back, right? And so again, this is what we're doing with OpenShift. And, how many of you want to learn Go or play around with Go? This is a great, I saw that head shake, I agree with you about no on Go, just for me, I'm a web developer. But if you are interested, this is a great project to get started with. Our community's friendly, it's all being written in Go, you get to learn a lot of really interesting stuff, and it's all Apache 2 licensed. I don't even think we have a contributor agreement. There's none of that, right? It's just come and play. There's Trello cards, where you can read how all the stories are going. There's public PEPs, and I'll give you URLs for those in a minute, right? So one of the main messages is: you've seen the beginnings, it's going to get a lot better, and we would love to have everybody come play with it and work with us on it.

And then your world as a sysadmin or developer is just going to get even better, right? So you get to now use containers to have an agreed-upon way between both of you to manage server bits. No longer do you have to write this long Chef or Puppet, well, I like Chef and Puppet, but you don't have to write a long Chef or Puppet script that you have to teach to your developers, and then they have to install that on their local machine, and then that's different, perhaps, on the server. What happens here is, your IT shop and you work together to produce a container that you like, the developer uses that locally, and then deploys that into OpenShift or into the PaaS, right? And you can all be agreed on that. So, developers, yes, you're giving up some freedom, but for some of that freedom, you get to not have to talk to a sysadmin every time you want to try some new idea, right? Which I think is actually a pretty big win.

We can now automate some of the annoying things, right? So when Katie spun up the entire application, it was a command line. To create an application, that was a command line. Did she have to touch any of the Linux permissions? Did she have to provision a user? Did she have to do any of that kind of stuff? I'm assuming for most sysadmins that's, like, the worst possible stuff you ever have to do: create a new user, then have the new user come back to you and say, oh, you forgot to set the permissions properly on this folder, can you please do that? Oh, you don't have this binary that I really need, can you please install that now? And so it gets each of us out of each other's hair, right? Or the auto-scaling that Katie talked about: when load comes into the server, HAProxy detects it, spins up another service, automatically plugs it into the load balancer. So, yeah, you wear a beeper, but you don't have to listen to it as often, okay?

And then another, I actually think this is a pretty cool feature, which I didn't appreciate until I started using PaaSes more, which is the templating of an entire application. So when a new person comes to your team, I've got one minute, when a new person comes to your team, basically you just say, here's the template, and it spins up everything and they're good to go. That's infrastructure as code. And the nice part, coming back to what I was saying about Red Hat, is that OpenShift finally packages all this technology into one nice package. So you just fire up OpenShift and you get Kubernetes firing up, you get etcd firing up.
It's not having to install and hook all those pieces together and figure out how to get them to talk the right way. So that's, I think, the biggest benefit of OpenShift right there. And so with that, here's the references. The ones that are specific to OpenShift are these. Actually, when you look at these, these three are the sources of canonical truth. This one is probably out of date by now, this one is definitely out of date by now, and this one is pretty out of date by now as well. So things are moving pretty rapidly. They give you some of the basic ideas, but if you look at specific commands or specific namings, like, this is a Kubernetes minion, it's not a minion anymore, and if you go looking for that in the source code, you won't find it anymore. So use these for the basic ideas, but then these are the sources of truth, okay? And there'll be more documentation and stuff coming out. And here's where to read more about Kubernetes. I'm assuming Kubernetes is pretty new to this crowd, is that correct? Yeah, I mean, everybody's played with Docker, but Kubernetes is pretty abstract and new. And there's Docker, if you don't know where to find it.

And with that, we're done. And what else? Slides. Oh, thanks, Katie. This is why she gets paid the big bucks. There's the link to the slides, so you can bookmark those or take a picture of that slide. And this will stay up as long as nothing terrible and tragic happens to Katie, so be nice to her when she leaves, okay? And there's where, if you wanna say, that was the worst talk ever, you can just, there's me on Twitter.

How many of you are Ingress players? We're the last talk of the day. All I'm stopping you from is beer, so you can get up and walk out whenever. How many of you are Ingress players? I saw you, Docker guy, try to get up. We're switching to Rocket. Everything we said here, we're switching to Rocket now. Docker guy, leaving in the middle of my talk. Okay, how many of you are Ingress players? Blue or green? Blue or green? Yes.