Good morning, folks. How are things going today? Excellent, excellent. So I'm sure that at least some of you saw the kind of stuff that we were talking about during the keynote, which is pretty interesting, because most folks don't think of OpenStack as just a series of applications. They think of it as this kind of intense infrastructure that you install and curate so that you can then not care about the things above OpenStack, so being in the position where you are not even caring about OpenStack and treating it ephemerally is a little bit of a mind-bending situation. So jumping in here, that's me only. That's not really me. That's me. I don't go by Harrington. Just call me Redbeard. That's the simplest way that we can get started here. So a quick straw poll: who here has ever booted a CoreOS instance? OK, so we have got a bunch of folks who are familiar with that. That is a wonderful place to start from. I'll get folks comfortable here; we're going to be fluidly talking about a lot of different ideas. I'm going to be showing you some stuff that will make it easier to keep CoreOS running, and keep it running well, on top of OpenStack. There's a little bit of code that I've contributed into the CoreOS repos specifically because, historically, I was the guy who ended up dealing with OpenStack the most because of my background. I used to work at Red Hat and was a subject matter expert on OpenStack on the consulting team, and I've done a bunch of contributions there. So when it comes to containers, the pejorative comment that I make is: who gets excited about tarballs? Because in reality, that's all that Linux containerization really is. You can try to think about it otherwise, but it's really just a tarball. We like to laugh at folks like Richard Stallman for saying, it's not Linux, it's GNU/Linux. But he's right. As much as we get our giggles in, he's right. Linux is just a kernel. And in order to have an operating system, you have to have a kernel and a userland. And in our case, that userland is CoreOS, but that userland could be Ubuntu from Canonical. It could be Arch. It could be Red Hat. So to continue through this a little bit, we're just going to plow through a quick recap of some of these things and make sure that everybody's on the same page here. So consider this containerization 101. For most of you who have touched containers up to this point, or even heard about them, when you think containerization, you probably think of that whale. And that's not entirely inappropriate. Docker has been a great first step for folks actually testing this out. But containerization goes far beyond Docker. This goes back well, well into the 90s with Solaris working on Zones and FreeBSD working on Jails, and even further back to the idea of IBM with LPARs, and I think they also, far after my days of dealing with AIX, introduced WPARs and such. But again, this is all containerization 101, or 100 at that point. So next step down, again, plowing through these real fast: this is what most people think of as a traditional Linux distro. Everything in black is the stuff that you, or that the distro, cares about and that they promise to manage for you. And everything in white is the stuff that you are supposed to care about. They claim you shouldn't care about the version of Nginx, just use what we ship. You shouldn't care about the version of MySQL.
But in reality, your application does care about these things. It cares about them very, very deeply. It wants to know that you are using a specific version of Python, which is why you end up having things like virtualenv being created, and the equivalents for other languages. So we took that idea and just stripped down what was in the base operating system to get to a position where you can now package your application with the version of MySQL that you need, or with the version of Python that you need. And that touches on an important point: when you do this, containers are not lightweight VMs. Be very clear; sure, we saw evidence of running VMs inside of containers and that's a totally useful design paradigm, but containers themselves are not VMs. They serve a very, very different purpose. And the mechanism of how they're doing this, and what makes a container different from a VM, is largely based around the idea of kernel namespaces. Let's talk about this real fast. So we have a host. And in that host, we have a userland. And the userland is all of the things that you want to run on that host. And then under that, we have a kernel, because we have to have a kernel to provide some ABI and execution contract to be able to run everything from that userland. And that userland might be Red Hat, or it might be Debian, or it could be CoreOS, as I mentioned. In automotive terms, the kernel is just the engine, and the userland is what makes it a Honda or a Ford or a Porsche. So let's get back to this. You have a userland and you have a kernel. And inside that userland, you have things that you are running: processes which are listening on ports and have assigned UIDs and maybe a network address. And it is all of these things together that provide the utility of a running system. But back in 2006, the engineers at Google had a vision that they began sharing with the world. Since the kernel is the most important part of a Linux host, they were imagining whether you could do more with it and kind of emulate the Solaris Zones idea, or, taking a bit of a middle ground, the OpenVZ/Virtuozzo idea from SWsoft, now called Parallels. Google took these ideas of having a single kernel with all of the abstracted things that you need in each userland and standardized on it. And they worked with folks from IBM and built on previous work within the kernel that started around 2001 in the namespaces area, specifically with the 2.4.19 kernel, to make this more of a reality. So again, in 2001 we have mount namespaces added. This gives you the ability to have two separate views of the file system with similar sets of processes. And then in 2002, that expanded to network namespaces. And as we get to the point of having different sets of namespaces, we can have different views of the system, meaning that you can have processes with their own IPs in separate locations, all while sharing the same kernel. Now again, any kernel can run with any kind of userland here. You've got a 4.0 kernel and a CentOS 5 userland? Great. A 3.18 kernel and Debian Jessie? That works too. The important portion there is what version of glibc you have inside of it. What most people don't realize, though, is that you can share these namespaces. And this is the core idea of a pod in Kubernetes, which we'll be talking about again in a little bit.
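To make that concrete, here is a minimal sketch of poking at namespaces straight from a shell, assuming a reasonably recent util-linux so that unshare supports these flags:

    # enter fresh mount/UTS/IPC/network/PID namespaces; --mount-proc remounts
    # /proc so tools like ps only see processes in the new PID namespace
    sudo unshare --mount --uts --ipc --net --pid --fork --mount-proc /bin/bash

    # inside: only a loopback device, and our shell is PID 1
    ip -o addr        # shows just "lo", none of the host's interfaces
    ps -ef            # PID 1 is the bash we just started

    # meanwhile, from another shell on the host, that same bash is still visible
    # as an ordinary process with a normal host PID
    ps -ef | grep unshare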
A pod is the idea that you can begin to compose multiple containers together to comprise an entire application. This means that you can have Apache running in one process and then, in the same pod, a separate container running, for example, Let's Encrypt. So now Let's Encrypt does not have a view over the entire file system inside of the Apache container or inside of the Nginx container, and can only modify /data/.well-known to be able to write in the private keys and the certificates needed to serve TLS content. And what this results in is an extremely high degree of composability for how you put these pieces together. So using this model, you can even put them together in a way that allows you to use the processes that you need, how you need to use them. So if there's a specific tool or a specific set of libraries that you needed inside of a Red Hat host to run, for example, ActiveMQ, which is a little bit of a stretch because you really just need Java for that, you can do that inside of a Red Hat userland. And at the same time, if there's some application that's packaged for Debian but not for Red Hat and you just wanna take the path of least resistance, you can spin up a Debian userland and do a native apt-get install of that .deb package. And similarly with CoreOS, you could run a CoreOS instance and then run additional containers on top of that. But this is an important thing to note here: Linux is not Unix. One of the major differences is the idea of capabilities. Capabilities is one of those things that happens for you under the hood automatically with containerization, and it leads to situations where, if you're not familiar with capabilities, you don't necessarily understand how some of the sausage is being made. So, from a historic perspective, this is what people think of as root: if you are UID 0, allow things to happen. If not UID 0, deny that process from executing. So let's decompose this and think about what happens when you ping something. Really simple, common practice that folks do on a day-to-day basis, whether they are a developer or a sysadmin; it's part of the pattern of how they make sure that they have correct network connectivity. But when we think about what a ping command actually does: you open a raw network socket. You emit an ICMP echo request. You hold that socket open. You wait for, hopefully, an ICMP echo reply. You may not get an echo reply, but we hope that you do. And then you close the socket. But because ICMP is not TCP, there is no port number there. So it's not like you can just go, oh, if the port number is greater than 1024, allow a non-privileged user to run that. And this became, in 1989 within System V Release 4, the basis for setuid. They added this facility to say, for the duration of the execution of this process, we are going to allow a user to temporarily become root. Now, this is actually done by just storing additional metadata on disk, as part of the inode. And if an attacker got fancy or found additional mechanisms through which they could modify the underlying content on the disk, it presents its own set of challenges, because now they can actually change the payload for the binary. They can change the payload of where that inode actually points. And they have a backdoor to root escalation.
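Capabilities, which come up next, are the way out of that all-or-nothing model. As a rough illustration, on a typical distro with the libcap tools installed and ping at /usr/bin/ping, you could grant ping just the one privilege it needs instead of the setuid bit:

    # classic approach: ping is setuid root, so anyone can open the raw socket
    ls -l /usr/bin/ping              # -rwsr-xr-x root root ... /usr/bin/ping

    # capability approach: drop the setuid bit and grant only CAP_NET_RAW
    sudo chmod u-s /usr/bin/ping
    sudo setcap cap_net_raw+ep /usr/bin/ping
    getcap /usr/bin/ping             # /usr/bin/ping = cap_net_raw+ep
    ping -c 1 8.8.8.8                # still works for an unprivileged user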
So to begin fighting against this setuid problem, as of the 2.2 kernel, so for those of you who don't follow kernel news, that's really ancient, this is like circa 1996, so we're talking 20 years ago, they began working on fixing this. And they added capabilities, among them CAP_CHOWN and CAP_KILL. CAP_CHOWN allows you to make arbitrary changes to file UIDs and GIDs, and CAP_KILL bypasses the permission checks for sending a signal to a process, and that includes some ioctls and a few other things. In the 2.4 kernel, they continued expanding and making the types of changes that you can grant to a process more and more discrete. Now you have the ability to endow a regular user with making device nodes on the host and even establishing leases on arbitrary files. And in 2.6.37 we get CAP_SYSLOG, and later CAP_WAKE_ALARM and CAP_BLOCK_SUSPEND. But in the end, this is not perfect. There is still a lot of room to grow on this. And that comes mainly from this idea of CAP_SYS_ADMIN. CAP_SYS_ADMIN is this random bucket of "we don't know what to do with this permission." So if you have CAP_SYS_ADMIN endowed to your container, you're gonna be able to perform vm86 calls, or do IPC operations, or call mount commands. And mount commands are the biggest thing that most people end up endowing a container with CAP_SYS_ADMIN for, because they decide that they need to be able to mount something inside of the container rather than mounting it at the host level and providing a path to that inside the container. I'll sketch that difference in just a minute. So at this point, let's actually see some more of these things in practice, explain a little bit more, and make sure that everybody can follow along. So first, let me make that bigger. Everything's good. We're gonna do a little bit of this PS1= so I don't distract people with my hostname. And then we are going to fire off... well, and then this is just gonna do it again. So we do a tmux export, do that. And then over here, just to make it easier for me to see, I'm gonna attach tmux over here. There we go. So now I can see and actually face y'all and we're good to go. Anyone need this any bigger to be able to see? Man, I just have y'all rapt. This is great. So I'm gonna start off by just bouncing into the CoreOS host here. And it immediately is going to show me a few interesting bits of status. So one, I'm running a CoreOS Alpha image, and we see that I have some failed units and these weird sshd things. This is not stuff that you are normally used to seeing, I would assume, on a Linux host when you log in. So I've specifically deployed this running wide open on the public internet, no firewall rules or anything, because I wanted to be able to demonstrate some interesting things that are going on here, and differences in how CoreOS operates. So first and foremost, let's take one brief moment to touch on some of what makes this additionally different. You heard me talk about in-the-weeds things like capabilities and namespaces, but nothing of what this actually means, nothing showing you what the actual ramifications of this are. And that's actually really, really important. So in the process of doing this, like I was saying, a CoreOS host comes with containerization mechanisms built in out of the box. So we can do a docker run without having to install any additional pieces of software. So we go through and do a docker run of busybox and fire off /bin/ash. And just like that, we're now running inside of a container. I assume that this is old hat for most of the folks here.
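On that CAP_SYS_ADMIN point, here is a sketch of the difference between granting the capability so the container can mount things itself and doing the mount on the host instead; the exact behavior depends on your Docker version and its seccomp/AppArmor profiles:

    # heavy-handed: grant CAP_SYS_ADMIN so the container can call mount(2) itself
    docker run --rm -it --cap-add SYS_ADMIN busybox \
        sh -c 'mount -t tmpfs tmpfs /mnt && df /mnt'

    # usually better: mount on the host, hand the container just a path, and keep
    # the default (much smaller) capability set
    sudo mkdir -p /var/lib/scratch
    sudo mount -t tmpfs tmpfs /var/lib/scratch
    docker run --rm -it -v /var/lib/scratch:/mnt busybox df /mnt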
But for anyone for whom it isn't old hat yet: when you are running inside of a container, like we said, you have a concept of a PID namespace, and I'll bring this up a little bit. So inside this namespace, my PID 1 is actually the command that I used to start the container. I do not have an init system in this. An init system is the piece of a Linux host that starts and then fires off all your additional processes. It's what fires off your gettys to make sure that you can log in via a terminal. It's what starts SSH. It's what is going to run Apache and spawn bash and everything else. And what I'm gonna do here is just run a sleep command. And I'm specifically doing that so that I can jump out of the container and show you that at the host level, you can still see the processes that are running inside of the container. So that PID 1 of /bin/ash that we saw just a moment ago, PID 1 inside the container, is actually PID 31127 outside, and it has a child process of sleep 1234. And if I begin running multiple containers on this host, when I am inside of a container I have one view of the world, and when I am outside of the container I have a different view. Now, this becomes doubly important, because I will go ahead and use some of the primitives that some of y'all are used to seeing, and then we'll talk about what can be different here. So a lot of folks know that you run a docker exec and some command to jump into a container. So I run a docker exec with /bin/ash and... did I not give it enough of the path? Well, the way that I would normally do this anyway is using nsenter. nsenter is a tool that's built into the standard Linux toolkit. It exists on every single Linux host; you just have never had a need to run it. So when we are looking at running nsenter, you give it a target process, like that PID 31127 that I mentioned, as the set of namespaces that I want to be in. And now I get a whole array of choices. So I'm just going to do that real fast and see where it went. So now, once again, I'm back inside of this container, only now, because I spawned a second /bin/ash process, we're going to see this as well as my sleep command. You know, we still have PID 8. The reason why I jumped in this way is because if we look at the addresses on the host, we see that I have this 172.17.0.2 address and it's attached to my eth0 interface. All standard things that folks are used to seeing when it comes to a Linux host. Though if I exit out, one of the things that nsenter does is allow me to selectively pick and choose the namespaces I want to be in when it comes to running a container. Now, for anyone who's had to troubleshoot nova networking, or sorry, neutron networking, or even worse, quantum back in the day, nsenter is a tool that you may be familiar with, though you probably just as often got along with using ip netns and things like that. But as a part of this, I can tell it, no, I do not want to enter the network namespace, so remove the -n. And now I jump back into the same container. I run the same commands, actually I can even just hit up-arrow, and I see all my processes there. But now, when I run my ip -o addr, I am not seeing a view of the networking from inside of the container, because I did not selectively enter into the network namespace of the container. So I see the networking of the underlying host.
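For reference, the shape of that nsenter invocation looks roughly like this, using the container PID from the demo; docker inspect is one way to find it:

    # find the host PID of the container's PID 1
    docker inspect --format '{{.State.Pid}}' <container-id>

    # enter all of that process's namespaces
    sudo nsenter --target 31127 --mount --uts --ipc --net --pid /bin/ash

    # enter everything except the network namespace: same filesystem and process
    # view as the container, but "ip -o addr" now shows the host's interfaces
    sudo nsenter --target 31127 --mount --uts --ipc --pid /bin/ash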
So in that view I see the docker0 bridge that the container's IP address was allocated on, but I also see the private address from the host. I see my public IPv6 addresses for this host, and that allows me, as I mentioned, a greater degree of composability. It also means that now I can selectively pick and choose the tools that I need to operate with each container. As an example, inside of CoreOS we have a very, very simple shell script called toolbox. Toolbox spawns the second type of containerization engine that I'm going to talk about here: it spawns a systemd-nspawn container. Now, systemd-nspawn is containerization that has existed as a part of systemd for many years at this point. It is actually a containerization engine that has existed far, far longer than Docker has. And what this toolbox script is actually doing under the hood is just simplifying the process of retrieving a tarball of a userland to then spawn systemd-nspawn with. And when we actually look at this, it defaults to a Fedora image sitting on the Docker hub, and it defaults to the latest tag. If I wanted to change how this operated, I could overload those environment variables, and there's a sketch of that coming up. So I could say I wanna spawn toolbox with a Debian image or with a Java image, et cetera. And then in the end, it just takes and actually runs this kind of thing under the hood. So I'm just gonna start with this by default. And just like that, I am now in this toolbox container. So you see, it spawned a container with the name core-fedora-latest and it pulled in the path from there. Now, when I go through and say cat /etc/os-release, it says that I'm running Fedora 23. And when I do a dnf install of httpd, it's going to dnf install that. Now, it's also gonna have to update all the dnf metadata, but in this way, if I go through and create a second instance here, I now see that I actually have this nspawn process running and it has bind mounted in some additional paths. So we see that my user path from the underlying host is now exposed at /media/root, and my root directory is just in /media/root. So if I go back to the container here, and I can spell httpd correctly... So I go through and dnf install all the commands that I need, and I can systemctl start httpd. Nope, in that case, systemctl cat httpd. So there are a lot of systemctl commands that you can end up working with, and all of them are extremely helpful. We're gonna modify some things with drop-ins and stuff here in a minute, but I can start from this container and just run httpd -DFOREGROUND. And so now this is listening on all ports, or on all IPs. So if I go back and ip -o addr, and then I grab the public IP address of this host, and then I spawn a new window, we go to that on port 80, and we see that my Fedora test page is running. Now, even though I'm doing this from inside my Fedora container, like I said, I can go back to /media/root and I can get in and change to /home/core and see all my files, and see work that I've been doing with HANA and some other utilities, and say touch newfile. And now outside of the container, my newfile is there. It's also important to note that within this container, because this is nspawn and I've told it that I don't want to split out any of my namespaces, I can still see the entire process list of the host, despite the fact, again, that I'm running inside of this container.
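Coming back to those toolbox environment variables: a sketch of what overloading them can look like. The variable names here come from the /usr/bin/toolbox shell script as I understand it, so check the copy on your release before relying on them:

    # default: pull fedora:latest from the Docker hub and nspawn into it
    toolbox

    # one-off override to a different userland
    TOOLBOX_DOCKER_IMAGE=debian TOOLBOX_DOCKER_TAG=jessie toolbox

    # or persist the override for the core user
    cat > ~/.toolboxrc <<'EOF'
    TOOLBOX_DOCKER_IMAGE=debian
    TOOLBOX_DOCKER_TAG=jessie
    EOF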
And that full view of the host is one of the major differences, because when folks think of containerization, they think of: oh, I'm sitting inside, I'm on a NAT network, I have my own view of the process tree, and nothing can step outside of this container to talk to the other pieces. And that's not true. It can, to the degree that you wish it to be able to do that, and to the degree that the underlying containerization engine will allow you to do these things. So let's take a little bit of a tangent here for a second and go back here. And one of the things that I wanna talk about is if we go to github.com/coreos/scripts and we start here. So now, oh, it even took me too deep on that one. I mean, in the end, that's where I kinda want to go, but we're gonna click through and start from there. So coreos/scripts is kind of a hodgepodge of some of the tools that we use to actually handle the signing of our actual images and doing a lot of day-to-day management. And one of the things that I really wanna show off to people here, just so they know about it, is we have a set of OEM scripts. When we refer to an OEM, this is a specific flavor of underlying execution platform that CoreOS is going to run on. So OEMs are things like an AMI, an OpenStack image, which really is just a QEMU image with a little bit of different metadata, a VirtualBox image. If you wanted to just go through and browse these, you can actually get to that by going to, let me get back over here. So for example, alpha.release.core-os.net has every version of CoreOS that has ever been released on the Alpha channel. It's really, really hard to see, but down here at the bottom we actually even have a symlink to current. So you can just always address current, and then in this list, what you see is all of the various OEMs that we have for this release. Now, at the very end here we have a version.txt file, and what this version.txt file is showing us is information on that specific release. And we do this in this way so that anyone who is interested in doing automatic pulls of the CoreOS information has easily sourceable metadata for providing, for example, environment variables to be able to pull these pieces in. And this is actually used pretty heavily in the scripts that I'm about to show you, where all that we do is literally source those in, and we'll go through and validate the actual digests and the signature, which are all done via GPG to ensure that none of the content has been modified in flight. So going back over here, to start, we're gonna look in OpenStack. And in OpenStack, we actually have a documented thing with a barbarous-looking individual who put all this stuff up. And this is an OEM utility that literally just goes through and makes sure that everything is kept up to date inside of OpenStack Glance. And this is done so that you can chain a few of these scripts together very, very easily so that, for example, you can watch the ETags on the underlying object storage that we use. Now, when I say ETag: one of the pieces of metadata that's handed back by the server, when you are looking at at least a well-designed object store, is going to be an ETag. And that is a piece of metadata that allows you to very easily see: has the content on the remote end changed, has the metadata on it changed, i.e. the user who owns the file or the timestamp or the size, et cetera. So this becomes a very, very inexpensive HTTP call that you can use to decide, should I then act further on this?
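Because version.txt is flat KEY=VALUE text, it can be sourced straight into a shell, and the ETag check is just a HEAD request. A rough sketch; image file names can differ between releases:

    # pull the release metadata and expose it as environment variables
    source <(curl -s https://alpha.release.core-os.net/amd64-usr/current/version.txt)
    echo "latest alpha is ${COREOS_VERSION}"

    # cheap "has anything changed?" check: compare ETags from a HEAD request
    curl -sI https://alpha.release.core-os.net/amd64-usr/current/coreos_production_openstack_image.img.bz2 \
        | grep -i etag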
The wonderful part is there are a lot of great plugins for Jenkins that just give you the primitive of: watch this remote endpoint, and if the ETag changes, I want you to interact with it. And that was actually the basis of this. Because as a part of the automated testing of CoreOS, what we do is we have a Jenkins instance that watches the ETag of our object storage so that we are consuming the same exact alpha images that everyone else is. And then we also have a master channel as well, which you don't want to be running. Pardon me, I forgot that I don't actually have a mic in my hand, so I can't get it up in my mouth like I normally do. But in the end, we are watching those ETags, and then if the ETag changes, or for example, if it exits zero, meaning that everything is synced, we just echo out, hey, everything is synced. And if not, then we trigger this Glance load script. And what the Glance load script does is literally just pull everything down, do the GPG validations, make sure that everything looks good on that, and then take the image, unpack it, and stuff it into Glance. Now, optionally, I have a bunch of comments in here that show even more of how we use it internally, because this is kind of the generic case and I didn't want to push a bunch of CoreOS-internal stuff and force everyone to use it. But what we end up doing is we set properties like os_release=alpha, os_family=coreos, so that now when you do a nova boot command, you're no longer saying boot this exact Glance ID. Just keep in mind, CoreOS updates itself automatically. And that means that if you are on CoreOS version 962 and you boot from that today, in two weeks you might be running on CoreOS version 970. And obviously you don't want to boot immediately from CoreOS 960 and then have to go through an update cycle just to be running the most up-to-date software. So by doing this, we can actually go even into the nova command and say: boot the actual image with the properties that match os_release and the properties which match os_family. And now even your CI/CD commands for being able to spawn machines are automatic. They keep up to date, because the robots underneath all of this are doing all the work for you.
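Sketched with the glance and nova CLIs, that property-matching boot looks roughly like this; the property names are just the ones mentioned above, and flags vary a bit between client versions:

    # tag the image with searchable properties instead of relying on the image ID
    glance image-create --name coreos-alpha-current \
        --container-format bare --disk-format qcow2 \
        --property os_family=coreos --property os_release=alpha \
        --file coreos_production_openstack_image.img

    # boot "whatever the current alpha CoreOS image is" by matching properties
    nova boot --flavor m1.medium \
        --image-with os_family=coreos --image-with os_release=alpha \
        --key-name mykey --user-data cloud-config.yaml my-coreos-node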
Make sense thus far? Quiet crowd. Okay, so let's take a little bit of a tangent here and talk about some of the other pieces of what's going on under the hood in CoreOS. I'm going to take a moment and get out of that actual systemd-nspawn container. Yes. So the question was: Kubernetes is increasingly abstracting away the underlying container execution engine. Is that something that's being driven by Kubernetes, or is that something that's being driven by individual vendors, or would it be surprising to see Kubernetes take on nspawn as an execution engine? So as a part of that, I would say that is an extremely astute evaluation of what's going on. Kubernetes does not want to care about the underlying execution engine. And in fact, in Kubernetes 1.3, you are going to be able to run Kubernetes using our container execution engine, which is called rkt, underneath the hood. And none of your APIs for running RCs, like replication controllers, or services, or anything are going to change. And that's extremely intentional by design. And I would say it wouldn't necessarily surprise me if someone extended Kubernetes even further to be able to use nspawn. That abstraction is one of the benefits of the code changes that we've been working on: making it more abstract and giving you more levels of indirection to be able to hook in additional pieces. Similarly, it would not surprise me if the folks at Canonical ended up getting LXD working with all of this. I just haven't seen any evidence of that. The work that I've been directly keeping tabs on is Stackanetes and rktnetes, which is having rkt be the executor for all your containers. So one of the most important pieces when it comes to Kubernetes, though, is actually a component called etcd. etcd is a distributed key-value store that we wrote at CoreOS that was directly inspired by a project inside of Google called Chubby. Google is doing their best to make liars out of me here, because historically, and now I can't just say they don't, I have to say historically, Google did not do much in the way of releasing open source software. Their traditional pattern was that they would release an academic paper that had all of the core ideas that were needed and were important to that piece of software that they wrote internally, and they contributed that to the community to allow the community to implement this software as they saw fit. That was the basis of Elasticsearch, that was the basis of MapReduce and other big data-querying components that folks are increasingly getting used to. And this is really just where, at CoreOS, at a company of three people at that point, folks took a look at the Google Chubby paper and said, yeah, we can do that. And it takes, admittedly, a lot of audacity to be able to go: oh yeah, that company full of PhDs created this thing, we got it. And the big thing that made it possible there was actually a gentleman named Diego Ongaro, a PhD from Stanford who at the time was just a PhD candidate, who created a new consensus algorithm called Raft, which is designed to be a much, much simpler-to-implement competitor to Paxos. In Chubby, they were doing what they could to successfully implement Multi-Paxos, but even Leslie Lamport, the creator of Paxos, has said, yeah, Multi-Paxos is more or less going to be nigh on impossible to correctly, mathematically implement. You're almost never going to be able to have a proof that says, yes, Paxos across multiple systems can actually pass all the Byzantine-general-type tests. And so, with the idea of the Raft algorithm worked out, we created a Go library for it and then began implementing the actual service, etcd. And the purpose of etcd originally was to be like the /etc directory of your host, but distributed across a cluster. Pardon me, time to drink some more water. Now, before, so I was one of the early employees at CoreOS. And I have to admit that even when I was reading all the marketing materials before joining, it was kind of confusing to me, because I see, okay, etcd, the /etc directory distributed across your cluster; my background is being a sysadmin; that's awesome. It kind of goes back to the traditional ideas of having you actually mount your NFS root and be able to share the same root across a whole number of hosts in a data center. Like, cool, we're getting back to that point and doing some interesting work with it. But in reality, etcd is actually just a key-value store. So first let me say systemctl status etcd. See if it's running. Okay, so, since this is one of my play-around boxes, I've nuked a bunch of stuff.
So let's do this: rm -rf /var/lib/etcd2. Do I have the original etcd running? Okay, good, I do not. So, rm -rf /var/lib/etcd. Nothing more interesting than sitting watching someone type on a keyboard, is there? Just as a side note, one of the things that people do on a very common basis to reset etcd is just go through and nuke the etcd data directory, which is totally fine. But you need to be mindful of how etcd is actually going to run: in this case, etcd is going to try to run as the user etcd, and etcd is not going to be able to just arbitrarily create the new directory /var/lib/etcd2. So I have to help it along and recreate that. So now, if I do systemctl start etcd2... oop, uh-oh, we're about to go down for reboot. Somebody just pushed an actual update. So let's say locksmithctl status, locksmithctl unlock that, cool. Now we should still be good to go. So now, if I do systemctl status etcd2, we see that we have etcd in a running state. Now, how useful is that? Well, let me show you some of the stuff that you can actually do with etcd here. So first, if I say etcdctl ls to list our keys, we see that we have a key space called coreos.com. And what I'm gonna try to do here is split this a little bit, get into this so that we can see both at once. I'm going to go into, actually, export PS1, cool. So now we've got a little bit more space. And what I'm gonna do is an etcdctl watch, actually first an etcdctl set of foo to the value bar. And now, from any host in the cluster, I would be able to do a get on foo and see that data. What we can also do is the concept of a watch. So watch the key foo. And now it's just going to hang out and wait. And what my application can be doing is getting ready to send across some data or make changes to a value. So if I were, for example, to take the key foo and change it from bar to baz, what will happen is, as soon as this key gets changed, all of the watchers are immediately notified of the change and can react to it. And what that means is, if you had something like Postgres, where you were running Postgres in streaming replication mode with a series of proxies that were redirecting to whoever the master is and load balancing the reads across all of your streaming replicas, it means that you can actually have a sidekick to the Postgres proxy server that's watching etcd to see who the current Postgres master is. And if that master changes, for example, immediately the remote machine sees that change and is able to respond. So while I could not type baz correctly, we see that my watcher on foo immediately responded to the change of that key to "az" and reacted. And this is kind of critical when you are trying to build coupled systems that don't fail because of the transient nature of individual pieces, which is to say you want a system that is reliable enough that it meets an expected SLA. One of the important things that Google has taught us is that the actual SLA of your service is not based on the real uptime, it's based on the perceived uptime, which means that you can get a system to the point of having five nines, and getting that sixth nine becomes exponentially more expensive. So you have a decision to make. Is it worth the value of doing that?
Or is it worth the value of managing the expectations of your users and your customers that you're going to keep things up when they need it, while also focusing on the perceived importance of that service? That means worrying about things like latency, over and above whether the health check says it's up or down. Now, to talk about the Postgres thing a little bit further, there is a project called Stolon that a CoreOS community contributor created, and I want to show it to you here for a second. And Stolon is exactly that. It is Postgres running on top of a Kubernetes cluster in such a way that it handles the replication for you and makes the proxies fail over back and forth between the replicas. So it's actually a real-world example of being able to take some of these distributed systems ideas that are becoming more prevalent and merge them with technologies of the past. This is far from the only piece of software to do this. Another really important piece of software that I foresee a lot of people getting extremely excited about in the future is called Vitess. Vitess is created by the folks on the YouTube team. This is part of the whole thing that I was mentioning: Google is increasingly turning me into a liar because they're releasing more and more actual open source software, and it's extremely interesting to read through. And what Vitess is doing is creating a sharded, pooled MySQL instance that runs atop cloud native infrastructure and gives you highly available MySQL. Now, there are definitely caveats to this. The big one is that today it does not speak the MySQL wire protocol, which is a problem, because most people want to be able to take and deploy this HA MySQL on top of Kubernetes, then not have to think about it, and just tell their application: connect to this MySQL DSN and be done with it. Now, this is called out as one of the primary concerns of Vitess, and it's solved by a specific component called VTGate; and VTGate v3, which is gonna be coming out later this year, is specifically focused on being that proxy that speaks native, 100% compatible MySQL wire protocol. And it brings up interesting questions, because now we see, okay, we have Postgres running in this streaming fashion, where you have these pieces sitting and watching under the hood, and they use etcd to manage how the proxies know about the various pieces. And then we go one step further: okay, well, this is using native Postgres and this is using somewhat modified versions of MySQL, and going back to the idea of Postgres, this is where it gets really exciting with the work that the folks from CockroachDB are doing. CockroachDB is a group of folks who were formerly at Google, and they are taking the ideas of Spanner and some of the other internal databases and bringing them back out to cloud native infrastructure, and the number one goal that they are working on right now is ANSI SQL compatibility with the Postgres wire protocol directly on top of Cockroach. So now you run Cockroach on top of whatever your cluster is, and it will handle the automatic failover and automatic replication and automatic routing to wherever the individual nodes are, and all of these pieces are using Raft and/or etcd under the hood. So, give me a second to go back.
Now, you notice there that this machine did actually go down for reboot, and when we started out on it, we were running CoreOS Alpha version 1000. Now we're running CoreOS Alpha version 1010. This gets into an interesting design paradigm. I will say that this is complete coincidence, I did not plan this, but it illustrates a really, really important point here, and that is that with containerization, the goal is to be designing things in such a way that you handle fault tolerance extremely fluidly. And this is one of the things that Kubernetes does for you out of the box. You take a set of containers, you define them as a template inside of a replication controller, and then you say, I want n copies of this template of containers stamped out across my cluster. That's actually what some of you saw Alex do this morning in the keynote. He had this defined template and he took it and scaled it from zero to one, and another one from one to zero. If your application has a coordination mechanism (and great work has been done by the folks who are community contributors to Cassandra, as well as folks from Couchbase), you can automatically scale up the size of your database in those cases. Now, these are databases that have existed well into, and were kind of built on, the concepts of cloud native computing. So they're going to handle these types of changes much more seamlessly than something like Postgres or MySQL will. And that is because you can go through at this point and just say: oh, Cassandra, scale up. You currently were running with five different instances, or hell, we'll say six different instances, where you've got three different shards that are doing 2x replication. Now I want you to keep that 2x replication, but go from three shards to eight shards, or to 12 shards, or keep linearly scaling that out. And what those will do depends on the underlying database; in the case of Couchbase, they actually will rebalance all of the data. And this is really, really critical to that concept of a pod, because they have the Couchbase instance running inside of the container, and they have a sidekick which is watching etcd and doing coordination. And when it sees the topology of the cluster change, that is, the Couchbase cluster, it will go through and re-trigger movement of the data to do the rebalance. Now, that can be a really expensive operation depending on how your underlying database handles it. So this is where the coming of things like Cockroach, a cloud native take on MySQL or Postgres that understands how to efficiently replicate this data across hosts, becomes critical. Folks continuing to follow along? Okay, got a question in the back. So the question was: a lot of things can happen between making a change to a value and doing the watch. Is there a way, and I'm gonna paraphrase here, and please correct me if I'm wrong, is there a way to persistently do the watch and maintain the ability to know the rate of change, or whether other things are happening under the hood? Is that close? Okay, so what I'm gonna do there, so this is just using the command etcdctl under the hood. If you were working with this via some other API mechanism... actually, let me make this one a little bit bigger here for a moment. So if you were, there we go. My connection's being a bit dodgy on that one, so let me re-initiate it.
So one of the things that you can actually do, if you wanted to continue using the regular command line utilities, and this becomes really, really helpful if you just wanna operate in bash terms, is use a command called exec-watch. So exec-watch foo, oh, come on, foo, and then we're gonna run, and try to minimize the amount of chaff that you'll see here. So now I go over to this container and say etcdctl set, so set it back to bar. So we see that we've changed our value back to bar, the index that it was modified at was 11, and we're still hanging out. So if I now change this to openstack again, we see it's been changed again, there's a new index, there's a new value attached to it, and all of these are actually being provided as environment variables from etcdctl so that you could use them inside of a script. But on top of that, all that this is really doing in the background is an HTTP long poll, and I like showing some of what's possible here. So I can do just a get and see that value. If I'm trying to just learn about how these pieces work, I can also throw a debug flag on, and it will actually show me a direct curl command that I can use to get the same functionality, which is great, because now all that you need to know is how to use an HTTP library to be able to receive this. But there are actually a bunch of curated libraries that we at CoreOS maintain, and a bunch of third-party libraries which are also documented in the etcd repository. So let me look here: libraries and tools. So we have all the regular command line libraries, and we maintain the official etcd client library. There's also an older version that's deprecated, a bunch of community folks have contributed Java libraries and Python libraries and Node and Ruby and C and C++, and I know that there's a Rust one that somebody was working on. And the initial question that folks have would be: well, if I just have to speak HTTP, why would I bother using one of these client libraries? And the reason why you would do that is because when you talk to the etcd host, one of the things that happens is we can do a, so in this case I only have a single member in this cluster, but if I had a cluster of five or nine etcd nodes, when I do that member list it will show me the entire topology of the cluster, meaning that the client library now will understand how to fail over between individual nodes. So you only have to give it an initial endpoint to handle the discovery, and from there you can find other nodes in the topology, or you can find who the master is to hand writes directly to it while scaling your reads across any number of the hosts. And this is actually really, really helpful, because it means that you can take a web app, use etcd as a backing store, and now, because all of the data for that storage is sitting in something that will automatically fail over and do leader election if a node fails, while at the same time being able to handle loss of nearly half the machines in the cluster before you lose quorum, it means that you can maintain a high level of resiliency inside of your component applications. Which is why, in this list, there's a whole bunch of projects that end up using this for building redundancy into the application, which is exactly why Kubernetes uses it as the brain under the hood.
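Putting those pieces together, roughly: exec-watch hands the change to your command as environment variables, the raw HTTP form is a long poll that can resume from a known index so intermediate changes aren't missed, and the members endpoint is what the client libraries use for discovery and failover.

    # re-run a command every time the key changes
    etcdctl exec-watch /foo -- sh -c \
        'echo "key=$ETCD_WATCH_KEY value=$ETCD_WATCH_VALUE index=$ETCD_WATCH_MODIFIED_INDEX"'

    # the same thing over plain HTTP: a long-poll GET that returns on change
    curl -s 'http://127.0.0.1:2379/v2/keys/foo?wait=true'

    # resume from a known index so intermediate changes are not lost
    curl -s 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=12'

    # the topology call that client libraries use for discovery and failover
    curl -s 'http://127.0.0.1:2379/v2/members'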
It's also a good chance to talk about how this ends up working with other types of applications. So the folks at Mailgun, which is a subdivision of Rackspace, and I believe they have now moved on to a different project, created a tool called vulcand. And what vulcand is, is a load balancer that's actually backed by etcd. So in the same way, vulcand is doing watches on those key spaces. So if, for example, Kubernetes says, hey, I am now moving where this application is running, I'm changing the ports or I'm changing part of the path, it means that your load balancer can react in real time and immediately begin routing traffic to a different location. So there are a lot of different use cases that people have under the hood. And in the case of vulcand, they end up doing a similar pattern to the pod mechanism that we were talking about. You would have a container that has your application, and you would have another container that is acting as a sidekick to update the etcd key space if you don't have something like Kubernetes managing that namespace for you. So going back over here. So we've talked about etcd, we've talked about clusters of machines. What I'm actually going to do here is tear all of this down and redo a little bit of it. So actually I may exit out of that portion of the window, bounce out of here. So I think I only have one node on this. So actually I'm going to cheat a little bit. And where am I pulling my, pardon this right here, I'm just doing this one because it's cheap and easy. So what I'm going to do first is, I have a little script here that gets a new etcd discovery token. So if you wanted to just start out in the easiest way possible, we actually run a hosted service called the etcd discovery service. And what that does is it gives you a centralized coordination point for allowing all of your etcd nodes to discover each other just by handing them this one URL; I'll sketch the user-data for that in a minute. So if we were to look at what you're getting under the hood on that, there's not much in it. The etcd discovery service is, unsurprisingly, backed by etcd. So as we go to boot a new cluster of machines and add them to it, I'm going to say, hey, all of these nodes are going to boot with the same etcd discovery ID. Everyone is just going to start with the same data. And this is where I was doing a proof of concept before about automatically pulling down and unpacking the JDK. So let's do a launch-do. So by default, this is going to pull up six nodes. The first one's probably going to fail, but we will see. So just like that, I now have a cluster of six nodes that's coming online when I do a list-do. Okay, there's one. Wait for the others. We've got all of our IDs there. This is of course why you run your own OpenStack cluster rather than relying on somebody else's hypervisor. So let's go through. So at this point, as these nodes boot up, we should eventually see them joining into that. Though... ah, we are getting, trying to do one of six. Why are the other ones not coming up? Okay, reached the limit. Oh, let's jump into this one. Got a host up. Now, you'll notice that this one, oh, there we go. So this one automatically started at that new version that was just released. That is because we manage the actual push of all of these CoreOS image IDs out to the various OEMs. Obviously, when you're running your own private OpenStack cluster, we probably can't do those Glance pushes for you, which is why those sets of scripts were shown off earlier.
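To spell out that discovery step: grab a discovery URL sized for the cluster and feed it to every member through the user-data you hand to nova boot. A sketch; the $private_ipv4 substitution depends on the platform metadata being available:

    # ask the public discovery service for a token sized for a three-node cluster
    TOKEN_URL=$(curl -s 'https://discovery.etcd.io/new?size=3')

    # hand that URL to every member through cloud-config user-data
    cat > user-data <<EOF
    #cloud-config
    coreos:
      etcd2:
        discovery: ${TOKEN_URL}
        advertise-client-urls: http://\$private_ipv4:2379
        initial-advertise-peer-urls: http://\$private_ipv4:2380
        listen-client-urls: http://0.0.0.0:2379
        listen-peer-urls: http://\$private_ipv4:2380
      units:
        - name: etcd2.service
          command: start
    EOF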
Managing those image pushes yourself means that you can have this same type of real-time reaction to what's going on. Now let's see, list. So we've got this first node. It's come online. I refresh this, and of course it looks weird. So now let's just do this curl, that same endpoint, and add a local member, see the initial cluster, publish its name, and change the cluster version. So if we wanted to see more information on this, I can look at the journal for that service, and we see where it's now waiting for the other client requests to come in, and it's hanging out. Same thing: etcdctl, list the key space. We see that there's coreos.com. The reason why there is coreos.com in there is because it is running a utility called locksmith. And this is the next important point. So locksmith acts as a semaphore for reboots of the cluster of machines. If you are running a cluster of machines and it's on top of CoreOS, you probably want to be using locksmith, because what locksmith does is it makes sure that when we push out this new version of CoreOS and it goes out to the world, your entire cluster doesn't all of a sudden decide: well, all of us need this new version right at this exact second, so we're all just going to go down. And what it does is it sets a number of available semaphores for the cluster, which by default is one; that is the locksmithctl status, one available, maximum of one. So a single machine can take that lock, it can go down for reboot, come back up, and release the lock, so that now you actually have a rolling reboot, even of the base operating system of a cluster of machines, which is extremely important for resiliency.
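The locksmith side of that is small enough to show in full; a sketch of the usual knobs:

    # one machine may hold the reboot lock at a time by default
    locksmithctl status        # Available: 1, Max: 1

    # let two machines update and reboot in parallel
    locksmithctl set-max 2

    # if a machine died mid-reboot while holding the lock, release it by machine ID
    locksmithctl unlock <machine-id>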
I mean, to give you a bit of a sense of how we end up doing this in production: all of our services automatically update. We strongly believe in this. So as head of infrastructure at CoreOS, I can tell you we run alpha for all of our services. Alpha does not mean that you're just doing a YOLO; it means that you want the newest, freshest pieces of software. And, getting real here: if I can't stand up here on stage and tell you we run things on alpha, things stay up and running, and we don't care about reboots, how could I expect any of you to trust that as well? And this gives us an operational advantage too. Because the members of my team, we do maintenance at 10 in the morning. Doesn't matter what day of the week, because I would rather do two things. One, have the folks on my staff know that as SREs, they come into work, they get a cup of coffee, they check their email, and then they know that maintenance starts. And if that maintenance for some reason is scheduled for six hours, they still go home on time. Now I wanna ask you, as an operations person: don't you think it's a little bit silly that we are still stuck in this paradigm of having to do maintenance at 11 p.m. on a Saturday night? I mean, at this point in the second decade of the 21st century, all of us are global companies. Whether we like it or not, you have a website, and there is a chance that someone will be going to it at all hours of the day or night. So all that you're doing by scheduling maintenance at 11 p.m. on Saturday is making it harder to hire people that are willing to bear that burden, because they have the choice to do that, or come work for a company that says: oh yeah, we do things when folks are working fresh, we do things like reboot our clusters on an extremely regular basis to make sure that, in the best case scenario, you don't even notice that there is any downtime. Now, I have my systems notify me when these things occur, but it just becomes a stream of things that I notice and, yeah, okay, check, check, check, done. There was a question. So the question was: seeing all of this, why would you undergo the burden of running VMs of CoreOS atop OpenStack? Why wouldn't you just run CoreOS on bare metal? There's a few different answers to that. One, lots of places, lots of organizations already have OpenStack firmly entrenched. They have said, I have made an investment in OpenStack and, come hell or high water, that means we're gonna be using OpenStack, and I don't know what this newfangled CoreOS thing is. But if you can get to the point of starting to deploy infrastructure like this, you can take those greenfield applications and run them on top of CoreOS and know that the base images on the systems are going to get updated automatically, and that they're going to undergo these patterns of deployment, resiliency, et cetera, and be tested on a regular basis. It develops the operational stomach and the operational practice around getting used to this, so that it becomes a regular, day-to-day occurrence. And even more so, it means that you can just use the existing tooling that you've got today to begin to take small steps into how this works. Now, that being said, our bare metal machines? They run CoreOS. And the bare metal machines then run Kubernetes on top of that. And this is how we've been doing it for a while. OpenStack is a new addition that we are layering on top of that, twofold. One, because customers wanted to see it. So we said, okay, well, let's figure this out. You know, the tenth time that you're sitting in a meeting with somebody and they go, well, we've made a large investment in OpenStack, can we just run OpenStack atop Kubernetes? You start going, yeah? Like, from a very technical perspective, I can tell you there's no reason why you couldn't. I just hadn't thought of it that way. And then you start decomposing it. And, you know, for us, our OpenStack cluster at this point is now very, very modest. Because I used to work at Red Hat, our OpenStack cluster started as RDO. And then, as we began this work, we've been migrating everything into running atop Kubernetes. So now, when we need to spin up a VM to do full testing and full builds of arbitrary software, we can do that inside of a VM and rip it down, and know that that VM will have no direct kernel access to the underlying hypervisor, as a security mechanism. There's a couple. Just to add to your... so, like, OpenStack. Yep. So, to repeat back for the video and everything, the individual was pointing out that the big reason here especially focuses around multi-tenancy and allowing you the ability to choose the path that makes the most sense for your organization. Because if you're in a situation where you are a public cloud provider, you probably don't want to rip everything out that you already have just to make an accommodation for this. Whereas just being able to pull in the images and provide that to your users today is immediate value. Whereas in the longer term it may make sense, or if you're doing a new deployment, you can explore this type of deployment mechanism right out of the gate. Now, I also kind of mentioned here, yes.
So the question was: during the keynote this morning, we showed doing a real-time update of the components of OpenStack. Is that just something meant as a party trick, or is that something that CoreOS is actually committing to maintaining in the future? And what you saw earlier was the public announcement of Stackanetes, which is the ability to run OpenStack atop Kubernetes. And that's something that we are doing directly and committing to in partnership with Intel. We are curating the Tectonic distribution of OpenStack, which is specifically meant to be run atop Kubernetes. So it's our goal to make that a viable option for folks who need to run OpenStack but wish to do that atop Kubernetes. And that would be curated by us. Now, I mentioned Docker, I mentioned Rocket. I also wanted to talk about, or sorry, I mentioned Docker and I mentioned systemd-nspawn. I also want to show you rkt firsthand. And rkt is another component that's built into CoreOS. It is soon to be shipping in Debian, if it hasn't already made it into the Debian repositories. And through that, it should then be available as it's pulled into Ubuntu. And folks from Red Hat are also working on packaging it. In fact, I believe it's even already in Fedora. So with rkt, you can actually start out at a very, very naive level and just continue to execute your Docker containers today. So if I were to do a rkt run, okay, now in this case, I want to have it allocate me a TTY. So let's do this, rkt run --help. And so, like I said, I want it to allocate me a TTY and I want it to provide input from standard input. So that's going to be adding --interactive. So just to start out, let's do rkt run --interactive, and because the formalization around image signing is not completely standardized yet with Docker, we have to do this. So all that I'm doing there is saying: pull in this docker:// URI. I'm going to make it just a little bit smaller here to be able to fit the command a little easier. So hopefully folks can, there we go. So, Docker. So when I go through and do this, what it's doing is firing off the stage one. Oh, just that, so that same container. So now what it's doing is going out to the Docker registry, pulling it down, seeing the SHA-256s of the images, unpacking them, starting up the network, and then I am up and running. Same sort of thing, only with this we actually have systemd exposed inside the container too, and it's the main systemd journal on the host which is managing the logging from that container. So now you can even use all of the same systemd-journal-upload or systemd-journal-remote commands that you're used to using for remote log streaming, to be able to aggregate all of the logs from your individual containers. And I call this out because the big question that folks always ask is: what is it going to take for me to move to rkt? I have Docker containers. How do I convert them to the rkt format? Well, the most naive, basic answer is: you don't have to. You just have to tell it that it's going to be a Docker container, and that will make it work. There are also additional utilities like docker2aci which you can use to convert the containers from one format to another.
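The commands from that demo, roughly sketched; --insecure-options=image relaxes signature checking for the one unsigned Docker fetch (older rkt releases spelled this differently, so check rkt run --help on your version):

    # run an existing Docker image under rkt, unmodified
    sudo rkt run --interactive --insecure-options=image docker://busybox

    # see what was fetched and what is running
    sudo rkt image list
    sudo rkt list

    # or do a one-time conversion to an ACI with the docker2aci tool
    docker2aci docker://busybox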
But as a different type of example, let me go to trustedcontainers.com/tor. Yes, Tor, the onion router. The thing that makes the dark net the dark net. Also, by the way, there's no such thing as the dark net. There's not. It's just that everything is encrypted. Every time you see some news outlet go, "the dark net", you think: what, there are people selling drugs on the internet? Yeah, well, there are people selling drugs in real life too. It's just that everything happening over it is encrypted. So we go to pull down this image. I did not actually give it any additional information there. Also, I don't want to allow insecure images. There we go. So now Tor... not working. Oh, that's why: because I'm a dummy. I don't want to tell it specifically HTTP; I just want to say, hey, discover it. So what happens there is it goes out and reads metadata from trustedcontainers.com, and that metadata... so, this website is just an S3 bucket. Just an S3 bucket with CloudFront in front of it on top of AWS, meaning that this could easily be something inside of object storage on top of OpenStack. Just a very simple hosting of a file. And because of how the security works inside of rkt, it bases things on the same concept of a prefix, only the prefix can contain a full domain name as well. And it also does image signing with GPG. So in this case it says: hey, I see that this container was actually signed by the key fingerprint that we see there, specifically this subkey, and it belongs to Brian Redbeard. Do you want to trust that key? Let's cancel out of that for a moment, because in that case, if you're not an astute user of GPG, that's just a bunch of gibberish. And how do you know that that is actually the correct key fingerprint? Well, if you wanted to pre-seed your host with that kind of information, you can do things like gpg --recv-keys and then specify something more like that, which is actually not a key ID. My personal preference for this, because I do a lot of stuff and I'm big into security and the web of trust: there's a wonderful hosted service that can do the attestation for you. So we see here that Jon Callas, who's one of the creators of PGP, is actually following me here and watching for my various changes. Jon is also somebody who helps organize another conference that I work on, called ShmooCon. I can also go here and see the actual GPG block, and I can see just the short version of the fingerprint. So here I can see the full GPG key, so I can either gpg --import and hand it that, or I can pull down that key and go into /etc/rkt, and you can actually side-load these keys there. In the same way, in my metadata for this box: if I were to go to 169.254.169.254, and as OpenStack operators, if you've never just gone through and queried the metadata service, I highly recommend that you do. But this box is not directly on top of OpenStack, so we can't just browse it that way. But looking over here, when you are booting a CoreOS instance, you are going to supply either an Ignition manifest or a cloud-config. This is the same basic cloud-config idea you already have when you hand one to Nova to have it bring up an instance.
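As a rough sketch of what that hand-off looks like (the image name, flavor, key material, and file path below are all placeholders; the write_files entry is just one way to pre-seed a signing key):

    # Write a minimal cloud-config to use as user data.
    cat > user-data.yaml <<'EOF'
    #cloud-config
    ssh_authorized_keys:
      - ssh-rsa AAAA... you@example.com
    write_files:
      # Pre-seed an image-signing key so rkt never prompts to trust it; the
      # filename would normally be the key fingerprint, this one is illustrative.
      - path: /etc/rkt/trusted-keys/root.d/redbeard.pub
        permissions: "0644"
        content: |
          -----BEGIN PGP PUBLIC KEY BLOCK-----
          ...
          -----END PGP PUBLIC KEY BLOCK-----
    EOF

    # Hand it to Nova when booting the CoreOS image; image and flavor names are made up.
    nova boot --image coreos-stable --flavor m1.small \
      --user-data user-data.yaml coreos-demo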
Only, because of my time at Red Hat, I'd seen that, oh, if you're deploying bare metal you need this format, being a kickstart file; if you're deploying on top of OpenStack you need this other format, which is cloud-config; and if you're doing an image-based deployment, well, you're not really going to be able to use either one of those, so it's going to take something like Ansible to go in and change the files. I was actually standing on a table in Lake Tahoe, when we were preparing the first version of CoreOS that was going to have some of these things built into it, screaming about how we had to have a unified deployment mechanism. And that deployment mechanism is either cloud-config or Ignition. Now, they operate in different ways, but that is to say: if you are deploying a bare metal machine, you can still use a cloud-config. Now the question becomes, how do you supply it? Well, you can use a config drive, in the same way that you do with some variants of OpenStack images, which is to say you create that ISO 9660 image, or that floppy disk image, that has the cloud-config file embedded in it and has the disk label config-2, and when you hand that to the underlying host, it discovers it and does all the work for you. But you can also give it a kernel command line option. Every time you boot a system, one of the things provided on a Linux host is what the kernel was booted with, in /proc/cmdline, and that means any application on the host can read through that and do sensible things with it too. As an example, I'm going to jump through a few different hoops here. This .111 isn't up; how about .112. Oh man, did they shut everything down on me? Oh, there we go, okay. So this is in our private lab of equipment, and when we take a look at this: okay, so we've got 48 cores on this box and a quarter of a terabyte of RAM. So, looking at how this was booted, in /proc/cmdline, we see that this one was actually not booted with a cloud-config, so they're making a liar out of me here. But what we do in general is, you have an option like coreos.config-url, and you specify an HTTP endpoint, and it will retrieve the configuration from that on boot. This is going to be the same whether it's an Ignition manifest, which is a JSON format, or the actual underlying cloud-config file. Bouncing out of that and going back to our host over here: that means that I could, through a side channel, embed all of the public keys that are used to sign my images as a part of my deployment process. And now I am only going to run code that I have flagged as trusted in advance, which means that it would not end up prompting me, do I wish to trust this key, because that key was already flagged as trusted.
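To pull together the two supply paths just described, here is a quick sketch, assuming the documented CoreOS config-drive layout; the kernel parameter name has varied between the cloud-config and Ignition tooling, so treat the one shown as illustrative:

    # See what the running kernel was booted with; the CoreOS config tooling
    # reads this to find an out-of-band config URL.
    cat /proc/cmdline
    # A PXE append line might include something like:
    #   cloud-config-url=http://example.com/pxe-cloud-config.yml

    # The config-drive alternative: an ISO 9660 volume labeled "config-2" with
    # the user data at the path the host expects.
    mkdir -p /tmp/new-drive/openstack/latest
    cp user-data.yaml /tmp/new-drive/openstack/latest/user_data
    mkisofs -R -V config-2 -o configdrive.iso /tmp/new-drive

Attach that ISO to the instance and CoreOS discovers it on boot, exactly as described above.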
So then, when I continue on from here, it will refresh that image, verify the signature on it, make sure that all the security is good, and begin spinning up that actual Tor node. And we've done a lot of testing with this where we've emulated a Sybil attack, which is an attack on the Tor network where you spin up a thousand nodes from various different endpoints so that you can try to capture and correlate traffic. Now, we are not trying to execute a Sybil attack, but we want meaningful ways for researchers to easily do this kind of thing so that they can bring greater resiliency to the project. Similarly, though, let me shut down Tor. I have other things, like a Gentoo stage3, because it's also important to have containers with which to build the pieces of your infrastructure. So, because this was not trusted for the gentoo-stage3 prefix, one thing I can do is just say rkt trust. As part of this trust I'm going to say https... So I have to tell it the prefix, or whether it's for the root, and the public key. Because it's me, I'm going to say I allow this key to sign all things, and I'm going to pull it from https://trustedcontainers.com/redbeard.pub. Now, I know that this is the correct key fingerprint because it ends with F5408 3D6D F566, which is one more validation that this is my key. So if you're looking to validate that this is actually my public key in a traditional web-of-trust model: that is me, and it is trusted. So now that key is flagged as trusted and anchored for the root. This is actually the full fingerprint ID, again F5408 3D6D, and now when I go through and run this, it just says, hey, I see that that's there. Nope, it did not trust that for the root. And it goes through and downloads that ACI. Now, this one is much, much bigger, because it's intended to be an entire bootstrap development environment; this is everything that you need to then build an entire Linux distro. Oh, and this is what happens when I am not signing my images correctly. So this is where, because I did not re-sign this one, I have to do horrible things. For the moment I just want to show you what we're doing inside of this. So it pulls that down, because it could not validate it against the checksums from last time; that took a second. What it's doing right now is exploding that ACI out onto the disk, carving out the namespaces, and getting the process actually stood up. The reason it's taking so long here is, one, this is an extremely tiny DigitalOcean instance, so it has to compete with all of the other users, and it's about 178 megs worth of stuff to unpack. And I have it set at the moment to not use a copy-on-write file system. Oh wow, even my network is going slow. Now, one of the things that's also really interesting here: I drilled pretty deep at the beginning on the idea that a container is just a user land, and then you have a kernel that user land gets attached to. The reason that's important is that you can also swap out different stage1s, in rkt terms. Additional stage1s that you can use do things like automatically injecting a kernel at runtime to dynamically turn a container into a VM. That means that if you're worried about multi-tenancy, if you have an application that you don't fully trust and you're not comfortable running it on the same kernel as the underlying host, you can use LKVM to slipstream a kernel in at runtime and turn that container into a VM. So from here, I could emerge in Git, for example, and it will just go and do it. So this is a full Gentoo environment.
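Roughly, the trust-anchoring and the stage1 swap look like this; the image name and the stage1 version below are placeholders, and older rkt releases used a single --stage1-image flag instead of the split flags:

    # Anchor a signing key for the root of all prefixes; rkt shows the
    # fingerprint and asks for confirmation before storing it.
    rkt trust --root https://trustedcontainers.com/redbeard.pub

    # Fetch and run the signed image; the name here is illustrative.
    rkt fetch trustedcontainers.com/gentoo-stage3

    # Swap in the KVM stage1 so the same image runs inside a lightweight VM
    # (LKVM) with its own kernel, rather than sharing the host kernel.
    rkt run --stage1-name=coreos.com/rkt/stage1-kvm:1.25.0 \
      trustedcontainers.com/gentoo-stage3

This is the multi-tenancy escape hatch just mentioned: the same image and the same workflow, but with a hardware-virtualized boundary when you want one.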
But we're actually running out of time here, so does anybody have any questions? Okay, I guess it feels good to be this good. Here in a moment, I'm going to be heading back to the CoreOS booth in the marketplace, so if you want to ask questions on more of a one-on-one basis, I'm going to be available to talk through some of this stuff. We're also going to have other folks over there focusing on different areas. So if you have questions more about the product roadmap, Wei Deng is definitely the person to talk to. And if you're interested in marketing or cross-promotional ideas, Mel, our head of marketing, is going to be there as well. So thank you very much, and have a wonderful afternoon.