Well, we can go ahead and do that too. Hello, everyone. My name is Brian, I am from CoreOS, and I will be walking you through, with my dulcet tones, Kubernetes and CoreOS and all that. But first, I'm going to come clean. I am that guy you've seen on television. And it is true that I love the band Bolt Thrower, and In Battle There Is No Law is one of the greatest albums of the 1980s for anyone who's a fan of crust, heavy metal, or tabletop strategy games like Warhammer. But this is a little bit orthogonal to why everyone actually came into the room. I'm sure one or two people are really psyched; they're like, this guy knows me really well. But getting back to it, I am Brian Redbeard. Really, just make it Brian Redbeard; it's easier for everybody to remember. There's a whole lot of naming collisions around the name Brian to start with as it is. And here are a bunch of the ways that you can actually stalk me on the internet. The most interesting ones are probably on GitHub; the ones where I say the most offensive things are on Twitter, and we can just kind of roll with it from there. So first I have to ask, is anyone here from DreamHost? Great. So our marketing team named this talk. I don't actually have anything to do with DreamHost; they have a product called Dreamstack, so there was a little bit of confusion between our marketing team and myself when we were actually talking through this. But moving on, the big reason why we are here is to talk about things that go really, really well together. For me, peanut butter and jelly is still one of my great vices, but as far as these pieces go, they're all easy to consume in small parts. It's how they start to combine together in the long run where it really gets fascinating. So the first one: obviously folks here are fans of OpenStack, otherwise you wouldn't be in the room.
It could be that you're ATCs; it could be that you're trying to figure out how to implement it in your organization. It doesn't really matter. Everybody is kind of clear on the overall goal of what OpenStack is trying to solve. For a lot of people here, it will be less clear what CoreOS even actually is. It's the passion project that I work on with a bunch of other nut jobs in San Francisco. It's an open source Linux distro, but it's built from the ground up to run container workloads. That is its primary use case as initially conceived. And the third piece here, the third title-slide thing we're actually talking about, is Kubernetes. All of these are just pieces in a larger puzzle of how it all fits together. Since everybody knows the 30,000-foot view of OpenStack, we'll come back to that later and pop it back onto the stack and make some sense out of it. So let's start with CoreOS. First and foremost, CoreOS is extremely opinionated. As far as Linux distros go, it's opinionated in a way that a lot of folks are not used to, in a way that a lot of folks find unnerving at first, but it's one of those things where you learn to love it. There are no frame buffers, no fancy text editors, no modem drivers, no window managers, no interpreted languages with the exception of Bash. None of that. The idea we have really been trying to drive is minimalism in your infrastructure. We're on a path where things are getting increasingly complex, and they need to become increasingly complex in order to have the kind of automatic-failover, HA-type setups that every big organization desires. But where this actually drives to, and one of the first linchpins of how it loops back to being a good fit with OpenStack, is that we're trying to drive things via this idea of an API-driven infrastructure.
So everyone who wants to use OpenStack at some level is trying to achieve this API-driven infrastructure. It's why, like I said, it makes so much sense for CoreOS to be here. Our goal is that in a perfect world, you aren't SSHing into hosts; you actually never need to SSH into hosts. And in the really perfect world, when you spin a worker up to do some type of compute job for you, you don't even put SSH keys on it. Now think about that. You have this worker where you don't want anyone to even be able to log into it, because ideally what you're doing is sending telemetry from the box back to some central location so that you can do diagnostics and see what's going on with your application, see the health of your servers, get an idea of the number of users. This is why companies like New Relic are taking off, why Elastic grabbed Logstash and has been making the ELK stack a critical part of a lot of companies, and why so many other entities like Datadog are coming up: they give you easy ways of collecting all of the data about your hosts into a centralized spot where you can work with it and be more informed about your infrastructure. And in this type of world, nobody needs vim-enhanced. We get questions like, yesterday there was a question on our mailing list: I need to run Nano on my CoreOS box. Why? Why? Like, yes, I understand it's easier to use than Vim. Personally, I'm a die-hard Vim user. I go back to the era where the reason you used Vim is because it fit on a single floppy versus Emacs, and that's just the way it happens. But in the type of world that we're trying to drive, where nobody needs Vim, they don't need any of this other stuff either. And it's an easy thing for most folks to step in and go, no, I don't need a frame buffer.
The fact that my console, which I'm hopefully never going to log into, is okay being 24 by 80 on /dev/tty1 or /dev/tty0. Because if I must, I'm going to SSH into that box, and SSH is going to handle resizing all of that. And it doesn't really make sense to have modem drivers on there, because you're never gonna hook that device up to it. So CoreOS has been this exercise in stripping things down to the metal and then slowly adding in the pieces that we really just need to have there. And that's something that is challenging for folks to accept. When you've got companies that ship proprietary drivers in order to make their product work, and they're not willing to push those drivers into the upstream kernel, they get chafed a little bit when you say, well, that may not be against the business sense of open source, but it is against the spirit of it at the very least. So it's not to say that you can't do things like load modem drivers or load frame buffers. It's just that we don't make it easy. It's not going to be a dnf install to push that onto the box. It's not going to be a dpkg -i to get things on there. But you've listened to me railing for the better part of a few minutes about all the things that CoreOS is not. Let's actually talk about what is in a CoreOS host. The basics of it is that you're containerizing your applications. Now, in my case, that containerization engine ends up being Rocket. There are other folks who use Docker; there are other folks who have setups using things like Let Me Contain That For You (lmctfy) and other components there. It doesn't really matter. We ship Docker on there, but personally, like I said, I use Rocket and it works fantastically. The next part, which is where it gets kind of strange, continuing down this path of opinionation, is that it's a self-updating operating system. It's not something where your administrators have to go in and do a patch cycle.
It's very much like the Chrome browser, or like an Android phone or an iPhone. It's something where the service provider, which happens to be us in this case, packages up these updates, pushes them out to the hosts, and does this on a regular cadence. And there are a few reasons for that. One of them is this: if you get used to the idea that you don't have to log into the host, that hosts will go down and come back, and that the applications are containerized so they can move around the cluster with relative ease, then it gets you away from doing maintenance at 11 p.m. on a Friday night when your ops guys are angry. And if they're not angry, they will be eventually. That resentment: oh, I have to be sitting here doing this when I could be at home drinking a beer, and this is annoying. No, the idea is that you should be doing maintenance at 11 a.m. on a Monday morning. And the reason for that is everybody comes in, they've had a chance to check their email, they've had a cup of coffee, they're fresh, and everybody is ready to tackle any problems that come up. And when you get used to this idea that boxes will go down, that's one of the big kernels of truth that I think Werner Vogels shared with the world and really drove home: everything fails, all the time. You just have to get used to that. And once you can get used to that happening, and you can get used to designing applications for that state, it becomes easier to keep them online, because you loosely couple things and you are able to handle failures differently. And there is a little bit of paranoia about, well, if you're just going to push updates, how do you handle failure? I'll talk about that in a second. But the other thing that we have is a lot of distributed systems tooling that we've worked on.
At the same time, in parallel with building the operating system, we realized that if you're going to have a large cluster of machines, you need a strong source of truth across all of those machines. In our case, that's a product called etcd, which I'll talk about in a second. But we put together these other things like Fleet and Flannel and all that. It's a bunch of tools that fit into the white space between the existing gaps. Like, okay, you've got systemd on a box, and you've got Mesos handling cluster management across a large number of hosts. What if you don't need the weight of Mesos? What if you just need a little bit in between? That's the idea of where we're trying to fit with a lot of the individual tools that we're building. So, to go back to the failure case when updating the actual operating system: you have this kind of peculiar partition scheme where you have your EFI partition, where your bootloader actually lives; you have a different partition where your kernel actually lives; and you have a primary data partition, which is where the state of the actual host lives. Then you have two copies of /usr. If you've read the Linux filesystem hierarchy, you'd know that the idea of /usr is that it holds binaries shipped by the vendor. If you want to add in anything in particular, then you, the local entity, add that to /usr/local, but everything the vendor ships should be in /usr. Configuration goes in /etc and all of that, but the binaries are kind of their own thing. So we have two copies of this partition. What happens is, when we boot this host, we're running off of partition A. All the data is sitting on the data partition. We will stage the update down onto a RAM disk, and we'll go through and do cryptographic checks. We verify the GPG key, we verify the hash of things.
We go through and check that all the metadata actually matches and that the data came from a trusted entity. Then we apply that to the B partition. We reboot. YOLO, we just reboot the thing. And when that box is coming back online, we have a custom GRUB module that checks everything to make sure that everything looks the way it's supposed to. If the tests on the partition succeed, we just keep going. Everything is set; continue normal operation; we're just now running from the B partition. And we write that metadata to the partition table on the disk, marking it as the good partition. If for some reason that fails, well, we're set: we can fail back to the known-good partition. So we can keep doing this process until things work. Now, obviously you don't want your boxes just flapping, so storing that metadata gives you the opportunity to say, hey, things are going a little weird here, maybe we should not actually be rebooting that. We actually tag the last known good and the last failed version, and we're able to work through that. It's through storing that on the GPT on the disk that we're able to work through it. But after we've done that, we need to actually have a workload, and that's where application containers come in. As I mentioned, in our case, that's a tool called Rocket. Rocket is a thing that runs in the foreground. It is designed to be tightly integrated with systemd; our init system in CoreOS is systemd. Rocket was designed to be an implementation of the appc spec. The appc spec is something that we started working on back in November of last year, where we said, hey, if you're going to have containers and you're going to design these containers, you probably want them to work on different operating systems, not just opinionated ones like CoreOS. So having a specification allows you to have different implementations of that specification, and Rocket is just our implementation. The folks over at Apcera have Kurma.
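The stage, verify, reboot, and fall-back loop described a moment ago can be sketched as a tiny simulation. This is illustrative Python with made-up names, not CoreOS's actual update engine (which keeps these flags as metadata on the GPT rather than in a Python object).

```python
# Toy model of the A/B update scheme: two /usr partitions, an active one
# and a passive one that updates get staged onto. The "successful" flag
# stands in for the boot metadata CoreOS keeps on the GPT.

class Partition:
    def __init__(self, name, version):
        self.name = name
        self.version = version
        self.successful = True      # has this partition booted cleanly?

class Host:
    def __init__(self):
        self.a = Partition("USR-A", "638.0.0")
        self.b = Partition("USR-B", "522.0.0")
        self.active = self.a        # partition we are currently running from

    def _passive(self):
        return self.b if self.active is self.a else self.a

    def stage_update(self, version, verified):
        """Write a new image to the passive partition, but only if the
        GPG/hash verification of the download succeeded."""
        if not verified:
            raise ValueError("update failed cryptographic verification")
        passive = self._passive()
        passive.version = version
        passive.successful = False  # unproven until it boots cleanly

    def reboot(self, new_partition_boots_ok):
        """Boot into the staged partition; fall back to the known-good
        one if the post-boot checks fail."""
        passive = self._passive()
        if new_partition_boots_ok:
            passive.successful = True
            self.active = passive   # promote and keep running from it
        return self.active          # else: still on the known-good side
```

A failed boot of the staged partition leaves the host running from the still-good side; a clean boot promotes the new partition, and the next update targets the other one.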
The folks over at 3ofcoins in Poland actually wrote an appc implementation that runs ACIs, the container images for Rocket, on FreeBSD. But instead of using kernel namespaces, they utilize FreeBSD jails, and then ZFS for the underlying graph driver rather than Btrfs or OverlayFS like we do. The next piece that we've got inside of it is etcd. etcd is a highly available key-value store that handles storing data and gives you semaphores across a network. So now you can actually begin coordinating locks between components of your application, regardless of the size of the cluster, regardless of the number of members in the cluster, regardless of whether a process is sitting on the same host. Now, I want to reiterate that all of this stuff is open source. All of this stuff gets consumed by things beyond CoreOS as well. etcd is actually the storage engine inside of Kubernetes, which is why I bring it up now. So etcd resides on every single CoreOS host, but it's also the source of truth inside of a Kubernetes cluster. One of the things that was explained by the folks over at Google, through their operation of containers for close to a decade now, is that it's really important that every container gets its own IP address. Don't try to deal with NAT and port mapping and all of that; it just makes a giant mess. And after really thinking through that, we put together a tool called Flannel. Flannel is interesting because, on top of etcd being incorporated in Kubernetes, Flannel is actually incorporated into Magnum as well, to make all these pieces easier. So it's interesting because we built all these tools, and we didn't build them necessarily with specific projects in mind besides our own, but we tried to always be very, very mindful about how we put these pieces together so that they could be consumed by other things upstream. So now we're gonna move a little bit over and we're gonna talk about Kubernetes.
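Before moving on to Kubernetes, the "semaphores across a network" idea is worth making concrete. etcd exposes atomic test-and-set operations on keys, and that one primitive is enough to build a distributed lock. Here is a minimal in-memory sketch in Python; the class and method names are invented for illustration and are not the real etcd client API.

```python
# A lock built on one primitive: atomically set a key only if its current
# value is what we expect. Real etcd provides this over HTTP and replicates
# it across the cluster; this in-memory stand-in shows just the logic.

class MiniEtcd:
    def __init__(self):
        self._kv = {}

    def compare_and_swap(self, key, prev_value, new_value):
        """Set key to new_value only if it currently holds prev_value
        (None means 'key must be absent'). Returns True on success."""
        if self._kv.get(key) != prev_value:
            return False            # another client changed it first
        self._kv[key] = new_value
        return True

    def delete(self, key):
        self._kv.pop(key, None)

store = MiniEtcd()
# Two workers race to grab the same lock key; exactly one can win,
# no matter which hosts the workers happen to be running on.
worker1_won = store.compare_and_swap("/locks/db-migrate", None, "worker-1")
worker2_won = store.compare_and_swap("/locks/db-migrate", None, "worker-2")
```

Releasing the lock (deleting the key) lets the next contender acquire it, which is the coordination pattern the talk keeps coming back to.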
So this actually came up in Superuser magazine. I apologize to the Warden and OpenVZ folks, you're in my hearts, but unfortunately we had to chop them off the bottom of the slide there in order to make it fit. But it kind of illuminates how the questions in this space are being a little strangely framed right now. Docker and Kubernetes and LXC and Mesos and Rocket and LXD are all being thrown into the same bucket, but it's not an apples-to-apples comparison. Comparing Rocket to Mesos, or comparing LXD to Kubernetes, is a strange comparison, because some of these are containerization engines that actually execute the container, and others are orchestrators of the containers. Kubernetes and Mesos do similar things: they take a container and schedule it across some control plane or some worker plane of execution, to make sure that that container stays online and running. Whereas Docker and LXC and Rocket and LXD are all container execution engines. So inside of Kubernetes, the most basic thing that you have is a single container. In this case, like I was saying, we'll use a Rocket container. The next thing is that, in this world, a container runs inside of a pod. A pod is a set of kernel namespaces and an IP address, and this pod will run on a single host. That is, as far as Kubernetes is truly concerned, the smallest atomic unit that it will ever directly manage. Now, the thing about a pod, and why this is different from some of the implementations folks are used to with pieces like Docker, is that a pod can contain n number of containers. You can have one container in a pod; you can have 50 containers in a pod. And what will happen is they will share all of those same namespaces. This is really useful if you wanna run two processes side by side.
You know, let's say that you had three containers in this case; I kinda set it up for that. Let's say that you have a worker which transforms information and writes it to a remote location. It's a general API worker: it grabs some piece of data and does something with it. You have a second process which watches the worker and sends its logs to a remote location. Especially if you've been writing things using something like Unicorn to fire these off, you begin to hit a point where running a single process in a single container doesn't make sense, but that doesn't mean you start putting init systems into your container, because that doesn't entirely make sense either. At that point, you're beginning to just create virtual machines. And virtual machines have their place. I don't wanna make it sound like it's virtual machines versus containers; they have their purposes and they provide different benefits. And I don't even wanna go so far as to say that performance is really one of those, because that's a little bit of a straw-man argument. You tune things right, play some games, and you can get VMs that are just as performant as your containers. But let's say that we then have this third process, which is just running some job on a regular interval. It's a cleanup process which purges old logs and keeps things happy. So all three of these processes will run inside the same PID namespace, the same UTS namespace, the same IPC namespace, the same network namespace. And because they're all in that same network namespace, they're all gonna share a single IP address, or whatever IP addresses are actually assigned to the pod. So once we have a pod in place: pods are instantiated by the idea of a replication controller. A replication controller is a reconciler that asks, what is the state? What is the desired state? You can think of it like what Puppet is doing: what is the configuration on this box?
What is the desired configuration on this box? In this case, our replication count is one. But we can also say, across our entire worker system, the replication count becomes three. And it will stamp out three copies of that same pod, that same set of containers all running together, across the entire worker plane. And you can just keep going. That's the basic idea. The next piece, moving across the stack, is that you need to be able to get to these, and that is where you have a service in Kubernetes. In this case, the service has an IP address configured on it. The service is the IP address that will be reachable by a load balancer. It's the thing that you want to expose to the outside world; an end user would never directly hit a pod. But there are some problems with this, problems which OpenStack is already poised to solve. And that's why I wanted to talk about OpenStack after we had a foundation of understanding what the various pieces are. So obviously, you need some place to run these, and the most obvious choice is that you're going to use Nova. I mean, that's a given. You're going to use Nova to actually instantiate VMs, likely, to handle the machine workloads of the components of Kubernetes. But when you are designing things to run inside of a container, there are a few anti-patterns that you should avoid. One: you shouldn't actually be storing files in the container. The container should be ephemeral. Think of this as going back to the original ideas of the early OpenStack days. You're not using OpenStack to spin up long-running machines. You spin up a machine, you tear it down. You spin up a machine, you tear it down. It should be ephemeral. Containers are the same way. If you need to save the state of something, if you need to save files, that should be placed outside of the container. Fortunately, we've already got something to do that.
Now, that means that your applications actually need to know how to talk to an object store. They need to be able to take their files and say, okay, I've retrieved this asset, I'm going to do some work with it, I'm now done, I need to commit it back. So Swift allows us to handle that. And you take this the next step beyond. You go, well, if you're not supposed to store files in a container, that's going to make it really, really hard to handle persistence, and not just file persistence but database persistence. Fortunately, this is a problem that's also been solved. Mostly solved; it's pretty good. You use Trove. Trove is the database-as-a-service component. Trove is the thing that lets you make the API call to say, hey, I need a MySQL database, please bring that online for me. It's not the only answer to this; it's the answer that OpenStack provides today. Now, if you didn't need that, if you wanted something more like CockroachDB or RethinkDB or one of those, you could just run it directly inside of Kubernetes, because in that case the database software itself is going to replicate the individual files to all of the containers that will be running them. So when a container goes offline, it's not that big of a deal, because you've got additional copies that can handle that workload. Trove is really for the older applications, applications like WordPress. I love picking on WordPress, partially because I've used it so much, but also because people really understand it. If you break apart what WordPress actually is, it's a PHP application that needs to store things in a database and needs to store files on a disk. And it was never designed for this era of compute workloads. Open source software is still catching up to this idea that we need to design applications in a new way. That's evinced by the fact that if you look at a lot of examples of containers today, you go to instantiate them, and it's a gigabyte worth of stuff.
You know, it's pulling in Ubuntu and then doing apt-get installs of software. I think that's extremely valuable as a learning exercise or an initial development exercise, but in the long run, we need to get to a point where we understand the pieces of software we are shipping better, so that we understand the dependencies and can trim that down. Now, obviously, if you've got these containers running all over a control plane, you need to actually be able to get to them, and that is where load balancer as a service through Neutron comes in. Because now you can make these API calls to say, I've brought up a new pod, or I've brought up a new copy of a service over here; I need to be able to route traffic to that; make this API call to add me as a member of that pool. And then things get a little weird, and this is one of the things that I'm really stoked on. Devananda from HP, and Russell and Jay and Paul and a bunch of folks at Rackspace and formerly of Rackspace, did a lot of work on Ironic. The Rackspace implementation in its commercial form is called OnMetal. The idea of it is that you use the Nova APIs to provision an actual physical machine for by-the-minute usage. And they actually used CoreOS inside of this to make it happen. So what this means is that using Ironic, you don't even need to be sequestered to virtual machines. You can bring the CoreOS host online itself, and then run Kubernetes in containers on top of that. It kind of goes turtles all the way down. But when it comes to Ironic, it becomes interesting because CoreOS runs in RAM. So they run CoreOS in RAM, they pull in the remote API worker that can talk to Nova, and then on the back end they go through and set up the PXE configurations and how to actually talk to the BMC or the IPMI on the box and control it from there.
But this is kind of how all of this starts to look in practice. You have some set of Kubernetes nodes that you designate as controllers, and some set of Kubernetes nodes that you designate as workers. And then you handle the sharing of information throughout all of these through components like load balancer as a service in Neutron, and Trove, and Swift. And I differentiate these in colors because there are some other problems today in Kubernetes, and they're being solved. We at CoreOS actually have a solution that works for us but doesn't work for everyone else, which is why we do it this way. But in this case, the nodes in red up at the top, those are your unicorns. The ones down in blue, those are your robots. Robots are easy to crush and remanufacture and destroy. But if somebody kills your unicorn, you are really bummed. This is the idea of pets versus cattle, but there's the realization that some people get kind of bummed out when you talk about killing Fido or the wholesale slaughter of cattle, so I prefer unicorns and robots. When that unicorn sheds its single tear and you break off its horn and use it to power up new portions of your infrastructure, you can gain its power, but it's going to be a bad time for a little bit. Now, all of this may seem familiar to folks who have been going to some of the other talks here. It's like, well, this sounds a lot like Magnum. But it's not Magnum, and there are a few specific reasons why. Magnum uses Kubernetes, but instead of it just being instantiated directly as containers on top of a box, it leverages other orchestration mechanisms, or the application packaging of Murano, to be able to schedule that throughout the entire cluster. It means that you're going to be able to run it on things beyond CoreOS, which is obviously the goal of Adrian working on that. It gives you more flexibility, but it's going to add in additional layers.
You're going to have to have someone curating that Heat template for your image, for your chosen operating system, for your version of Kubernetes. That's where, personally, just because I'm closer and obviously aligned with a specific distro, I've identified how some of those pieces can be removed from the stack and how you can solve them. But at the same time, Magnum does use other pieces. Like I mentioned before, the CoreOS pieces like Flannel are involved as well. So Flannel's giving you that overlay network. The easiest way to think of Flannel, for folks who aren't familiar with Flannel but are familiar with Docker, is that Flannel is giving you that docker0 bridge across all of your workers. It means that a container with an arbitrary IP address on one node can talk to a different container with an arbitrary IP address on a different node. It uses an overlay network to do the communication between them. So it's fascinating seeing how these pieces can be composed in lots of different ways, like Legos, to put together the system the way that you want to. And that's been the exciting part for me. So I've been talking for just about 35 minutes, a little short of that, and this was scheduled for 40, and I wanted to make sure that we had time for questions from folks. And we've still got that moment before everybody's lined up at or near the microphone, so hopefully that means that folks will have some things there. So all of this kind of gets put together; myself and another CoreOSer, as we call ourselves, are working on it. This is actually Brian Waldon, who is the former PTL of Glance. So we've got some ties back into this. He doesn't really like having my beard rubbing on his shoulder, but he can suffer through that.
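Before the Q&A: the replication-controller behavior from earlier, driving observed state toward desired state, is compact enough to sketch. This is an illustrative reconciliation pass in Python, not Kubernetes source code; the names are invented.

```python
# One reconciliation pass: compare the desired replica count against the
# pods actually observed, then create or delete pods to close the gap.

def reconcile(desired_replicas, running_pods, pod_template):
    """Return the pod list after driving observed state to desired state."""
    pods = list(running_pods)
    while len(pods) < desired_replicas:   # too few: stamp out more copies
        pods.append(f"{pod_template}-{len(pods)}")
    while len(pods) > desired_replicas:   # too many: reap the extras
        pods.pop()
    return pods

# Bumping the count from 1 to 3 stamps out two more copies of the pod;
# dropping back to 1 reaps them again.
scaled_up = reconcile(3, ["worker-0"], "worker")
scaled_down = reconcile(1, scaled_up, "worker")
```

The real controller runs this comparison continuously, so a pod that dies on one worker simply shows up as a deficit on the next pass and gets replaced somewhere in the worker plane.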
So at this point, I'm just gonna leave that slide awkwardly up and we'll take some questions. So if folks want to direct themselves over to the mics, or a mic, it appears, we can do that. I'm sure that folks are at least a little bit curious about this stuff. I have some other things here that I can show off as well that give you a little bit more of a view into Kubernetes itself: the idea of how you instantiate and remove pods, how you build the service controllers, and all that. So the question was, what is the timeline on my beard? It's actually funny, because I was in the shower this morning thinking, I need to actually put together the frequently asked questions section on my website, because here's how this normally goes. Hey, excuse me, I got a question for you. 14 months. What? You were about to ask how long I've been growing my beard. At the moment, it's 14 months. Now, the photo back at the beginning, where my beard was down to there, that was actually four years. I got tired of all the Duck Dynasty comments and shaved it off. The beard is actually the same length of time that I've been working at CoreOS, so there are two major CoreOS epochs: March 3rd, 2014, which is when this beard started growing, and then also, funny enough, July 1st, 2013. For anyone who uses CoreOS, you'll notice that we have this semantic versioning of our actual images. You'll see that there's version 512, then there's version 638. That first number is the number of days since the initial CoreOS epoch. So you can look at it and go, okay, this image came out 638 days after July 1st. This other one came out 681 days after, and then we kind of bump them from there. But in CoreOS, because we're doing this auto-update process, we have three major channels: there's an alpha, there's a beta, and there's a stable. Every image that becomes stable has been through both beta and alpha. We make sure that everything gets tested that way.
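The version arithmetic just described is easy to check for yourself. A quick Python sketch of it, taking the July 1st, 2013 epoch from the talk:

```python
from datetime import date, timedelta

# The first field of a CoreOS image version is the number of days since
# the initial CoreOS epoch of July 1, 2013.
COREOS_EPOCH = date(2013, 7, 1)

def release_date(version):
    """Turn a version string like '638.0.0' into the image's build date."""
    days = int(version.split(".")[0])
    return COREOS_EPOCH + timedelta(days=days)

def major_version(on_date):
    """The inverse: which major version number a given date maps to."""
    return (on_date - COREOS_EPOCH).days
```

By this arithmetic, release_date("638.0.0") works out to March 31, 2015, which fits the timeframe of the images being mentioned here.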
The cadence is roughly that a CoreOS image gets released to alpha once a week, to beta roughly once every two weeks, and to stable roughly once a month. Another question? Wow, either I have failed, or it feels good to be so good that nobody has any questions. Or... yes, yes. So the question was: given that I explained there is load balancer as a service through Neutron, and that Kubernetes services are exposed on individual CoreOS hosts, how do we actually expand the size of the cluster, and how do we handle the IP addresses of all the nodes? So, in the case of Kubernetes, the way that we're doing it on CoreOS, individual workers will proxy traffic to wherever that workload actually happens to be. Kubernetes uses this idea of label queries, so you can say that all traffic coming inbound to this IP address will get redirected to all pods that have a specific label that you specify. And because you can have multiple containers in a single pod, you can actually schedule components there that say, hey, when this pod comes online, it should make an update to Neutron for me to bring this component into place. Now, that also means that you do need to schedule a set of IP addresses up front that are in the pool available for the load balancer as a service, similar to how you'd have fixed IPs for Neutron. Yes. So the question was, do we see this as a core change inside of Kubernetes to be able to handle OpenStack? One of the things they've been doing is a lot of work specifically around making external load balancers work as a first-class citizen. For us at CoreOS, it has meant that having components like that, which can make an update directly to the external load balancer on the pod's behalf, means that the containers don't need to worry about it. It's not there today, but it will be landing relatively soon.
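The label-query mechanism mentioned in that answer can be sketched in a few lines. Illustrative Python only; the pod data and field names here are made up for the example, not the actual Kubernetes API objects.

```python
# A service selects its endpoints with a label query: every pod whose
# labels contain all of the selector's key/value pairs receives traffic,
# regardless of which worker the pod landed on.

def matches(selector, labels):
    """A pod matches when every key/value in the selector is present."""
    return all(labels.get(k) == v for k, v in selector.items())

def endpoints(selector, pods):
    """Pod IPs that traffic to the service should be proxied to."""
    return [pod["ip"] for pod in pods if matches(selector, pod["labels"])]

pods = [
    {"ip": "10.2.1.4", "labels": {"app": "api", "tier": "frontend"}},
    {"ip": "10.2.7.9", "labels": {"app": "api", "tier": "backend"}},
    {"ip": "10.2.3.2", "labels": {"app": "logcollector"}},
]
api_ips = endpoints({"app": "api"}, pods)   # both api pods, any node
```

Because the selection happens by label rather than by address, a pod that gets rescheduled onto a different worker rejoins the service the moment it comes up with the right labels.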
They're actually in discussions to completely revamp how the external load balancers work inside of Kubernetes. It's something we're hoping is going to be finished for 1.0, but it's still a little bit undecided; I'd have to go back and look at the pull requests or the issues actually discussing it. Today they can just do that type of update to Google's load balancer service; there are other folks working on the Amazon one, and still others working on the updates to Neutron. So the idea is definitely going to be supported, it's just not quite there yet. Yes, question in the back. Okay, for the benefit of folks watching at home, I'm going to try to repeat that. The question was: in the case of the docker0 bridge, or the case of Flannel being used with Neutron, Flannel expects to be able to manage some of that IP addressing rather than letting Neutron handle it the traditional way. So how do we handle it when you could have multiple containers coming up at the exact same time, potentially trying to claim the exact same IP address? Was that a good summary? Okay. Today that works with how Docker looks at the existing host routes to figure out how to choose an IP address. What Flannel actually does is route a larger subnet to the entire cluster. Since Flannel is an overlay network, it can operate entirely with RFC 1918 addresses that aren't reachable outside the cluster. You have a bridge at the worker level that acts as a proxy into Flannel and can then route traffic to the right place. But because you assign something like a /24 or a /23 to each individual host, you'll never have an IP conflict, because the routing actually occurs at a different level.
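The addressing model just described can be sketched in a few lines. This is an illustration of the idea, not Flannel's actual code: the cluster shares one large overlay network, each host leases its own smaller subnet out of it, and containers draw IPs only from their host's lease, so two hosts can never hand out the same address even if containers start at the same instant.

```python
import ipaddress

# The whole cluster's overlay range (an RFC 1918 block).
cluster_network = ipaddress.ip_network("10.1.0.0/16")

# Carve the /16 into per-host /24 leases.
host_subnets = list(cluster_network.subnets(new_prefix=24))

host_a = host_subnets[0]   # 10.1.0.0/24
host_b = host_subnets[1]   # 10.1.1.0/24

# Each host assigns container IPs independently from its own range.
container_on_a = next(host_a.hosts())  # 10.1.0.1
container_on_b = next(host_b.hosts())  # 10.1.1.1

# The ranges are disjoint, so a conflict is impossible by construction.
assert container_on_a in host_a and container_on_a not in host_b
print(container_on_a, container_on_b)
```

Routing between hosts then happens at the subnet level: to reach anything in 10.1.1.0/24, you send to host B, and host B's bridge does the last hop.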
Now, in the case of Kubernetes, there's a competing project to Flannel called Calico, which does this implementation through iBGP. Calico manages the route to each container as a /32 route and redistributes those routes using iBGP, so an IP address could be on any host at any given time. And Calico actually uses etcd as well to give you that central source of truth. The idea is that as soon as I request an IP address from the pool etcd is managing, everything reading etcd immediately sees that the address has been knocked out, that it's in use. So it gives you the ability to really lock it down and get an atomic point in time for making a request. That's exactly how etcd can be used as a semaphore. So it looks like we've got time for maybe one more question. Okay, we've got one last question here. Yes, so the question was: does Kubernetes have an inventory of the known VMs at any given time? The one thing I'll say is that it doesn't have an inventory of VMs specifically; it has an inventory of the machines involved, because it's not specific to virtualization. It just cares about who is a worker, and it knows all of the individual workers that can service requests. It also keeps a complete inventory of all the running pods on every single one of those workers, and of all the services routing requests to each of those pods. If you grab me afterwards, I can show you some pretty good visualizations of it. Yes, so the question was, do we have a workshop? Tomorrow we are doing an entire development-day type thing via the community days, from, I believe, 1:30 to 6:00 with a little bit of a break in the middle. We're going to be doing development assistance and getting-started help around all of the various components we've been talking about here: etcd, rkt, Kubernetes, all of the above.
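One last aside before wrapping up: the etcd-as-semaphore idea mentioned a moment ago boils down to an atomic create-if-absent. Here's a minimal in-memory simulation of that semantics (a real deployment would issue the same operation against etcd itself, e.g. a v2 API write with `prevExist=false`, which fails if the key already exists; the class and key names here are made up for illustration):

```python
# Simulated etcd create-if-absent: only the first writer can claim a key.
class FakeEtcd:
    def __init__(self):
        self._keys = {}

    def create(self, key, value):
        """Atomically create `key`; fail if it already exists."""
        if key in self._keys:
            return False          # someone else already holds it
        self._keys[key] = value
        return True

store = FakeEtcd()

# Two hosts race to claim the same IP address from the pool:
first = store.create("/pool/10.1.0.5", "host-a")   # wins the address
second = store.create("/pool/10.1.0.5", "host-b")  # sees it's taken

print(first, second)  # True False
```

Because the create either succeeds or fails atomically, every reader gets a consistent answer about who owns the address; there's no window where two hosts both believe they claimed it.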
So if you take a look at the schedule, you'll be able to see where that's actually being done. So at this point I say thank you very much for your time and feel free to grab me if you've got more questions. Thank you.