Hello, thank you all for joining us. So I'm Marco Ceppi. And I'm James Page. And we'll be talking briefly about containerizing OpenStack and Kubernetes, and how you can get a containerized, really dense production deployment of each in under 60 minutes. In fact, even faster than that, because we only actually have 40 minutes, so it's going to have to be quick. So before we get too deep into how we're actually going to do this — and we're going to go quickly because, again, 40 minutes — I want to talk briefly about what a container is. What's that? Oh, oops. Cool, thank you.

So we're talking about containers. "Containers" is actually quite an overloaded term; it can mean one of many different things. So what I want to do is walk through what we mean when we talk about containers, which containers we'll be covering today, and how they apply in each of these scenarios.

So to start — the screen is going to pop up in a second — traditionally, all containers start at the lowest-level piece, which is your hardware. This is a Linux operating system running on a machine somewhere. It could be your laptop, it could be a server in a rack somewhere, it could potentially be a cloud as well. And on there you have a set of familiar things. You've got a set of processes running — those vertical lines — things like your init, cron, SSH daemon, the pieces of software that have existed in Linux distributions since effectively the early 90s. You've got a disk — some semblance of disk; it could be an SSD, a rotary disk, a set of things. And then at the bottom you have some kind of networking, typically an IP stack attached to one or more NICs, et cetera. And then you're running your application on there. You're running an Apache web server, you're running a bunch of custom code you've written — some Python code, maybe some Golang bits. You're running a process on there that is effectively executing what you desire.

The evolution of that is virtual machines. That's the ability, as I'm sure many of you know because we're running OpenStack, to take a physical piece of hardware and slice it up. What you get is the same view, except you're chunking out resources — CPU, memory — and you're virtualizing all of that onto virtual hardware, basically. So you can run different operating systems on it. It's not tied to the host hardware, except for the fact that it's utilizing those resources, which have been sliced out for it. You've got disk, you've got your processes running in there, you've got networking attached to it, et cetera.

So when we talk about containers, most people think of process containers. That's your Docker, your rkt, your runc, your OCID — those are all projects that implement Docker-style containers. And really what you're talking about is a process container: the isolation of the actual process you care about — your app running — plus just enough operating system to provide that. But you're not loading the whole suite of tools. You don't have an init. You don't have an SSH daemon. You don't have a crontab. None of those things are running; you're just running that specific process. You've got some form of disk — usually an immutable space, with a little bit of operating system and enough dependencies to get things running. And then you've got a network stack.
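To make that concrete, a process container is what you get out of a plain docker run — roughly something like this, with the image name just an example:

    $ docker run -d --name web nginx    # starts exactly one process tree: nginx
    $ docker top web                    # no init, no cron, no sshd -- just nginx and its workers

That's the whole machine, as far as the container is concerned.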
What lends itself to this model is that you can stack a ton of these things on a single host. Because they're not taking and isolating resources from the machine — they're just virtualizing the process space and enough to get it running — you get a lot of density out of them. You can run a ton on a single machine. And when it comes to managing these things: with virtual machines, it's traditional operations. You SSH to them, you run configuration management against them; you've got a suite of tools that have assisted in building machines from scratch into a complete running operating system since effectively the dawn of more than one machine in a rack. With process containers, because they're so different, you need a new breed of management to operate them. That's where platforms like Mesosphere DC/OS, Kubernetes and Docker Swarm come into play. These are all platforms that provide the operational material you need in order to manage this new type of immutable, siloed process container.

The next container you'll typically come across is a machine container. This is kind of a mix of the two breeds. You've got a machine: it's got a disk like you'd expect it to, it's got a networking stack for accessibility like you'd expect on a virtual machine, and it's got a series of processes running — the same things you'd find on any Linux operating system: sshd, init, cron. And finally it's got the application you're actually interested in running on there. But unlike virtual machines, you don't have the overhead of the actual physical allocation of resources to that machine. You're using and borrowing the same technology that process containers build on to do the isolation. As a result, you get a lot of density of machines — you're able to pack somewhere around 14 times more machines onto a single piece of hardware than you would with a normal virtual machine experience. And it's managed the same way you manage traditional applications: it's got SSH, you can do configuration management against it. All the tools you've come to learn and love will work against these types of machines.

And when it comes to breeds of containers, those are the majority of things people come across today. This ecosystem is ever expanding and always changing, however. Just as we started with process containers and machine containers very early on, it's being expanded to encompass more types of containers. The last one is an application container, which effectively extends the host: it does not have its own TCP/IP stack, doesn't have its own file system or its own operating system, but provides the same isolation and constraints that you'd expect for running your process. This is for when you want the security but you don't need the density — you don't need the overhead of an entire operating system disk to serve a single process. And in today's conversation about containerizing OpenStack and Kubernetes, we're gonna cross the breadth of these tools: how they all come into play, and how you can utilize them to get much more use out of your hardware for these deployments.

So in these conversations, across the top you see a lot of the process-container-style management: rkt, Docker, OCID, a bunch of tools. At the bottom you're getting down to actual virtual machine management platforms: that's VMware, Hyper-V, KVM.
And then finally there's also LXD, which provides you those machine-container constraints. These are the players that we'll be dealing with today. And so I'm gonna pass off to James Page, who's gonna walk you through how we can use this technology in OpenStack. Go on, do some cool stuff.

Thanks, Marco. So we're gonna be focusing specifically on how we use machine containers, both to provide instances to users of your cloud and to containerize parts of the control plane of the cloud as well. We'll start off by talking about nova-lxd. Just out of interest, who's heard of nova-lxd? Okay, about 20 or so people. So nova-lxd is a driver for Nova that allows you to integrate with the LXD container hypervisor as a hypervisor choice, rather than KVM via libvirt, for example. It has very similar semantics in terms of how you manage the instances, which run in LXD containers rather than KVM machines. You can use the standard Nova API and all your standard lifecycle operations. You've got standard resource constraints, so you can apply CPU and memory configuration to the instances your users are using, and charge your end users accordingly based on the resources they're actually consuming. We've also got some other nice features in nova-lxd. Because it's a very thin layer on top of the hardware, we can configure our clouds to do what we call an exclusive-machine scheduling trick. That basically means the LXD container gets 99.9% of the hardware resources on the box, with a very thin overhead. But it can still plug into things like the underlying Neutron virtual networking and consume block storage from Cinder. So without any of the complexity of actually giving a user a piece of hardware, we can give them a close-to-hardware experience via an exclusive machine.

Okay, so that's how we can use machine containers within Nova and within OpenStack specifically. I also want to talk about how we can use machine containers when we're deploying clouds. There are two projects currently in the OpenStack community that do this: OpenStack-Ansible, and OpenStack Charms, which I'm the PTL of. They both leverage LXC or LXD containers to containerize the control plane of an OpenStack cloud. What that allows us to do is segregate those control plane services into their own discrete file systems, each with an IP address and potentially multiple network interfaces. They can be managed over SSH, and you can monitor them exactly as you would traditional applications — things like Nagios or Ganglia, or maybe Filebeat and an ELK stack, can be used to monitor all those things just as you normally would. But it gives us a lot of agility in how we deploy OpenStack across a given set of hardware. By using LXD in conjunction with something like MAAS, we can put Ubuntu down on physical servers and then place the control plane as we desire across that infrastructure.

So this is the classic monolithic approach to the control plane, where you have all of your OpenStack control plane services — on the left-hand side there — on, say, three servers. In that approach, we'd still containerize those things so that they're all isolated from each other. Those containers are bridged onto the underlying network fabric, so they have all the same network access the host does, but they're segregated from each other. And we push the compute and storage out onto its own physical hardware.
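If you haven't played with LXD itself, the machine-container experience underneath all of this is just a couple of commands — a rough sketch, with the image alias and container name just examples:

    $ lxc launch ubuntu:16.04 ctl1   # boots a full Ubuntu userspace: init, sshd, cron
    $ lxc exec ctl1 -- ps aux        # a whole process tree, not just one process
    $ lxc list                       # each container gets its own IP on the bridge

Which is why your existing SSH and configuration management tooling works against these unchanged.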
Or we can think about that in a slightly different way. In a converged architecture, we can combine all of those things: we run compute and storage on the underlying infrastructure, and then we spread the control plane — again in LXD containers — right alongside tenant instances. Now, there are some risks to that: if you get some noisy tenants, that can potentially cause problems. But especially in larger cloud deployments, where you can have hundreds of compute hosts, the overhead of running the control plane is probably somewhere between 20 to 30 containers — so that's maybe a one-in-three chance of a given host even having an LXD control plane container that a KVM instance could potentially impact. What I'm saying is that LXD and machine containers give us a lot of agility to slice and manage our infrastructure in a much more intelligent way than if we were just trying to push everything onto the physical infrastructure itself.

So we're gonna take a quick look at what that convergence looks like. Mark mentioned this tool in the previous talk: this is a cloud that's been deployed using Juju, which is the service modeling and deployment tool we've produced at Canonical over the last few years. It's using the OpenStack charm set and a number of supporting charms, including things like Nagios and hacluster to provide monitoring and clustering within the deployment. But the key thing here is that this is nine physical machines. So we've got a complex model of OpenStack in terms of relations — how things need to be configured and communicate with each other for messaging and database — but it's actually only nine physical servers. Each of these servers has a number of services deployed on it. Some of them don't have very much on them: we've decided to dedicate a piece of hardware, for example, to the north-south traffic routing and the network services deployed to serve tenants via Neutron. But on some of them we're using a number of LXD containers — which you can see on the right-hand side there — for the various components of the cloud. So we have high availability in all of the services, we're providing VIPs for access, and we're able to spread and manage the LXD containers over the underlying physical server resources. And on this one we've actually cheated a bit, because some of these servers are spindles with an NVMe bcache front end, which has nice IOPS characteristics. So we've pushed things like MySQL and RabbitMQ onto that particular infrastructure, and made an architectural choice about where we want to place those in our cloud.

Okay, so taking that trick of using LXD containers and physical machines, we can apply that same technology set to condense a cloud onto a single piece of infrastructure. Now, I was gonna do this on my laptop, but I'm a little bit short of storage, because I'm very scruffy with files, I'm afraid. So I've got an instance running on a cloud instead. And what I'm gonna do is use a tool called conjure-up, which is a downstream project that consumes both Juju and the OpenStack charms, to deploy an OpenStack cloud completely in LXD containers, running on this one piece of infrastructure. It's an eight-core, 32-gig server — I'm using a lot less than that, but I wanted one with an SSD to get some nice IOPS characteristics. So a typical modern laptop — 16 gig of RAM, four cores and an SSD — it's completely feasible to do this on.
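For reference, kicking that off is only a couple of commands — a hedged sketch; the spell name is whatever conjure-up ships for OpenStack at the time:

    $ sudo snap install conjure-up --classic   # conjure-up itself is delivered as a snap
    $ conjure-up openstack                     # pick the LXD-based spell and follow the prompts

Everything else — the containers, the charms, the relations — is driven from that interactive session.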
So I'm gonna kick this off — it does take a little bit of time — and then I'm gonna hand over to Marco to talk a bit more about OpenStack and Kubernetes specifically. So let me just kick this off. And hopefully, in about 15 minutes when Marco's finished talking, we'll have a cloud we can spin up some instances on. Yep, let me show you — no, it's a cloud instance; I'm not doing it on my local machine, because we wouldn't be able to change the slides while it deploys. It's quite intensive. Sorry? All in one? All in one, yep. It could absolutely be your laptop — in this case, I don't think James's poor little ThinkPad would hold up with deploying and presenting at the same time. That's a trick.

Okay, so I'm gonna hand back to Marco now. Let's — okay, so this approach allows you to make basically your laptop, or a single server, the first part of your pipeline for how you approach deployment of OpenStack: developing and effecting a change to the architectural configuration of your cloud, and then taking that through testing and production environments. You can use exactly the same tools and concepts, just with different architectural choices in terms of placement, to take, say, a config change or a charm change through that entire process. Okay, Marco — Kubernetes.

So, who here is actually running Kubernetes today? Cool, a few of you. How many in the room are looking to use Kubernetes in the near future? Yeah, quite a few of you, great. So I'm gonna talk briefly about Kubernetes as an architecture, and where we can apply a lot of the same characteristics that we do for OpenStack: containerizing control plane services, and even using LXD to provide CPU pinning and other isolation mechanisms for Kubernetes. First I just wanna walk through what Kubernetes is at an architecture level, because generally speaking, Kubernetes at an OpenStack summit is still pretty much a new topic.

Kubernetes, at its core, is a means for coordinating containers. What it provides is that same infrastructure, the same language, the same mechanism for managing Docker and Docker-style process containers in a reusable, reliable API fashion. Much in the same way that OpenStack provides you a consistent API entry point for managing machines, Kubernetes aims to solve that same problem at the Docker layer, the process-style container. It commoditizes and provides you the means for abstracting compute, network and storage — much how OpenStack does for VMs, but at the container level. That's ultimately the goal of the project. You'll notice they have opened up support for other container mechanisms — it's not just Docker anymore; today you can use rkt, and as more process container projects spring up, we imagine we'll see those supported in Kubernetes as well. This is tantamount to OpenStack adding support for different types of hypervisors, except here it's the container runtime that changes. So Docker isn't the only one available: there's rkt from CoreOS, there's the OCID project and containerd in general, and lots of different projects that aim to provide varying features where Docker doesn't necessarily fill the gap.
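To give you a flavour of that API, driving containers through Kubernetes looks roughly like this from the command line — the deployment name here is made up:

    $ kubectl run hello --image=nginx --replicas=3   # ask for three copies, somewhere in the cluster
    $ kubectl get pods                               # see where the scheduler put them
    $ kubectl scale deployment hello --replicas=10   # same API regardless of the runtime underneath

The point being that the calls are the same whether Docker, rkt or something else is running the containers.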
So because Kubernetes is providing this mechanism, and because it's doing it around process containers, it actually gains a lot of feature sets that we haven't really seen before, or have struggled to get, on other platforms. Things like being able to roll out and roll back: I deploy a change, and if something goes wrong, I can just roll it back to the previous version. We're dealing with a much smaller surface area — a process container is an immutable device, a single process running from what is essentially a snapshot of a file system on disk. Because of that, we get a lot of additional primitives, much in the same way we gained flexibility with VMs when we moved into things like OpenStack clouds. Being able to migrate a VM between hosts is unheard of for a racked physical system, because you'd have to literally go unplug it and move it somewhere else. As we keep moving up the stack of technologies, Kubernetes provides additional primitives that we've never really seen in full force on other platforms. I'll roll through these pretty quickly. There's being able to scale up and scale back — those are available today, natively, out of the box, because these are, again, little pieces of processes: spinning up 10 more of them is a very cheap and inexpensive operation. Service discovery, load balancing and self-healing are additional primitives on top of that. Self-healing works because these are process containers and it's very easy to have observability into that stack — the surface area is so small that it's easy to see "I'm expecting to have 10 of these, I only have four", and because it's so cheap to scale up, it just adds a few more. So Kubernetes has the components and mechanisms to manage those kinds of features.

And Kubernetes at its core is really built up of three pieces. When I talk about Kubernetes here, I'm always going to be referring to Kubernetes at production grade. Absolutely, if you wanna set up a Kubernetes today, you can just go download and run etcd somewhere, go download and run Kubernetes somewhere, and connect those components up. But when I talk about deploying Kubernetes and what a Kubernetes looks like, I mean a production-grade one. And for that, you need three things. First, you need Kubernetes itself — that's the icon on the right. You also need etcd: etcd is its backend data store, which it uses to coordinate all of its data — things like: what versions of application processes do I have running? How many of each do I have? How do I coordinate across multiple pieces of a cluster? And the last one is TLS certificates: you need some form of certificate authority, whether that's using Let's Encrypt to grab certs for a public domain, buying them through a certificate provider, or using something like EasyRSA or Vault to do your own in-house private key infrastructure management. Those three things are the main components that comprise a Kubernetes cluster. Because the surface area of process containers is much smaller, the management concerns for them shrink as well; it becomes a more tangible stack.

And how these look actually deployed — my slides are blanking out, yes — is an architecture similar to this. You have EasyRSA, which, again, manages that PKI.
You have etcd, which is the distributed data store that backs your Kubernetes data. You have a master control plane service: this runs your API server, your scheduler, and your controller manager, which goes and makes sure that "I have X things running but I need Y, so I need to boot Z more containers". And then your workers — this is where you actually run your workloads. Each worker runs your container engine (Docker, rkt, et cetera); it runs kubelet, which is the agent that talks to the container orchestrator; it runs your SDN for networking management — that could be Flannel, could be Calico, could be Weave, could be any number of the supported CNIs, the container network interfaces; and it runs your kube-proxy service, which handles networking across the nodes so containers can speak to and address each other for service discovery.

So when it comes to deploying these things, the topology actually becomes a bit more complex. This is the diagram of what you need, where you need it, and the order you set things up in to stand up a Kubernetes. First, out of band of anything in scope for the Kubernetes project, you need something to address — you need machines somewhere. And what's great is we already have a tool that does machines on demand for us: it's OpenStack. So from the perspective of "where do I put my Kubernetes?" — if you have an OpenStack already, if you have VMs you can address on demand, you've already fulfilled step zero, which is networking, storage, and a compute system to deploy these things to. You could also use something like MAAS for bare metal; you can use VMware directly as well, if you don't have an OpenStack cloud or don't wish to put it in one; or even a public cloud, if you're looking to experiment and don't have the resources available. That's effectively the first step for any Kubernetes cluster.

The next is setting up the data plane — the prerequisites before you can even have a Kubernetes cluster running. That's your etcd cluster: setting it up, managing it, configuring it; adding your SSL certificates so you can encrypt traffic, not just to the cluster but within the cluster; getting your Kubernetes clients set up and downloaded. That's handled by those two components — etcd and EasyRSA take care of your step-one prerequisites. Then we get to the more interesting pieces. You set up the control plane — that's your scheduler, your API server, your controller manager — and then you set up the nodes where you run your workloads: your kube-proxy, your kubelet, your container runtime, your networking interface, plus any additional infrastructure APIs you want to consume. So if you have load-balancer-as-a-service, or DNS available in the cloud you're in — whether it's OpenStack or a public cloud — having those mapped to your cluster all needs to be configured; all those concerns need to be taken into consideration. And then finally, you bootstrap the last pieces of the Kubernetes cluster: everything else Kubernetes uses and depends on, which doesn't actually run as binaries on disk but runs inside the cluster itself, as Docker containers on top of Kubernetes. So DNS management internal to Kubernetes, metric collection and management, watcher functionality for rectifying control loops: all of that runs as Docker containers on top of the nodes.
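Once all of that is stood up, you can see those pieces for yourself — roughly:

    $ kubectl get nodes                          # the kubelets that registered with the API server
    $ kubectl get pods --namespace=kube-system   # DNS, metrics and friends, running as containers
    $ kubectl cluster-info                       # where the API server and those addons are listening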
And then finally, step five of this tier — once you've moved through all those steps of configuring, managing and reconfiguring — is to actually run your workloads. Those are typically addressed as pods inside Kubernetes: a collection of one or more containers stitched together as a workload. All of that gets you to the point where you can now execute against and use a Kubernetes cluster.

But as we've seen with OpenStack architectures, there are different types of topologies and architectures underlying and underpinning this, depending on the resources you have available, the things you're interested in getting out of your cluster, et cetera. So what I wanna show is what a traditional setup — based on this diagram — typically looks like, the cost of that, and then how we can use Linux machine containers and LXD to condense a lot of these pieces, as machine containers themselves, onto a few hosts to get the most density out of the smallest amount of hardware.

So classically, you have a bunch of VMs set up. You need a certificate authority somewhere — that's your EasyRSA. For etcd, in order to run at production grade, you need more than one: one is not enough — if you lose it, you're out — and you can't just run two, because losing either one loses quorum. So at a minimum, three gets you the closest thing to a highly available etcd cluster. Going further into production, larger clusters need five or even nine nodes to get that robustness, higher performance throughput, and the most data integrity and quorum availability. Then you need to run your API master control plane — that's at least two, if you want load balancing and high availability. And then finally you run your kubelets: as many as you want, as many as you need to run your Docker containers on. So right away we're looking at six machines just for the control plane services, plus however many machines you need on top of that to run the actual workloads, the process containers.

With a converged architecture, much in the same way we do this for OpenStack, we can co-locate a lot of these services as Linux machine containers. At the end of the day, with the exception of etcd, most of these services don't consume much resource at idle. They're little Go binaries with API endpoints that respond, save state to the database, read state from the database, and then dispatch another message. Etcd typically needs a bit more hardware, a bit more oomph, to keep your data going — it'd be the first thing I'd pull out of a converged architecture. But with that exception, even in medium-sized clusters, containerizing those control plane pieces gets you a lot of density. We can do it in just three machines with the most high availability possible: you spread your three etcds across them, plus several masters and your certificate authority, and you also run workloads on those same machines — all isolated, all constrained from each other, still completely separately deployable, but at a very low cost in machines.
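Quick aside on the quorum math, since it's what drives those node counts:

    # etcd commits a write only when a majority (quorum) of members agree:
    #   quorum = floor(n/2) + 1
    #   n=1 -> quorum 1: any failure loses the cluster
    #   n=2 -> quorum 2: one failure loses the cluster, so two is no better than one
    #   n=3 -> quorum 2: tolerates one failure
    #   n=5 -> quorum 3: tolerates two failures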
So let's show what this looks like. Find me a terminal. Cool. So we've got this slightly noisy box over here. This is a box from Kontron — they're a Canadian hardware vendor that typically builds hardware for telcos and such — and what we have here is a nine-machine server in a 2U form factor. These are all individual sleds; they each have their own memory, their own storage, their own processors and sockets. It's a really cool device. And what I'm gonna do is deploy Kubernetes on just a few machines on here, without having to burn through all nine nodes. So I'm gonna do a super-converged architecture. I'm just gonna go ahead and install and run conjure-up.

So what conjure-up is — as James mentioned earlier, it's kind of a downstream project for charms and for Juju. And much like the OpenStack project has an OpenStack Charms project, there's actually a collection of charms for Kubernetes. They live in the Kubernetes upstream repository; you can go check them out at any time. And there are also charms for the additional services that don't exist in Kubernetes itself — so for etcd, for the SDN network layers, et cetera. So we'll wait for this to finish.

What conjure-up does is give you a starting point for a topology. It's a great way to compose and say: here's a starting point for a very small, tight cluster; here's a starting point for a very large production-grade cluster. From there you can change the topology, you can modify it, execute it, destroy it, execute it again, change something. And when you're finally done, you can export that and say: this is my architecture, the topology that I want — and you can replay that over and over and over again without having to go through the UI process.

So I'm gonna go ahead and conjure up a Kubernetes. Much like how we conjured up OpenStack, this gives me a few options to choose from for a Kubernetes topology. The first is Kubernetes Core. This is a two-machine Kubernetes deployment, modeled as densely as possible: we containerize as many things as we can and put them on just two machines. It's not necessarily production grade, but it's enough to get you a good experience — a developer box: one machine dedicated just to running workloads, and the other machine with the containerized control plane services on a single host. The Canonical Distribution of Kubernetes is the next logical step from there: it includes things like a load balancer for your API control plane traffic, multiple API servers, a larger etcd cluster, and a few more workers to back it. That's normally a nine-machine topology. I'm gonna go ahead and choose Kubernetes Core, and say I'm gonna use my MAAS controller. So we're using MAAS, the bare-metal manager — it looks a little something like this; let me type the address in from memory. There are the nodes I mentioned earlier, so that's all right: they're all powered off, they're ready.

And when I run this conjure-up, I can go and mess with the architecture, so I can see: okay, what does the actual architecture look like? I've got these two machines. I can see that on the first machine here we have a bunch of containerized services, and I've got just a single worker on the other machine. I can go and muck about with this: I can add more machines, I can change configuration for components. Right now this is gonna deploy the latest stable version of Kubernetes, which is 1.6.2.
But if I were targeting an earlier cluster and wanted to set something up for conformance, I could just say: give me the 1.5 version of Kubernetes from the stable channel, and maybe do an upgrade test or something of that fashion. For now I'm gonna stick with 1.6 stable. I can manipulate all of these things — it gives me a really nice UI for managing the architecture and topology in a simple, repeatable fashion. So we'll keep those changes, and I'm just gonna say: deploy these things.

What this is gonna do — just like James's demo earlier, which we'll get back to in a few seconds — is grab those components and start requesting machines from the provider. In this case it's MAAS, but it could just as easily have been pointed at an OpenStack, at a public cloud, at VMware, or, like it is now, at bare metal. It's gonna start getting a little whiny over here, but it's gonna start booting those machines up, provisioning the operating system as a bare-metal deployment, and then start applying the operational code from those charms on top of them, until it builds up the topology I've requested. And once it's done, I can export that topology and go give it to a coworker; they can run conjure-up against that topology on their laptop, on their OpenStack tenant, on their public cloud, and get that same experience, that same architecture I just built. So this is gonna take a few seconds while it provisions those machines.

Let's get back to here. These operations — the whole goal of this is to provide a seamless operations experience. The idea is that I don't have to intimately know the internals of how this is set up; I'm not gonna produce some snowflake deployment that's hard to reproduce somewhere else, hard to upgrade and manage. All the operational expertise in the community is wrapped up in these charms. They're open source, they're accessible, they're changeable. You can go contribute to them, see what contributions have been made, file bugs, and then benefit from the same expertise and operational knowledge that other organizations are putting into those charms — and do that across really any substrate, any cloud that can get you machines on demand.

So while that's spinning up, let's see how far we've gotten with an OpenStack on a laptop. Well, we're not quite there yet. What we can see here is that we've got a partially deployed cloud at the moment. This has also deployed Ceph as part of the cloud for some backing storage — that part's complete — and it looks like we're just pending some relation data exchange to complete, and then we should have a running cloud. So while that's going on, we'll have a little look at what that actually looks like on this machine. This is the juju status output — conjure-up is actually just querying this programmatically via Juju's API to give you that feedback. We can see stuff going on there: we've got most of the OpenStack components deployed, and we're just waiting for some relations to complete. Relations are the data exchange between services in the deployment — things like database usernames, passwords, that sort of stuff. We can see the list of LXD containers that are running, each with its own network address.
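If you're following along at home, the two views on screen are just these, run on the host itself:

    $ juju status   # the model: machines, containers, relations, workload state
    $ lxc list      # the machine containers, one network address each on the private bridge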
That networking is all private to the machine, so it's not accessible from outside the box. You could bridge the containers directly onto a network if you wanted to — that's also an option. We can see all the virtual NICs that have been created on the various bridges, and we can probably see the machine loading fairly heavily as well. So it turns out that deploying — how many machines have we got there? 15 containers — and installing OpenStack, even from debs, is relatively IO intensive for a short period of time, and it'll use as many cores as you can give it. I think this machine's load peaks at about 170 during deployment, but it drops off fairly quickly, and once the cloud has deployed, idle load is very, very low. So I can run this on my laptop and not drain the battery in 20 minutes, which is fairly nice.

How much time have we got — five minutes? So, do you want to start taking some questions? I'll wait for these to finish. Yeah, if anybody's got any questions, we'll do questions now and hopefully come back to a running cloud in four and a half minutes' time. If you have a question, just queue up in front of the microphones — absolutely, yeah — that way it gets recorded and your lovely questions can be preserved.

So, I had a chance to try deploying Kubernetes with Juju, and I noticed one thing quite special that I haven't experienced before: the Juju charms deliver Kubernetes as snaps. Yes. And that has a much bigger footprint than just normal Debian packages. So, absolutely — this is that last container type we mentioned earlier in the talk, the snap. Why do we snap things, especially Kubernetes? What snaps get us is the confinement story that we're looking for in software. What we're running here is effectively reproduced upstream builds of Kubernetes, and while we do typically trust the upstream, that's not to say something couldn't happen where a compromise or a security threat appears. By snapping those components, what we do is include not just the components we're running and their dependencies, but also a confinement mechanism: that process can't touch arbitrary files on disk — there's no writable space for it, it's effectively a read-only image. And that gives us the ability to provide not just security confinement, but also a really nice upgrade story. So, for example, when we roll out the next release, Kubernetes 1.6.3 — we're about a week away now, I suppose, from the next point release — you'll be able to upgrade to 1.6.3, and if anything fails during that snap upgrade process, the snap will roll back to the previous version. So it takes a lot of the burden of operational code we'd otherwise have to write — health checks, assertions, and then rollback — and puts it in the snap format. And while they are bigger, we're not talking more than a couple hundred megs in total for an entire snapped master control plane.

It's true, but comparing — you also have the etcd Debian packages, which are much smaller, maybe 10 times smaller than the snap package. We can take a look at that afterwards; I'm not sure that's quite the case anymore — we've done a lot to prune our etcd snap down. And, not to get into snaps in too much detail, but there are some nice features in how snap upgrades work as well. When you move between versions of a snap, it's actually binary deltas coming down the stream for the update itself.
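That upgrade-and-rollback flow is exposed directly through snap itself — a sketch, using the kube-apiserver snap as the example:

    $ sudo snap refresh kube-apiserver   # pull the new revision, as a binary delta where available
    $ sudo snap revert kube-apiserver    # transactional rollback to the previously installed revision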
So although the initial install may be quite large — because it is basically a big static link of everything that particular component needs — any updates past that point in time come down as binary deltas from version one to version two. So there's a nice optimization there, and you still get that transactional capability, because the snap gets reassembled as part of that process. So over the lifetime of a deployment, I would actually expect it to be smaller rather than larger. We've also been looking at OpenStack snaps — snaps of the OpenStack components — as something we're looking to move towards. And in that case, the initial install is actually smaller than OpenStack itself, because of the number of Python dependencies that OpenStack has. When you do an initial, say, cinder-api install, it pulls down something like 190 new deb packages on a clean install at the moment. So moving to a snap takes that from about 75 meg of raw debs — unpacked to about 160 — down to about 28 megabytes. So it's actually a reduction there. It depends on the application, is what I'm trying to say.

Another thing is it has a dependency on AppArmor — is that the...? So snaps utilize things like seccomp, AppArmor and such for confinement. That's how we're able to make sure not only that the snap's space is read-only, but that it can't affect other pieces of the file system unless we explicitly allow it to. So, for example, the API server for Kubernetes needs to be able to bind to a port in order to provide its service on the network. We use AppArmor profiles on Ubuntu; on other Linux platforms that don't have AppArmor, we use seccomp or something comparable. That way we can say: yes, this application, this thread running from this location, has the ability to bind network ports. Some of those permissions get connected automatically; some require you to actually say, yes, I trust this to manage kernel modules, or yes, I trust this to manipulate this file path. So it's a very concise way of managing what this process can touch on the host system. Okay, thanks. Thank you.

Any other questions? Yeah. Just wondering about — excuse me — the networking with Kubernetes. Having just examined recently how networking works in OpenStack, using Linux bridge and OVS and the various connectivities and agents and so on and so forth: if you have a bunch of containers running on a bunch of separate VMs and you wanna be able to tie them into a private network, by what means is that achieved? Sure, that's a great question. So there's a set of vendors all providing things in this SDN space. We use Flannel by default — that's what the upstream project recommends, and it's probably the most portable. It's just IP encapsulation: it sets up a set of bridges, listens for that encapsulation on the network, and then you can route between hosts. So containers and pods on different hosts absolutely can communicate with each other. You can also do things like binding to the host network — if you have that ability, you can bind straight down to the host network through a NodePort, though that's not the most robust means of doing so.
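For what it's worth, exposing something over a NodePort is a single command — reusing the made-up deployment from earlier:

    $ kubectl expose deployment hello --type=NodePort --port=80   # opens the same port on every node
    $ kubectl get service hello                                   # shows which node port was allocated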
And then finally, there's a new feature in Kubernetes 1.6: network policy. Probably the most prominent example of that is Calico — which I'm sure many of you are familiar with from the OpenStack space — which lets you use the host's underlying network, but then provides policy-based routing management that simply says: these two IP addresses can communicate with each other. So instead of having everything open to everything else, it allows you to set a set of rules — in addition to your overlay network if you have one, or utilizing an underlay — saying these containers, these IP addresses, these pods are or aren't able to communicate, and it just kills the communication if they try to do so without your explicit permission. So the networking space here is a lot less mature than what you'd find in OpenStack. It's really up to the vendor implementing the solution to do a lot of the legwork. You don't have a Neutron; you simply have an entry point that says, this container needs an IP address — you either give me one or it's not gonna get one, effectively. So it's up to the vendor to really implement how that actually happens. It's not nearly as rigid or structured as Neutron and the drivers for Neutron. Yeah, good question though, yes.

So those snaps — are they in the public snap store already, and what are the names? It depends — every component is snapped, so there's kube-apiserver... Do you mind if I hijack your terminal again? I couldn't see it when you were typing it. Yeah — oh, so conjure-up is just the software to do the deployment and install. Give me a new one, yeah. So if you do snap find with kube — k-u-b-e — And you'll find it then? — it'll list all the snaps that we have. That's something that's being wired into the Kubernetes release process as well, so we'll be publishing those with the upstream at release time. And these are the same binaries you'd get from just downloading the tar.gz for that architecture. They also support multiple CPU architectures — x86, arm64, ppc64el, s390x; if you snap install on any of those, you'll be able to do the same thing. So if you don't wanna use Juju — if you're more familiar with another system, or you wanna do something home-baked — you can use the snaps directly, and you can use the snap configuration interface to say: here's all the stuff I would normally pass on that daemon's command line. And you'd still be able to leverage that upgrade process we talked about — the upgrade and rollback, and that robust confinement — without necessarily having to use the charms directly, if that's something you're interested in. All right, thanks. Yeah, thank you. Yeah? Yeah, I think so, yes.

So again, on the snaps: destroying the environment — is that not available through snap, and you need the Juju model? No — so, you're talking about uninstalling a snap? Everything that was deployed by snap. So the snaps didn't deploy anything here; we're using the snap as a package format, effectively. Right. We used Juju to create machines, we put code on those machines to do operations, and that code installed snaps. If you used Juju for that, you can say: Juju, destroy this model I've created, and it will effectively delete those machines, wherever it found them. I'm just saying, with the snaps, all this was automated, and now destroying... It can also be automated; it's effectively a one-line command. You need that Juju model again, right? Yes: the model you created can be destroyed from the same client we used to create it.
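Concretely, that one-liner is something like this — the model name is whatever conjure-up created for you, so treat it as a placeholder:

    $ juju destroy-model my-k8s-model   # releases the machines back to MAAS or the cloud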
And the second question is on CNI. So you have the LXD CNI, you have the Docker one that's not used, and then you also have a Juju CNI — what is that used for? We don't have — so, CNI, the container network interface? Right — when you deploy with the snaps it creates three of them, right? The LXD one, which is the OpenStack container, and then you've got the Juju-something CNI. I'm wondering why, and what's... I'm not sure exactly what you're referring to — we don't create any CNI outside of the SDN provided — but when this session's over, we can take a closer look at that in particular. I'll take a close look. Okay, thank you.