Right. Good morning. Hello. How's your summit going? You survived till Thursday, huh? Very good. So we've had a couple of sessions this morning looking at different aspects of containers and performance and operations for OpenStack itself. We started in the morning looking at the two different kinds of containers: the machine containers, which are essentially guests inside OpenStack. They look and feel just like KVM, so that's LXD, the pure container hypervisor, and the Nova-LXD integration of that with OpenStack. That's great because it essentially gives you a very familiar CentOS, Ubuntu, Debian, RHEL environment in the guest, and off you go. And then we showed the difference between that and the Docker, rkt, OCI universe of single-process containers, which are really all about hyper-elasticity. So I'm going to whip through just a refresh on those kinds of containers, and then we're going to look at, I think, the most famous at the moment of the operating frameworks for process containers, which is Kubernetes. We see Kubernetes sitting alongside Mesos and alongside Docker Datacenter and alongside some other frameworks for essentially giving you a view of tens, hundreds, thousands, tens of thousands of processes with IP addresses, which is what Docker essentially gives you. Okay, so let's look at physical infrastructure. Or maybe we should start with some introductions. I'm Mark Shuttleworth. I lead product design and development at Canonical, and I'm the founder of the Ubuntu project. And this talk really reflects work that we've done, you know, both in Ubuntu and in the community upstream with LXD and in OpenStack and with Google on Kubernetes, in the sort of interface between the machine world and the container world. Marco? Hi, I'm Marco Ceppi.
I work on the ecosystem engineering team, where I help bring in ISVs, partners, and other community members, and help pilot our operations work for Kubernetes and other container orchestrators. Okay, so the familiar world of virtualization: a bunch of machines running VMware, Hyper-V, KVM, and creating guests, guests that are designed to look and feel just like the machines themselves, right? They run a full operating system; they're full machines, right? LXD, the pure container hypervisor, sits right next to them. It also gives you guests, and those feel just like machines. You can SSH to them, syslog is running there, you can install applications. So LXD, as a container story, is really for lift-and-shift of VMs into containers. I don't want to change the app, I don't want to change the operating framework, I don't want to change how I patch-manage it, I don't want to change how I keep it secure. For the 90% of applications that a bank is not going to change in the next 10 years: LXD, right? But for newer applications, we see a lot of interest in hyper-elasticity, and the way to do that is with something like Docker, which gives you just a process. Now this is no longer a machine, right? When you docker run MySQL, you're not getting a CentOS machine with MySQL running in it. What you're getting is an IP address, a CentOS file system or a Debian file system, or, 70% of the time, an Ubuntu file system, right? With a single process running. You don't get cron or syslog in it. You don't get any of the supporting processes that you would normally get. You can't SSH there because SSH isn't running there, right? You have an IP address, but you can't SSH there. People who've used Docker will be familiar with this, but it's something that surprises most people when they start using Docker, right? I docker run this.
It looks like I got an Ubuntu system over there with an IP address, but I can't SSH to it. I have to jump into the container, and there's nothing else running in the container. So you can think of that at the top as literally a process with an IP address, right? And you can run Docker, rkt, OCI runtimes in all of these places, in all of these guests. You can run it in LXD. You can run it on a Hyper-V guest and so on. You end up with thousands of processes with IP addresses, right? Now, to manage all of the guests, we use OpenStack. Everyone's familiar with that here, right? So I build an OpenStack to keep track of guests. To manage all the processes, we use something like Kubernetes, which is what we'll talk about here, or Mesos or Docker Datacenter, right? So essentially, this is performing the same sort of function, but at a different level with a different class of resource, right? Your resource at the bottom is a guest. Your resource up there is a process. Okay. Any questions at this stage? It's nice to have questions. Make sense? Actually, I would think of it as a container coordinator or a container modeling system, just like Juju is a modeling system. It keeps track of all the things and how they're related. Kubernetes, Mesos, they keep track of the things and how they're related, how they're connected. An orchestrator performs a different function. An orchestrator answers the question: why do I have these things here? So an example of this would be, you know, in a bank, something happens which drives a series of decisions, and the decisions result in a choice to place a model of some software on this public cloud, or to build or expand the amount of software running on this private cloud, right? Now, the orchestrator would be taking the decision: which cloud do I want to do this work on, and how much of that do I want to do, right? And what's the nature of that? Is it HA, or is it test and dev, or so on? Those are orchestration questions.
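The "process with an IP address" point above is easy to see for yourself. A minimal illustrative session, assuming a local Docker daemon and using nginx purely as an example image:

```shell
# Start a container in the background; only the nginx process tree runs inside
docker run -d --name web nginx

# It has an IP address on the Docker bridge network...
docker inspect -f '{{.NetworkSettings.IPAddress}}' web

# ...but listing processes shows no sshd, no cron, no syslog --
# there is nothing to SSH to
docker exec web ps -ef

# So to get a shell, you "jump into" the container instead of SSHing
docker exec -it web /bin/sh
```

This is a sketch rather than a recipe: container names and the bridge-network details vary with your Docker configuration.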
Things like Juju and Kubernetes or Mesos, they essentially just implement the decision, right? Here you've told me to build Hadoop and Nagios and a bunch of other things. You've told me to make a model, great. So we describe Juju as a modeling system, and I would describe Kubernetes and Mesos and Docker Datacenter in the same way. They won't magically take decisions for you to change the model, right? You need an orchestration loop to say, ah, I need to scale that up, or I need to move that to a different cloud. Those are orchestration decisions. So this we would call a process modeling framework or a process coordination system, right? Yes, what these things are doing, Mesos, Docker Datacenter, Kubernetes, what they're doing is they're actually running Docker for you. You say, this is what I want, and then they'll spin up those processes, right? But they can't tell you what you want, and so either that's a human, in which case you've got a human orchestrator who decides what they want, or it's another piece of software, right? And then you've got a software orchestrator that decides what you want. Really decides what kind of model you want, right? And then things like Juju at the base level, things like Kubernetes or Docker Datacenter at the higher level, they will then implement that model with the primitives that they use. At the bottom level, Juju's primitives are machines, right? At the upper level, they're processes, right? Different images of processes, yeah? Okay, so this is the universe that we're operating inside, and that's Kubernetes. So this would be a picture that's familiar to many people if you've used Juju. This is a model of applications on machines. The applications in question here are the Kubernetes applications, and it just happens that we're using Logstash and Kibana for monitoring effectively. So some of those pieces there represent Kubernetes. Some of those pieces represent Logstash and Kibana.
And this is a logical model. It's not a scale model, right? So this picture doesn't tell me how many machines I've got providing Kubernetes function and how many I've got doing management and monitoring, right? That would be a different twist on the picture. And because Kubernetes is just software that you run on machines, we can use Juju to model it, and that's how we model Kubernetes itself to put it onto different cloud substrates or bare metal, right? Which is what we're going to do today. And so that's just a picture. This is the real one. So this is 10 physical little NUCs, 10 physical little servers. It is running an OpenStack cloud, which was itself deployed with Juju, and that OpenStack cloud has some hypervisors providing LXD machine containers and some hypervisors providing KVM virtual machines. And so on top of that, in those machines, those virtual machines or those guests, I would say, this model has been built. This is Kubernetes. And the scale of that, that's across 12 guests effectively and a bunch of containers. And this is what tells me how essentially these functions, these applications, are mapped into these individual guests on the cloud effectively. So this is Kubernetes sitting on top of OpenStack. Using this framework, I can operate the topology, by which I mean I can add scale. So for example, I can go to the Kubernetes worker here and I can say I want to scale it up. So I want to go and put another three KVM workers there. And that will essentially give me a bit more scale effectively for the Kubernetes. So I've changed the model of the Kubernetes that I want. Juju will now talk to OpenStack and say I need another three VMs effectively for those Kubernetes workers. And that's now going to go off and come into existence.
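The same scale-up can be driven from the Juju command line instead of the GUI. A sketch, assuming a model named kubernetes-on-openstack and the application name kubernetes-worker used on stage (both names are illustrative):

```shell
# Select the model that holds this Kubernetes deployment
juju switch kubernetes-on-openstack

# Grow the model: ask for three more kubernetes-worker units;
# Juju requests three more VMs from the underlying OpenStack cloud
juju add-unit kubernetes-worker -n 3

# Watch the new units go from "waiting for machine" to "started"
juju status kubernetes-worker
```

Nothing here names an IP address or a specific VM; you change the model and Juju negotiates the substrate.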
So you can see here, waiting for machine: these Kubernetes workers here essentially, they exist in the model, and they're waiting for the VMs to be provisioned by OpenStack so that the Kubernetes worker can spread onto those in the model. If you want to see what actually is running on top of that Kubernetes, this is that dashboard. So here I have a couple of applications modeled with Docker processes effectively sitting on top of that Kubernetes. So coming back to this picture, the apps in that last page are subsets of those processes at the top. Kubernetes is keeping track of them. And then the Juju model, which is this one, is modeling Kubernetes, and then OpenStack is under the hood of all of that. Okay. All right. So here I have a different Kubernetes model. It looks very similar. I just moved things around a bit. But this one, you'll see the IP address is Amazon. So you see how I can build exactly the same model of Kubernetes on Amazon as I have on OpenStack. So once you enter model-driven operations, you can essentially reuse these models on different substrates. And in fact, I could take that model and put it on bare metal or VMware as well. And here I have all the same properties of being able to scale the individual components independently, integrate new components, integrate Nagios, or integrate something else live into the model, so that I can essentially operate the underpinnings of that Kubernetes independently of the cloud that I happen to have it on. So you can imagine having a dev, test, and production pipeline where you're essentially using the same modeling tools, but just on different substrates, maybe bare metal, VMware, and OpenStack, right? At this level in Juju, the operations are all about integration. Like, I want this application talking to this monitoring system, or I want that application using that application over there for key management, key escrow, right? In fact, that's what's going on here.
This EasyRSA application here is doing key distribution for Kubernetes. So it's related and integrated to some of the Kubernetes components, and it's essentially allowing them to keep their keys in sync. If I'm scaling stuff up, then I need the various other components to know the keys of the other things so that they can all talk to each other securely. The operations of the lifecycle would be handled in the Juju model. The operations of scale and integration would be handled in the Juju model. But operations associated with the specific applications will be encapsulated in the charms. So Marco is in a better position than me to talk about actually operating Kubernetes itself. Do you have your own model of Kubernetes, or are we going to do it in my model of Kubernetes? I have my own. We can use yours. I've got a few models. Let's try the video. All right. This is real-time Kubernetes operations on stage. Do you want to see if this? Yeah, it's shut. Actually, I think they can switch if you were wired up already. Yeah, but we're already committed to this switch. There you go. Great. So I have yet another model of Kubernetes running. This is a slightly smaller version of the Kubernetes that we've seen earlier. While the model itself is much the same, the components are all there. We have a worker. We have a master. We have Flannel for an SDN overlay. We're using etcd as a data store. And then, of course, EasyRSA for our secrets. It is a much smaller-scaled version. And if we look over here at the machines, there are actually only four machines comprising this cluster. So it's very lightweight. I think we can even trim it down a little further as well. So this would be something we'd normally use for things like testing or small dev work to validate workloads. And what we have inside of these charms is, from a Kubernetes perspective, we have the master, which essentially is the entire API control plane for Kubernetes.
And because of the way it's structured, this allows you to do things like scale and enable HA for a cluster. So today we have just a single machine running the master. So if we lost that machine, we'd be out of luck as far as coordinating our containers. But if we scaled this up to add a couple more instances of it, a couple more units of the master, we'd have a more durable, reliable control plane. The worker here is very much the same. The worker is actually more like, if we were to do a comparison, a crude one, it'd be like a nova-compute for Kubernetes. So the worker itself actually allows you to do things like run the physical workloads of the Docker processes that you're looking to execute. And as you scale this up, you get more capacity to run more Docker processes or rkt processes or whatever container abstraction. And then finally, the last couple of charms are there as supporting frameworks. We have Flannel for an SDN overlay. etcd is simply a data store where we do all our coordination of data. And then of course, EasyRSA is our PKI secrets distribution. Now, this is one model, and this model is running on Google. I have a second model here which is slightly more complex. Again, you'll see a lot of the same components exist. We have Kubernetes. We have the master and the worker. We have a lot more worker nodes here. We have a number of machines in total being deployed here. We've also added monitoring and log aggregation via Beats and Elasticsearch, Kibana, and Logstash. And we have over here on the left some more durable storage. So by default, things like Docker processes are all pretty much ephemeral. There's no real persistent storage or persistent volume mapping for Docker processes without quite a lot of work. What we're able to do is integrate Ceph and other persistent storage volumes into our Kubernetes clusters.
And what's interesting here is that this is the same Ceph that you would normally find backing things like Swift and Cinder or other common components. This isn't Ceph built to integrate and work with Kubernetes specifically. This is Kubernetes knowing how to integrate and communicate with, and receive the credentials it requires from, Ceph to map and create persistent volumes. And what this gives us now is a way for us to map RBD devices into our Kubernetes pods and launch workloads that then have volumes that will persist between reboots and be shared across the cluster. And in a very similar vein, you can do a lot of things like this as well, where the same Ceph that's providing a backend service for OpenStack for persistent volumes can be shared along the same control plane with Kubernetes. So you're managing one Ceph cluster, with one ops team that knows how to access and manage it and handle the scale for it, shared across your OpenStack as well as maybe something like your Kubernetes or anything else that requires a persistent volume device. So if we dig a little bit into the kind of operations, I want to show what it looks like to do things like what monitoring gets you, and how we can add additional capabilities in here. So we have this running Kubernetes cluster in Amazon. We've got a Kibana dashboard, and this allows us to get insights into not just the log aggregation, but also into metrics, the health and performance of our cluster. So I'm going to go ahead and open up this dashboard, pending conference Wi-Fi. And what this allows me to do is introspect and actually start building dashboards and monitoring the health of the services and the applications and their performance. I can start making informed decisions: well, I know that I'm hitting a ceiling in CPU processing, I should probably create more workers. And when that becomes a need, I can do things in Juju like we showed before.
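For reference, the kind of Kubernetes object this Ceph integration produces is an RBD-backed PersistentVolume. A hand-written equivalent of what the charm automates might look roughly like this (the volume name, pool, image, and monitor address are all placeholders):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test                 # placeholder volume name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  rbd:
    monitors:
      - 10.0.0.10:6789       # placeholder Ceph monitor address
    pool: rbd                # placeholder pool name
    image: test              # placeholder RBD image name
    user: admin
    secretRef:
      name: ceph-secret      # Kubernetes Secret holding the Ceph key
```

Writing and wiring up this YAML by hand, plus creating the RBD image in Ceph first, is exactly the multi-step chore that the charm action shown later collapses into one command.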
I can scale up workers. I can maybe do reconfigurations to hunt down performance issues from wrong performance flags. All of these actions can be done through Juju. So a little bit earlier, I showed you that I wanted to scale that Kubernetes cluster. And I went... Could we switch videos, please? All right. So I went to this Kubernetes worker application, and I went to scale the application, and I added three units. So what I was doing is I was taking the model of Kubernetes in Juju and expanding the resource allocation to the Kubernetes worker application, and three more VMs were then fetched from OpenStack. Those three VMs then got the Juju charms for the Kubernetes worker installed on them, and those charms then went and fetched the Kubernetes worker. That's now all up and running, and if I switch over to the Kibana view, here you can see that the additional nodes effectively came into the Kibana dashboard automatically, right? So because the model says Kibana is monitoring all of that application, when I expanded that application, Kibana is automatically monitoring the additional nodes. And here what Kubernetes has done: Kubernetes then realized, hey, I've got an additional set of workers, so for resilience and performance and reliability, I can kill some of the Docker processes that I had on my cluster, because I had three worker nodes and now I've got six worker nodes. I can essentially restart those processes somewhere else, and now I've got a more resilient application effectively. I'm spreading the compute of those processes at the top across more of the VMs in the model underneath. Does that make sense? Okay. Great. Any chance we could switch back to... Hi. So, again, just as Mark said a second ago, we've got Kibana. Kibana allows you to do a bunch of customizations for dashboarding. But let's say you were interested, but your organization hasn't invested in the idea of Kibana.
You have other monitoring tools you already use; maybe you use something like Nagios, Zabbix, or other monitoring preferences, Prometheus, for example. Because of the way Juju allows you to model and model integrations for pieces, we can actually start amending and appending models live. You don't have to do a redeployment. You don't have to make any changes. You can just start adding additional things. In this case, I'm going to just deploy Nagios, which is a monitoring tool that's been around for quite a while. I'm sure a lot of you may be familiar with the name, and I'm going to deploy its agent. What I can do now is, for all the nodes that I want to monitor inside of Nagios, I can simply connect them to this agent. So now I've got a running Nagios and a running agent that will be spun up in a second, and I can connect the agent to a number of workloads. Let's say I want to monitor my worker, or I wish to monitor my performance for Kibana. I can connect Nagios to a number of different things, and what will happen is that agent will be deployed, and just like we saw with Kibana, where a new metric started appearing as we added scale, when Nagios is deployed in a few moments, we'll start seeing the aggregated health status of those applications and those machines pouring into the Nagios dashboard without having to do any host template modifications or mangling any configuration files. Sorry. That's just one example of how you're able to extend the operations of your cluster without having to do a lot of investment in one single tool. So instead of having to say, well, we're going to try Nagios, or we're going to try Elasticsearch, and it'll take a lot of lead time to go through and learn what's the right way to install it, what's the configuration we need, how do we tweak the performance, how do we get it all configured in our cluster, and then go through and learn the in-depth operations.
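The CLI equivalent of what's happening in the GUI here is just a deploy plus a couple of relations. A sketch, assuming the nagios and nrpe charm names from the charm store of that era (the exact relation endpoint names may differ by charm revision):

```shell
# Deploy the Nagios server and its subordinate agent charm (NRPE)
juju deploy nagios
juju deploy nrpe

# Attach the agent to the applications we want monitored;
# as a subordinate, nrpe lands on each existing unit of the target
juju add-relation nrpe kubernetes-worker

# Wire the agent back to the Nagios server so checks flow in
juju add-relation nrpe:monitors nagios:monitors
```

Because this is a live amendment to the model, the hosts appear in the Nagios dashboard as the relations settle, with no host templates edited by hand.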
You can start really quickly, really cheaply, to just deploy it, evaluate it, and if it's something you're interested in continuing, you can try another tool, or you can really start going deep on the operations for it. So what are operations? Operations are different to deployment, right? Operations are keeping something alive, evolving it over time, integrating new things with it, solving problems that arise, like disks that fail, stuff that happens. That's the nature of operations. The really key shift here is the move to model-driven operations. Neither Marco nor I specified a single IP address here. They don't matter. We're simply operating on the model, and the machinery is fetching IP addresses and making containers and making virtual machines as needed. Neither Marco nor I has manipulated any config, because of sensible defaults, but if we wanted to, there's config on those applications. So those are high-level config components that can be distributed out to all of those different running instances. And integration. You saw Nagios integrated there, right? So the running model, there's a Kubernetes in flight being used in production effectively, and we've gone and integrated Nagios into the model. The magic here is in the charms, right? Charms are like zip files of Chef, zip files of Puppet or Ansible or Python, right? What's in there is up to the community behind that circle, right? What's in there essentially is everything that the community knows about how you operate Nagios, or how you operate Kubernetes, or how you operate Ceph, or how you operate Logstash or Kibana, right? And so you're reusing ops code, right? And you're engaging with those operations in a very abstracted way, right? But you get all the benefits of open source: the fact that other people are using the same code, the fact that other people are generalizing it, testing it, fixing it.
Most operations code in most organizations is code that was written once for one user by someone who was doing it for the first time. You know, they were learning a new application and writing the Chef to run that application. So you can appreciate it's not necessarily the world's best... This is not how we build great software, right? We build great software through reuse. We build great software through openness. We build great software through getting many different perspectives on that software over time. So what we have here is a way to essentially share operations code. Think of it as class libraries for operations code. And that's how these things can emerge so quickly, right? Speaking with a very large retailer, they had 12 people working over two years standing up all the automation for their Kubernetes cluster. And they're just stunned that they can now use this and operate it on multiple different clouds, or bare metal, or VMware, as easily as we're doing here on stage, right? Of course, sometimes the ops code in those charms doesn't do what you need, right? But then it's open source, and you can climb in and contribute to those or fork them effectively to add the capabilities that you need, to add the specific config that you need to express, or additional operational actions that you need to express in order to meet your needs. But if you contribute, then you're collaborating with a bunch of other people who are really interested in, in this case, Kubernetes, or Ceph, or Nagios, or Logstash, or Kibana, right? Those communities are communities of practice, communities of operators, and the vendors, typically, behind them. Did we look at Ceph and the operations on that? Sure, so I would say we should dig in a little bit there. So, just to, as I kind of walk through this real quickly: who's actually running Ceph today as part of your OpenStack?
Okay, so the Ceph charms you saw providing persistent blocks, disks effectively, to Kubernetes for the Docker containers, are exactly the same set of charms that we use. If you've ever seen us deploy OpenStack with Juju, then it's the Ceph charms in there, and the Ceph charms here, they're the same charms, right? We see the same thing happening with SDNs. You'll see Calico charms for OpenStack. Those same Calico charms now connect into Kubernetes. The same for PLUMgrid and various other SDNs that have charms. They essentially service both OpenStack and Kubernetes just as easily. So I want to talk briefly, because if you're interested in diving into Kubernetes, there's a lot of things that you can do operationally for Kubernetes. And one thing that we've done and strived for is that as we work on our Kubernetes charms, all of those things live upstream in the Kubernetes repo. So if you go today to the Kubernetes repository, in the cluster directory, you'll find a folder called juju. That's where all of these charms live today. So it is something that we strive for, to make sure that everything we do is open source. It lives with the upstream, and it's one of the best ways today to fully deploy and manage the closest you can get to the upstream Kubernetes; it's very clean and precise. So there's a couple of things that we have here in our topology. We've got things like our Ceph cluster. So I want to dive into a little bit about, we've created the line between Ceph and Kubernetes, but how do we actually start enlisting block devices into Kubernetes so we can start accessing those persistent volumes? And all of that, again, to follow the common thread, is actually distilled into the charm itself. So Juju provides a way for you to say, here's how you actually go and create and add that persistent volume into Kubernetes. Normally, this is an action that takes quite a bit of time.
You have to first create the block device in Ceph, make sure it's the right size, create the pool name for it. Then you go ahead and you would go into Kubernetes. You would create a bunch of YAML files, describe this persistent volume, give it names, make sure it's mapped to the right pods, and then enlist that into Kubernetes itself. Because all of that is essentially codifiable, it's repeatable; it's basically the same thing with different input parameters. We can actually distill that down into additional operational inputs that you can provide to the charms. So if we look at the Kubernetes master. It's missing any at the end of Kubernetes. Oh, there it is. Sorry. It has essentially two operational actions. The first one is to create an RBD persistent volume. The second one is to just restart the API server and bring up the scheduler again. I just want to characterize these. You'll understand that there is a charm for the Kubernetes master, and that charm has, think of it as a bag of Chef or a bag of Python or Ansible. It has some hooks, scripts, which are associated with lifecycle: how do I install this? How do I configure this? How do I upgrade? How do I remove this? Lifecycle. It has some associated with integration: I'm told to talk to etcd; what do I exchange with etcd? Some scripts associated with drawing those lines on the graph effectively. And it has some scripts that are like this: pure operational actions. So think of this as like a remote procedure call on the charm. The community has defined a script called create-rbd-pv, and I can ask what parameters I can pass to it. And when I call that action, I'm essentially executing that function in the charm, either on one machine or across all of the machines in that application. So this is, you know, every organization has their playbooks, their scripts, right? What you're doing here is taking those scripts and professionalizing them and sharing them, right?
So these are scripts that people who are operating the Kubernetes master use, but everybody's sharing the same scripts, and the scripts work everywhere because they discover what they need to discover from the model in the very place where they're being run, right? So I'm going to go ahead and just create another block device to add into this, so we have another persistent volume in here. And to do that, I'm going to just run kubectl so we can see this get added live. I already have a persistent volume in here, so I have one called test that I created about three hours ago. I'm going to go ahead and just create another one. So we're going to say juju run-action. Can you read that? And I'm going to say the name equals hello-openstack. So I'm going to run this, and Juju's going to tell me there's a problem. Because these are distilled as code, these are operational values, we also have the added advantage of being able to do upfront validation for things. So the authors of this action, the one for how to create an RBD device for a persistent volume, said that in order for you to do that, you have to tell me the name and the size of the block device you want. So I'm going to go ahead and also say the size is, we'll do a gig. Don't do that to me. I run an action, it gives an error, and I thought that was an accident. Sorry. So what this does is, because Juju's an event-driven system where we're modeling these things and we're modeling large-scale asynchronous deployments, everything gets queued up for execution. So I'm going to go ahead and, it's probably already finished, but juju show-action-status. Oh, show-action-status, excuse me. So it's completed, and I'm just going to go ahead and take a look at the results. Everything's completed.
And if I look here, I have a new persistent volume in Kubernetes, in the kubectl output, that says hello-openstack: it's one gig capacity, it's got read and write access, it's set to be retained, it's available, it was created 40 seconds ago. So now this is available for any pods or any job creations that are being run to attach to this for persistent volume storage. So the real thing that's being transformed here is the way you bring skills into an organization, right? As Marco said, to do that actually involves knowing about lots of interactions between the software: Ceph, Kubernetes, all the pieces that need to be tweaked and told to make something simple happen, right? But by coalescing that down into shared operational code, you can bring something complex like Kubernetes into the building and dramatically reduce the amount of operator upskilling that you need to do to be effective with the standardized Kubernetes, right? If you want, you can dig into those charms and change the way they behave, add actions, change the config that's available to you, and so on and so forth. But the critical thing that we're really trying to show here is we are entering a time when the pace of change and complexity of software has crossed a threshold, right? Organizations are really struggling to bring all of that amazing stuff from GitHub into the organization safely, repeatedly. Yes, they can set it up in the lab, and then the people who set it up in the lab go somewhere else, and now you have to figure out how to operate it, right? What we're doing here is essentially using open source for automation, sharing that problem across everybody who cares about the complex thing, and dramatically reducing the upfront investment and skills needed to effectively get it into the building. We saw that very clearly with Juju charms and OpenStack, right?
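Pulling the on-stage sequence together, the action-driven workflow looks something like this; the action and parameter names are as spoken on stage, though exact spellings can vary by charm revision:

```shell
# List the operational actions the kubernetes-master charm exposes
juju actions kubernetes-master

# Queue the action; parameter validation happens up front, so
# omitting the required "size" parameter is rejected immediately
juju run-action kubernetes-master/0 create-rbd-pv \
    name=hello-openstack size=1024

# Juju is asynchronous: check the queue, then fetch the results
# using the action ID that run-action printed
juju show-action-status
juju show-action-output <action-id>

# Confirm Kubernetes now sees the new persistent volume
kubectl get pv
```

The charm handles both halves behind that one call: creating the RBD image in Ceph and registering the matching PersistentVolume object in Kubernetes.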
If you look at the people running OpenStack that way, they've got very big clouds that were deployed a while ago, upgrading every time; they're talking now about going to Newton, and they can do that because the complexity of the operations is encapsulated in those charms and shared with all the other companies doing that work. So I want to wrap up. I think we're done at half past. I want to wrap up by showing you how you might get started with this yourself. The magic here is in Juju and the charms, but if you don't want to use the GUI, there is a very nice command line interface. I just installed it, and I installed the beta version accidentally, so if you'll forgive me, this might not work, but it's a tool called conjure-up. Can we switch the inputs to the second laptop? Ah, there you go. It's a tool called conjure-up, and it takes a Juju bundle and essentially walks you through the deployment. A bundle is essentially a pre-defined topology. Instead of knowing that I need all of those services to make a Kubernetes, the bundle just says: here's a set of services that makes a Kubernetes. You could do this piece by piece if you wanted, but conjure-up just lets me do that. That's not right, because I want Kubernetes, not OpenStack; we just nearly accidentally deployed OpenStack on a cloud. Okay, so this is conjure-up. I have just done a Juju bootstrap on Google and a Juju bootstrap on AWS. Where would you like me to build Kubernetes? AWS? No problem. Okay. So that bundle, this is the text-mode CLI effectively, that bundle tells me that I need all of those applications installed, and it recommends the number of units. In this case, units map directly to VMs, but we could also put those units into LXD containers on VMs if we wanted a denser deployment. But it recommends the number of units, VMs, that I'm going to need for each application. Let me configure them, so that EasyRSA... Can I scale that out or not? You could, yeah. I'll say 12.
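A minimal sketch of getting started with conjure-up yourself; the spell name `canonical-kubernetes` is an assumption here, so check the spell list conjure-up presents for the current names.

```shell
#!/bin/sh
# Sketch of standing up Kubernetes with conjure-up (spell name assumed).
SPELL="canonical-kubernetes"

if command -v conjure-up >/dev/null 2>&1; then
    # Interactive walkthrough: pick a cloud (AWS, GCE, ...), adjust the
    # number of units per application, tweak config, then deploy.
    conjure-up "$SPELL"
else
    echo "conjure-up not installed; it would walk you through deploying $SPELL"
fi
```

If you have no Juju controller yet, conjure-up will bootstrap one for you as part of the walkthrough, as mentioned at the end of the talk.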
I'm going to go two of those, and I don't know if there's any advanced configuration, is there? No. I can essentially just look through the model that I'm going to build and say, let me configure some of those things. There must be config on Elasticsearch, right? There's some more detailed config that I can provide to Elasticsearch. So this tool can essentially walk me through that. I can kick off the deployments, so I can sort of say: okay, go and deploy that, go and deploy that, go and deploy that. And because Juju's asynchronous, all I'm doing is kicking off that change to the model now, so those things are going. And I can just go and deploy all remaining. It's then going to give me a view, effectively, of the model in this tool. So here you see essentially those applications coming into the model. And now it's just polling the Juju status, the status of the model, and it'll show me everything I need to know. If you want to see what's going on under the hood, let me just switch to AWS. These are my controllers. This will look familiar. This is the, ah, networks. There we go. That is the traditional Juju status view. This is the CLI version of the GUI, and it's essentially bringing up the machines. You can see that the IP addresses are getting allocated and the instances are getting allocated on AWS. And if we watched that, we would essentially see the instances coming up slowly. Here is the conjure-up view. You can just run that now; it will walk you through the bootstrap of Juju. If you don't have any controllers, it will get you up and running on any of the major public clouds, and then you will have the Kubernetes that you have seen here. All of that is open source. We work closely with the Kubernetes community and the operations special interest group.
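Watching the model converge from the CLI, as in the demo, is just the status command; this is a sketch, guarded so it degrades where Juju isn't installed.

```shell
#!/bin/sh
STATUS_CMD="juju status"     # the CLI equivalent of the GUI's model view

if command -v juju >/dev/null 2>&1; then
    # One-shot snapshot: machines, units, and IP addresses, updating as
    # the cloud instances come up.
    $STATUS_CMD
else
    echo "juju not installed; '$STATUS_CMD' would show the model"
fi
# To poll until everything settles, re-run it periodically, e.g.:
#   watch -n 5 juju status
```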
So those charms are essentially being used to track performance, resilience, and reliability of Kubernetes across all the public clouds, because it's just such a nice, easy way to repeatedly stand it up, test it, evaluate it, benchmark it, and so on. Please do come and ask us questions; we're happy to take some at the stage now. Otherwise we'll see you on the show floor. Thank you very much.