There we go. That'll help with the recording. I think I could bellow to the back of the room, but to those of you on YouTube, hello.

Right, so I wanted to do a quick overview of everything that's going on in the Ubuntu OpenStack ecosystem. We're going to cover a broad range of threads and touch on a bunch of things which are continuations of stories we've told in the past, as well as some new stories, some new angles opening up. You've all seen the incredible automation and the ecosystem that we're bringing, so I'll try not to repeat things we've talked about in the past; I want to show you more of what's new.

Now, in Paris, we announced LXD. LXD is Canonical's container-based hypervisor. The Canonical container story started four years ago, as we started to look at the mobile platform and wanted ways to isolate applications. There were a bunch of people talking about containers in Linux, but it was very quiet. We brought them all together at Canonical and led the Linux Containers project. That became LXC, and LXC became the basis for Docker, which you might have heard about. And in Paris, we announced the transformation of LXC into LXD: a full hypervisor with live migration, supporting all different flavors of Linux, and some extraordinary potential.

Today I want to show you a little bit of how much potential. We ran some benchmarks, and the results are on your seats. Essentially, from a density point of view, LXD crushes any virtual machine-based hypervisor. Now, that's somewhat expected because of the nature of containers, but just how much? Well, 14 times the density with LXD compared with KVM. What does that mean? It means that if you're a bank and you have large numbers of guests in your cloud which are essentially idle web applications, Tomcat, PHP, Drupal, the sorts of things that hosting companies and banks and every other institution run loads of, LXD will compress those into one-fourteenth of the space on the physical hardware.

To give you some pictures: we took an Intel server with 16 gig of RAM, and on that server we were able to launch 37 KVM guests. Those are full Ubuntu guests. They're responsive. They're not CirrOS, not a fake little test operating system; that's the real thing. On that same machine: 536 LXD guests. Again, full Ubuntu machines with SSH running, with applications running, with gettys, and with Docker running inside them. So, extraordinary density and performance. We launched those 536 containers in less time than KVM launched those 37 virtual machines. The guys who did that work are all here; come and see us at the booth.

It's a pretty extraordinary story, but if you really want to see the density, you've got to go to the high end. And we were delighted to work with IBM on their new OpenPOWER servers to produce this number: 2,500 guests on a single OpenPOWER server. IBM is delighted with that. We're delighted with that. There's still lots of room to go. Now, at that point, they're going to have to be very lightweight guests, because you've still just got the same motherboard driving all of them. But there are many enterprise applications where density is the primary concern.

There's another kind of concern, which is bare metal performance. How fast can you go? And the great thing about LXD is that it is bare metal performance.
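For those who want to poke at density themselves, here's a rough sketch using the LXD client; the image alias, counts, and checks are illustrative, not the benchmark's actual methodology:

    # Launch a batch of full Ubuntu containers and see what the host absorbs.
    for i in $(seq 1 100); do
      lxc launch ubuntu:trusty "guest-$i"    # full Ubuntu guests, not a stripped test image
    done
    lxc list | grep -c RUNNING               # how many came up
    free -m                                  # memory still available on the host

Launching the same count of KVM guests alongside and comparing memory and wall-clock time gives a feel for the density gap described above.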
We can't measure the difference between bare metal and LXD, which is fantastic in certain use cases. In particular, we've got all of the management benefits of virtualization without the 5 to 25% overhead of virtualization. And in the OpenStack case, that means full support for software-defined networking through Neutron and software-defined storage through Cinder. So it's a little bit ironic: we have bare metal performance with all of the Nova features. Nova was built for unicorns, and here we have bare metal performance behind Nova with no limitations on the API whatsoever. This is coming together as a project called nova-lxd: Nova drivers for the LXD hypervisor. You should follow it on StackForge. It is pretty amazing.

Now, that bare metal performance is especially useful in more lightweight environments, or environments where you're trying to get a lot done at the same time. And I want to point your eyes to the front of the room. We have a Cavium ThunderX server. It looks like a standard 1U rack-mount server, but in fact that is an ARM64-based server, and that socket has 48 cores. So what a perfect machine for running large numbers of things in parallel. And here I think we can show you. So this is Ubuntu on the ThunderX, and if I grep the CPU info, you can see 48 cores on that one board, on that one socket. And if I do an lxc list, you'll see a whole lot of containers running on ARM64 on that board. So imagine how fantastic that is for telco applications. Cavium has telco in its DNA, running at bare metal performance in that very highly parallelized fashion. So LXD flies on ARM.

And it also solves one of the key problems in virtualization, which is latency. For the convergence of cloud and high-performance computing, or cloud and telco NFV, we really want to solve the latency problem. And it's really difficult to do on ESX or KVM, because the reality is you're scheduling resources to a thing which then has to schedule resources to the actual app, if you're doing it under a virtual machine. But with LXD, those processes are on the host metal. And this is how amazing that is. With LXD, the same workload, this is ZeroMQ, so a latency-sensitive messaging workload: for remote applications and for local latency, the latency is reduced by more than 50%. So it's less than half the latency. Extraordinary numbers for LXD. And that, of course, all sits right next to KVM. You don't have to give up KVM, right?

Now, there's been phenomenal interest in and coverage of LXD, but I want to just clarify one thing. LXD is about machine containers. What you get from LXD is just like a virtual machine. LXD is a hypervisor, and you can do anything in LXD that you can do in a VM. So it doesn't conflict at all with process containers like Docker or Rocket. In fact, they go very, very nicely together, right? You can take a machine, could be a virtual machine or a physical machine, and you allocate guest machines with LXD, and in those, you're running everything. You're running init, you're running syslog, you're running all of your log rotation. It is a normal machine, including running Docker. And I'd like to show you that as well. So here I have a little machine running, and I'm going to start an LXD container. And in the time that it takes me to go and ask for a listing, it's running. That's how fast it is. So I've just made, bang, that's another little Ubuntu machine.
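For those following along at home, the demo being shown here boils down to something like this with the LXD client; the image alias and container name are illustrative, and nesting Docker needs the nesting switch:

    lxc launch ubuntu:trusty demo              # a full machine container, up in seconds
    lxc list                                   # already RUNNING by the time you ask
    lxc config set demo security.nesting true  # allow containers inside the container
    lxc exec demo -- ps aux                    # init, syslog, sshd: a normal machine
    lxc exec demo -- docker info               # Docker is perfectly happy in here
    lxc exec demo -- docker run hello-world    # process containers inside a machine container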
And then I want to go in, well, there you can see ps on the machine, and there's a cluster of things at the bottom. Well, it turns out that if I go into that machine and do the same thing, you'll see it's the same set of processes, right? So that's the container. Those processes are unaware of the fact that they're actually running in a container. So that's LXD running on a machine. And if I say docker info, Docker thinks it's just fine. And if I say docker run, there we go. So that's Docker, process containers, running inside machine containers produced by LXD, the lightervisor. Pretty amazing. So LXD is the lightervisor. It is a hypervisor. It gives you virtual machines with bare metal performance. Pretty amazing stuff. Now, to be clear, we love KVM. There's more KVM running on Ubuntu than on any other operating system. We support it. We love it. We'll continue to invest in it. But LXD is a fantastic option to have.

Now, I want to dive a little bit into OpenStack itself and the work that we're doing in the Ubuntu ecosystem on OpenStack. The latest user survey is out. Whoops. Ubuntu is at 54% of production OpenStack deployments; that's more production deployments than all the other platforms put together. And if we look at the largest OpenStack deployments, 80% of those are on Ubuntu. And this is really our privilege, to work with the guys who aspire to scale their clouds, because I think scale is where the real tests are. In part, we do that because we support many different versions of OpenStack; we're not trying to lock OpenStack on Ubuntu down. You can create your own OpenStack distribution, and you probably know about those. We've added relationships with Ericsson for their Cloud Execution Environment and with EMC around Cloudscaling. We support all of these. In fact, we support any OpenStack on Ubuntu with one exception, not on this list.

But what I want to focus on is the Canonical OpenStack distribution. I hope you can relate to some of these companies. These are some of the leading companies that have built OpenStack on Ubuntu. Today we can talk about Walmart, so thank you to the Walmart folks for mentioning us in their keynotes. And in particular, telcos are a real focus, because again, telcos aspire to scale and reliability. We've really enjoyed working with the telco community.

And I think the reason for all of that is just our focus on the ecosystem. This is OIL, our OpenStack Interoperability Lab. You'll see a bunch of new names there, like Coho Data, and it's great for us to see that ecosystem continue to expand. Every vendor is welcome in our interop laboratories. We build clouds more than 100 times every day with components from these different vendors, so that we can generate an enormous amount of interoperability data for those vendors. That helps them answer customer questions, like whether they'll be able to build a cloud with CPlane and Ceph on AMD with a particular management system, for example.

So I want to take that further. And today what we're announcing is CI for OpenStack as a service. What we've done is create the ability to deploy OpenStack from source in real-world environments: multi-node, scaled-out environments. You probably know that we find Juju the most elegant way to model OpenStack. It gives us an incredible amount of flexibility to plug-and-play components from different vendors.
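To give a flavor of that plug-and-play modeling, here's a minimal Juju 1.x sketch; the charm names are real ones from the era, but the selection is illustrative:

    # Model a slice of OpenStack as related services (Juju 1.x CLI).
    juju deploy mysql
    juju deploy keystone
    juju deploy glance
    juju add-relation keystone mysql
    juju add-relation glance mysql
    juju add-relation glance keystone
    # Swapping vendors means deploying a different charm and relating
    # it in the same way; the shape of the model stays the same.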
And because we've added the capability to all of the core OpenStack charms to deploy from source, we can now do CI on tip. So for the OpenStack upstream, we're going to start doing CI on every commit in real-world, scaled-out environments, not DevStack, which is where OpenStack CI happens today, not single-node deployments. We're going to be hunting for, finding, and fixing those scale-oriented issues as they happen at tip.

We can also do CI inside a vendor's office, on their branch. So say you're a vendor and you've got your plug-in to Neutron, your plug-in to Cinder. You want to be able to continue to evolve your code and track Liberty as it evolves. Well, this we will set up for you, and that will allow you as a vendor to track Liberty every day. That means that as your charms come into OIL, we can expect them to work with tip and with everybody else's code at tip. So we really are trying to drive the ecosystem to keep up with the incredible pace of OpenStack. And lastly, for enterprises who are modifying OpenStack, this mechanism obviously gives you the ability to run CI on your own branches of OpenStack, typically branches of stable. There'll be a detailed demo and walkthrough of that in room 208; I'd encourage you to see that on Wednesday at 4:30.

Now, OpenStack is running on ARM, x86, and POWER. You'll see the AppliedMicro box over there, and I'd really invite you to come to the booth and see OpenStack running on AppliedMicro in a scaled-out environment. So that's ARM64 from AppliedMicro. And we're delighted to have brought up Ubuntu OpenStack on IBM's OpenPOWER as well. Three major architectures, fully supported.

But deploying isn't the hard part. You've all seen us deploy OpenStack in permutations and combinations live on stage, with all the options a customer might want from different vendors. Deploying isn't the hard part anymore; we've moved beyond that. It's operations. And so in Paris, we launched our Autopilot. The Autopilot is a completely automated installer for OpenStack. You feed it hardware, you feed it a description of the cloud you want, and it builds and operates that cloud.

And I just want to quickly run through that install experience, right? We have here two racks, two Orange Boxes, ten servers inside each Orange Box. So we're going to build an OpenStack cloud this morning across two racks. This is inside Landscape, which is our management product for Ubuntu, freely available to all of our customers. And essentially, all I have to do is choose the different technologies I want to integrate into that OpenStack cloud. So here I have Open vSwitch. And you'll notice that since Paris, all of the IP addresses are filled out. They're filled out because MAAS is now modeling the network; I'll talk a little bit about that later. MAAS has a full view of all of the network segments and subnets, and so it can just tell me which IP addresses I can use to build an OpenStack cloud. So I'm going to do that. I'll just use Ceph for both block and object. And now you see I've got two different zones, because we configured that MAAS to think that all the machines in one of those boxes were in one zone, and all of the machines in the other box were in a different zone. So I'm going to save that selection, add another availability zone, and save that selection. So now I've asked Landscape to build me a cloud with my choice of software-defined storage and software-defined networking, and to do it on those two racks across two availability zones. You can see the summary here.
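As an aside on the mechanics: "deploy from source" is just charm configuration. A sketch, assuming the openstack-origin-git option the era's core charms grew for this; the exact YAML payload is charm-defined and shown only notionally:

    # Point a core charm at Git instead of the Ubuntu archive (Juju 1.x).
    juju set glance openstack-origin-git="$(cat glance-from-git.yaml)"   # repos and branches to build from
    juju get glance                # inspect what the charm will fetch and build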
It's going to be highly available. It's going to have two availability zones, because we'll map the underlying physical availability zones from MAAS into the logical availability zones of the cloud without you having to do anything. And we're going to install that cloud. So our goal is to make it fully automated, because making it fully automated reduces the cost dramatically, and it's free for all our supported customers.

This is our roadmap. These are the vendors whose components we're committed to integrate into that Autopilot. So this is going to allow you to build OpenStack clouds with your choice of hypervisor, including LXD and Hyper-V; your choice of storage, including all of the capabilities from EMC, both traditional SAN and the new scale-out software-defined storage; and networking. You saw Open vSwitch, and I'm delighted to say that we will integrate Juniper Contrail into the Autopilot. We're working with Nuage from Alcatel-Lucent, now Nokia; with Metaswitch Networks' Calico; with PLUMgrid, a super exciting startup; and of course with Cisco. So you see how we're bringing the vendor ecosystem to your rack in a single page.

For those who can't wait, we will build that cloud for you today, manually. And so that's our BootStack offering. It's a fully managed on-premise OpenStack offering, right? Build, operate, and optionally transfer. I'm not sure what's ringing. At least it's not ticking. So, BootStack: build, operate, and optionally transfer, a fully managed OpenStack.

Our focus is scale, reliability, and performance, and if you look at our commits to OpenStack, you'll see that's what we focus on. But we also focus on economics. For private cloud to thrive, it has got to be economically competitive with public cloud. And so we've announced per-node pricing on standard Ubuntu support: if you're just building on standard Ubuntu support, go ahead and build OpenStack and we'll support it. We announced per-AZ pricing. Now, that's really for service providers. Service providers essentially want a capped price for their OpenStack, and per-AZ pricing gives them that. We announced that a year ago.

Today I'd like to take the next step, because as we move into the enterprise market, we've noticed that enterprises have these crazy complicated spreadsheets trying to convert the pricing of all of their inputs into a price per VM. So today we're going to extend our pricing plans and offer, for Ubuntu OpenStack, per-VM-hour pricing for full support of your OpenStack cloud. And that comes in two flavors, just like Ubuntu. It comes in a supported flavor: you build it and we'll support it. And it also comes in a fully managed flavor: BootStack per VM-hour. This is specifically designed, as we now move into the enterprise market from the service provider market, to help enterprises model their costs and plan their costs. If you choose the option on the right, five cents per VM-hour, you can plan exactly your costs and benchmark them against any public cloud. You bring the hardware, we bring the hands, and the cloud runs to an SLA.

Okay, that's for compute. And to match that, today we're announcing storage support that's priced based purely on what you use. No fees for replicas, no fees for redundancy, no fees for backups, just what you actually use. We're bringing public cloud pricing to OpenStack on-premise. Here's what it looks like: 2.2 cents per gigabyte per month, independent of the underlying storage technology. So that could be Swift, that could be Ceph. It doesn't matter.
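To put those numbers in perspective with some back-of-the-envelope arithmetic (mine, not from the slides): at $0.05 per VM-hour, a VM running around the clock costs about 0.05 × 730 ≈ $36.50 a month, so a steady-state 100-VM cloud is roughly $3,650 a month of support cost, a figure you can line up directly against a public cloud bill. On the storage side, at 2.2 cents per gigabyte per month, 10 TB of actually-used data works out to about 10,240 × $0.022 ≈ $225 a month, with replicas, redundancy, and backups adding nothing to the bill.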
Now, there will be a range of plans from a range of partners, but we are guaranteeing that there will be a plan for each storage technology. And I think we're about to see an explosion of interesting software-defined storage technology. We were delighted to see EMC open-sourcing ViPR as CoprHD. That will also come in a fully managed offering, a BootStack offering, again at four cents per gigabyte per month.

As part of that program, because we are now able to attribute specific support revenue to specific services, we're announcing that we will cut a share of that revenue directly to the upstream projects associated with that technology. Why? Well, it seems to me a little bit misguided that platform companies tend to act as a giant sponge for all the revenue associated with supporting the platform, because customers don't pay for platforms. They pay for services, for solutions, for workloads, for applications. And if we can associate revenue to Canonical with a particular upstream, then we're committed to share that revenue with that upstream, with no requirement that they provide any sort of L3 support. We're doing all the work. And I'm delighted that we have a couple of interesting companies participating in that program. Obviously Ceph and Swift are included. But we're also announcing that we will work with Nexenta on their very exciting software-defined storage capabilities. There's a range of program relationships there, so that you will be able to have one throat to choke, one process for working with Canonical and Nexenta, and also with SwiftStack, the leaders in the Swift ecosystem.

So let me dig a little bit deeper into the underpinnings of all of this. What's going on under the hood to make all of that possible? Well, MAAS, Metal as a Service, is the magic that effectively drives the bare metal provisioning that gives people super agility in a bare metal context. MAAS is now being used at the London Stock Exchange. It's being used in banks, in retailers, in telcos, in media companies, just as a fantastic base platform for deploying physical infrastructure. And we announced support for Windows and CentOS in MAAS; that has now all landed and is all in production. People are using it to deploy Windows thin clients. They're using it to deploy SLES, Ubuntu obviously, any Linux you care to name. It's just a great, crisp, clean REST API to all your hardware, whether that's HP Moonshot or Cisco UCS or standard PXE and IPMI.

MAAS has grown a full open source IP address management capability. So MAAS now has the ability to map and model the entire network. It knows about all of the subnets, all of the devices, not just the machines that it's managing. And that gives you APIs to get physical IPs dynamically on your network. We use that very heavily for the Landscape Autopilot, to grab IPs to use for things like floating IPs or elastic IPs.

We were delighted that the Chef community has adopted MAAS. So there are now knife plugins and provisioning libraries for Chef that allow you to point your recipes straight at bare metal through MAAS. And we see adoption from other configuration management ecosystems as well. Chef is super popular in the Ubuntu community, so we're delighted with that. This is all in the name of making MAAS the universal software-defined data center API. It's completely open source, and it works with all the different vendors: x86, ARM, and POWER; Windows, Ubuntu, CentOS.
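To give a feel for that REST API, here's roughly what driving MAAS from the command line looked like in the MAAS 1.x era; the profile name, URL, and key are placeholders:

    # Log a CLI profile into the MAAS API, then drive machines with it.
    maas login admin http://maas.example.com/MAAS/api/1.0 $MAAS_API_KEY
    maas admin nodes list                    # every machine MAAS knows about
    maas admin nodes acquire                 # allocate a machine to yourself
    maas admin node start <system-id> distro_series=trusty   # deploy Ubuntu onto it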
And it's a software-defined API to your physical infrastructure, useful regardless of what you're doing with it.

In the Juju ecosystem, we've seen some really interesting developments as well. You've probably seen a series of announcements around charms being used on Windows. And I'm really excited to show, if I may, the work from Cloudbase, who have really been blazing this trail and have charmed an amazing portfolio of Microsoft and other Windows software. So here you have, for example, OpenStack modeled on Hyper-V with Juju, deployable through MAAS or on Hyper-V. Here you have a whole portfolio of OpenStack components: Cinder, Nova, and so on. But it goes far further than that: Active Directory, SharePoint, Exchange Server. You can all read, but it's fantastic. So real software, now available instantly on demand on any cloud, on bare metal, on OpenStack, and soon directly on VMware. Juju is gaining the ability to talk straight to vCenter or ESX and deploy all of these capabilities straight onto your traditional virtualized infrastructure. The Cloudbase guys will be speaking on Wednesday about their work with Windows on OpenStack. I highly recommend the talk to you.

So: cross-platform service modeling. You can now write charms for Windows and CentOS, and obviously Ubuntu. That charm ecosystem is growing. You saw some Microsoft software there. IBM software is being charmed; if you follow the Juju list, you'll see a series of charms landing for a whole portfolio of software from IBM. Cisco, Juniper, Alcatel-Lucent: all of these vendors are essentially making their software available in this format. And we've also seen upstream projects pulling charms into the upstream projects themselves. Google's Kubernetes, for example: if you look in the Kubernetes upstream branches on GitHub, you'll see the charms. So it's becoming a really nice way to take a piece of software, any piece of software, and just make it instantly available on any cloud. And for the upstream, the really exciting thing there is that instead of having people wade through layers and layers of documentation, which, let's be honest, is often a little out of date or doesn't cover all the clouds, this is a great way to consolidate the experience of new users to your platform in a very standardized way, regardless of the cloud. It lets you essentially turn all of that documentation into executable code, encapsulated in a charm.

We're going to follow that lead. So the OpenStack charms, which are led by Canonical but are now important to a whole bunch of different companies who are building their own layers on top of them or participating inside this, we're going to move to StackForge. So you will see all of the OpenStack charms being developed on StackForge, which will help our collaboration with, amongst others, the Cloudbase folks and various other very interesting companies.

And lastly, there's a new capability in Juju itself called actions. Juju is all about encapsulating everything that people know about a particular workload. So this is a cloud that we deployed this morning, and if I go here to System Information, and I'm still on the VPN, and I can remember the password, you'll see the clues in the bottom right, just here. This is actually OpenStack deployed this morning from Git, from tip. This is Liberty. Luckily, tip is in good shape today. So that's Liberty, deployed this morning from tip.
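The cloud-portability point is quite literal: the same charm deploys with the same command on any substrate. A minimal sketch with Juju 1.x environments; the environment names are placeholders for entries in your environments.yaml:

    # Same charm, same command, different substrates (Juju 1.x).
    juju switch amazon       && juju deploy mongodb   # public cloud
    juju switch my-maas      && juju deploy mongodb   # bare metal via MAAS
    juju switch my-openstack && juju deploy mongodb   # private cloud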
And if you want to see, that was all done in an automated process, right? That's part of our CI. So there's a commit on a Git branch, and automatically that gets pulled and deployed: CI as a service, effectively. If you want to look behind the scenes, there's the Juju service model. Juju is this universal service modeling thing. It doesn't know that it's modeling OpenStack; it's just modeling what it's modeling. But if you look, you'll see in here all the things that you'd expect to see, right? So here's Heat, nothing wrong with Heat. Here's Ceilometer and, over there, compute. Here is Neutron Gateway, Neutron API, Keystone, Glance. This is a highly available Glance: over there is Glance with an hacluster charm, right? So that's actually a highly available Glance deployed from source. That's kind of interesting. That's kind of cool. And the observant amongst you might have noticed that down here is a sort of dangling charm.

Oh, one other thing. Here's the configuration of the charm that tells the charm where to fetch the source. So this is how you can program the charm to just go and fetch, from a particular branch, the dependencies and the code that you want to use in your OpenStack deployed from source. Can you see how this is enabling anybody to build any shape of cloud, across any hardware, at scale, with arbitrary complexity, from source, the pieces that you want from source, from your branches, all just using standard universal modeling in Juju with MAAS?

So there's this dangling relationship down here. What could that be? Well, it turns out that's a new service, and not a service you want in your production OpenStack. It is the Chaos Monkey. That's the Chaos Monkey service. So let's dig in. Remember, this is part of our CI, our quality program. So if we go in and have a look behind the scenes. Oh, and you see some lights coming on here; that's that dual-rack OpenStack coming up. If we dig in behind the scenes here, if I do a juju status, this is that OpenStack. A bunch of services. There's a new juju status tabular format, which is quite nice. And so here you can see all of the services in OpenStack. Those are the machines, a bunch of machines. Then you see the services. Let's see if I can scroll up. You see Ceilometer, Ceph, and on through Swift proxy and Swift storage. All beautifully mapped out for you. You know exactly what the IP addresses are. You know exactly what the status is.

If we look at the actual units down here, you'll see under Glance, over there, that Glance has some subordinates, including the Chaos Monkey. Now Glance, you know, is an image library, right? So let's have a look. This is a Glance query; I'm just running a Glance image list. So every five seconds I'm hitting Glance and saying, tell me about the images in that cloud. And if I go and have a look in the cloud through Horizon, I see a bunch of images, and those correspond, hopefully, almost exactly to those images over there. Everybody happy? All right.

But we've got a monkey. And, well, before we do the monkey: this is Corosync, right? So this is Pacemaker, I think it's Pacemaker. This is the HA mechanism, right? It's watching those different Glance instances, and you can see that machine 6 over there is the primary. The VIP, the virtual IP, is focused on machine 6 over there, and that's what's responding to these Glance queries, okay? But we've got a monkey. So what can we do with a monkey? Well, a monkey is a Chaos Monkey. So if I ask what actions are defined on it, I've got these three actions.
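For those following along at home, discovering a charm's actions looks roughly like this with the Juju 1.x CLI; the service name matches the demo, the action names are illustrative:

    # List the operational actions a charm ships (Juju 1.x syntax).
    juju action defined chaos-monkey
    # e.g. show-logs, show-monkey-ids, start  (names illustrative)
    juju status --format=tabular       # the tabular view shown above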
I can show the logs. I can show monkey IDs. And I can start it. And actions can have configuration, right? So actions are a great way to encapsulate an operational script. Say you want to back up the database: you need a file name and a compression level, right? So actions are a great way to encapsulate the operations associated with a service.

So what monkey operations could I do? Well, I think, here we go. All right. So Chaos Monkey 1 is on Glance 0. You see, up at the top there? Chaos Monkey 1 is on Glance 0. I'm going to run this action, which basically disconnects the network. And with a little bit of luck, Glance keeps running. And here you should see. So we said machine 6 was the primary machine, right? So now what are we doing? Corosync should notice, and there, it has noticed, that that machine isn't responding. It switches its model, puts the virtual IP in another place, and we now have machine 7 answering the Glance queries. And if I go back to the Glance queries here, you'll see Glance, every five seconds, still up.

So you've seen a live demo on stage of OpenStack deployed from source. This is Liberty tip, with one of those services highly available, and with a Chaos Monkey integrated so that we can nuke a machine and watch what happens. You've just seen that failover happen completely automatically. And if I do a Juju export of this bundle, if I go and hit this button over here, this YAML file is the model of that cloud. You can duplicate that exact configuration across that set of machines for yourself, in the privacy of your own garage.

Okay. The really fun part of this is that the Chaos Monkey service can be composed into any Juju service. This is the real amazing power of Juju, because we can compose services so easily. I can take that Chaos Monkey and I can make it go everywhere. Where would you like to make some chaos? Keystone? Bad idea. Bad idea. To be honest, we only made Glance HA, so pretty much anywhere I take this Chaos Monkey is going to be real chaos. Ooh, should we put chaos on Corosync? That would be wild. Right? If I go and commit that, what you'll see happening here. Not there. Here. You'll see these new Chaos Monkeys being installed in there. You see Chaos Monkey 4 sitting by MySQL there? So we're spreading chaos through the cloud.

And the fun thing about this is that we can now have two completely different communities. One that loves chaos, and they just make chaos better and better. And others that love HA, and they love HA in whatever it might be. Could be Mongo, could be MySQL, could be Galera, could be the OpenStack Dashboard, could be Rabbit. Doesn't matter. And those two communities don't have to agree on anything, other than that sometimes people want to put those two ideas together and wreak a little chaos in a service. So that monkey is now completely general. Anybody can go and drive that monkey.

So actions are kind of fantastic. You saw what I did there: I ran an action on that service, with some config. There's going to be a fantastic talk a little later in the week, on Wednesday at 11 a.m. in room 1114, about benchmarking: benchmarking anything on any cloud. And the way that's done is by taking the benchmark and encapsulating it as an action on the charm, so that anybody can use it on any cloud.
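A rough sketch of the sequence just demonstrated, in Juju 1.x CLI terms; the action and parameter names are illustrative, not the charm's documented interface:

    # Fire the network-isolation action at the monkey riding glance/0,
    # then compose chaos into other services via relations.
    juju action do chaos-monkey/1 isolate-network   # hypothetical action name
    juju action fetch <action-id>                   # collect the action's output
    juju add-relation chaos-monkey mysql            # spread the monkey elsewhere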
And the fantastic thing we're starting to see now is cloud providers saying, hey, let me tweak the charms to optimize Cassandra, Mongo, whatever it is, on my cloud. So now the end user doesn't have to go and read all the detailed extreme-optimization guides, right? They can just take the standard charm, drop it on VMware, drop it on bare metal, drop it on any cloud, and run the same benchmark in a completely reproducible way. And if they're really having fun, while they're running the benchmark they can cause a little chaos, just by deploying the Chaos Monkey, relating it in, and having a blast.

So that was a whistle-stop, whirlwind tour. There's a lot of fun stuff going on. It's a real pleasure to collaborate with so many great companies in the OpenStack ecosystem, but more broadly as well. We're seeing charms of very interesting big data configurations, Lambda architectures from the major vendors as well as the upstream Apache community, using Juju to essentially make it really easy for people to spin up big data. Similarly in content management: this is a great way to enable people to go faster on any cloud, in any sort of environment. Similarly in platform as a service: I talked a little earlier about Kubernetes. If we go, for example, to demo.jujucharms.com (I haven't backed this with a cloud, but I could effectively attach it to any cloud, as long as I'm actually on the internet), these are charms that are actually part of the upstream master branch for Kubernetes on GitHub. We pull them in and snapshot them once they've been validated, on a regular basis. And that's how easy it is to deploy Kubernetes on any cloud: OpenStack, bare metal, Azure, AWS, you name it.

I hope that's really interesting to you. In the OpenStack community, our view is that we really have to focus on the core: compute, network, and storage. Because if we focus on the core, we'll make OpenStack great. And if OpenStack is great, then all of these great services that are winning on the public cloud will come to the private cloud. We don't have to invent our own APIs. We don't have to do, you know, everything-as-a-service, right? Let's just get the core amazingly good. Have a great week. Thank you very much for coming.