So, next up, we've got Mark Shuttleworth, who I'm sure many of you know. He is the founder of Ubuntu and Canonical, and I believe he's also the founder of the dramatic live demo, so we're going to see the latest release of that here. I can't wait to see what he's got in his bag of tricks. Come on out, Mark. Hello, everybody. OpenStack, my, how you have grown. So, I wanted to show you a little bit of the state of the art of OpenStack and Ubuntu, and I wanted to talk about OpenStack in production: some of the experience that we've gained from watching and working with people who are using OpenStack in production. Jim Curry said that this is the year of OpenStack users, right? First there was the idea, then there was the developer community, and now we're really getting into the era of people putting OpenStack into production. So, what have we learned? Well, a core focus for us with OpenStack in production is the telco sector. If I look at the companies who are building OpenStack today, and building it with Ubuntu, there's no doubt in my mind that the serious infrastructure play of the future is OpenStack. These are all companies who have OpenStack clouds in production working with us, and there are many others as well. If you look at the G8 economies, a very large percentage of the number one or number two carriers are bringing up OpenStack clouds right now. So this is going to be the infrastructure that essentially becomes the bridge, the LAN extension essentially, for enterprise computing. Enterprises are going to have their data centers, and those will be running OpenStack. Then they'll reach out to their telcos, where they'll have managed private clouds and public clouds, all running OpenStack, and from there to the global players, the global providers. So carriers, service providers, telcos are a core, core audience for us.
And the lessons that we learn from these guys, from working with them to bring up clouds at scale, get built straight into the platform, instantly consumable by anybody who's using Ubuntu. On the private cloud front, the early adopters are technology and media companies, and telecoms companies. You've seen Bloomberg here talking about their OpenStack work; Comcast, for example, built out on Ubuntu. There's a very significant pace of adoption of OpenStack on Ubuntu amongst media companies in general, because they tend to have this workflow where they're standing up shows, and so every couple of weeks they're standing up an entire new infrastructure. Another really interesting example is in the petroleum industry, where every block that gets drilled on an exploratory basis becomes a whole new company. Cloud is a perfect infrastructure for that, because you can stand up a company, a virtual company essentially, gather all the data, do all of the analytics, and if you find anything interesting, the company survives; if not, it all gets torn down. The movie industry is kind of like that too: virtual companies get created, and OpenStack clouds are being used in that industry as the virtual harness for those companies. Okay, let's see if this will work. This is a list of some of the companies who are using Ubuntu and working with us, across a variety of different things, not all of them cloud. But the core story is that Ubuntu has now moved into the enterprise, is now a very widely adopted platform in enterprise computing, and in particular has some very strong stories to tell in the scale-out field. Over the last three years there's been a really dramatic move in enterprise computing. This is looking at the top million websites by traffic; it's even starker if you look at the top thousand websites by traffic. There are now nearly three times as many servers running Ubuntu in those markets.
And this is the deep shift that's going on from scale up, the traditional way of getting reliability and performance by paying more for bigger servers, to scale out, where you replace those very high-end servers with many, many smaller servers, and where you care much less about the individual performance or reliability of any one node. This is going to come to its ultimate conclusion with projects like HP Moonshot, where that server could be an ARM server: a dramatically different architecture, a different approach, handling data in a completely different way. In this environment, your applications are different. We used to have scale-up applications; in a scale-out environment, your applications are all horizontally architected. And it's a change in platform too, because if you're shifting risk from the node to the cloud, then you think about risk in the node completely differently. We often have conversations with people who say, look, I do this stuff one way when I'm doing it in my data center on traditional infrastructure, and I do it completely differently when I'm doing it on the cloud. Okay, so why do people choose Ubuntu? They choose Ubuntu because of scale, and there are three elements to that. One is economics: the cost of Ubuntu is typically half to a third of the cost of a traditional enterprise Linux platform. But more importantly, I think, it's tooling. Ubuntu is a platform that has been consumed at very large scale on public clouds, on EC2, on other public clouds, and now increasingly on OpenStack. So there are any number of tools built into the platform, or tools that have been shaped specifically, to address scale. So I want to dive in and demo some of the latest work that's going on in that regard; let me just check. What I have here is an HP server that is running OpenStack on a set of virtual machines. I have a backup, just in case.
So essentially what you're looking at there is one server, and on it, using Juju, we've deployed OpenStack across a set of virtual machines, and on that we've deployed Etherpad. So here I have Etherpad running; let me just check, it all looks live. You can think of this scenario as a cloud that you're running, where you have a tenant, a user, who has a workload live on your cloud. Previously, we demoed very fast deployment of applications onto the cloud, onto any cloud, using Juju. Last ODS, we demoed deploying the cloud very quickly and then upgrading that cloud from Essex to Folsom. Today, I want to show you something that's a little more directly related to the real problems that you run into in production. So what does this cloud look like? If I go to this page, you'll see the Juju visualization of that OpenStack cloud. This is an orchestration view of all of the components of OpenStack running on one of those servers. That picture looks mostly the same as it looked six months ago, right? All of the charms have been deployed and they're all connected up. But there are some new pieces. On the far left over there, you'll see a whole bunch of HA-related charms. As of 13.04 and Grizzly, you can juju-deploy your cloud onto bare metal or onto virtual machines, and you can deploy the entire thing in a high-availability configuration. What we've done is gone through and looked at each service that makes up the cloud, and we've worked with those telcos to identify an appropriate HA strategy for each of those services. Now, the HA approach is going to vary, right? For a dashboard like Horizon, the HA approach is different to that of core services like Rabbit or MySQL. But in each case, with a single command, you can essentially take a single running node providing a service, scale it out, and run that service in an HA configuration. So that's a core part of the move from evaluation or experimentation to production, right?
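The "single command" scale-out can be illustrated against the juju 1.x command line. This is only a sketch, not Canonical's actual tooling: the service name, unit count, and VIP are invented here, and in the real OpenStack charms the `hacluster` subordinate carries the Pacemaker/Corosync configuration.

```python
def ha_scale_out_commands(service, extra_units=2, vip="10.0.0.100"):
    """Build the juju 1.x command lines that take a single-unit
    service to an HA-plus-one deployment. Illustrative only: the
    unit count, VIP, and config key are assumptions."""
    return [
        # grow the service so that two nodes can fail, not just one
        "juju add-unit -n %d %s" % (extra_units, service),
        # the hacluster subordinate charm wraps Pacemaker/Corosync
        "juju deploy hacluster %s-hacluster" % service,
        # a virtual IP lets clients follow the service across nodes
        "juju set %s vip=%s" % (service, vip),
        "juju add-relation %s %s-hacluster" % (service, service),
    ]

for cmd in ha_scale_out_commands("mysql"):
    print(cmd)
```

The same four-step shape applies per service; what varies, as the talk notes, is the HA strategy each charm implements behind the relation.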
Few people will be comfortable running a large production infrastructure without HA, and that's now a standard part of the way people deploy OpenStack on Ubuntu. The other new piece that you see in here is Landscape, which is the management server that our customers use to manage Ubuntu. Now, it doesn't pretend to manage Android devices and Windows phones; it just does an amazing job of managing Ubuntu. And as of tomorrow, in beta, Landscape has grown the ability to give you a high-level view of the OpenStack cloud. Traditionally, a management server will look at all of the nodes that it's managing and tell you things about them: it'll tell you about hardware, about software updates that are available, about issues that may be visible. Each of those pieces of information is essentially attached to the node. What Landscape is able to do now is step back and say: I recognize that those 40 servers, or those 400 servers, are an OpenStack cloud. And because they're an OpenStack cloud, I can zoom out and give you aggregate information. I can look, for example, at the total load on the cloud as a cloud. Instead of looking at the load on individual nodes, I can say, all right, looking at the compute nodes, because that's what really matters, this is the heat map of load on your cloud. And I can do the same for networking: I can say, look, 10% of your network ports are saturated and the rest aren't. I can give you that perspective holistically for an OpenStack cloud. And this is all automatically discovered, introspected: we detect OpenStack running across those hosts, we explore the topology, we know all of the nodes that we're managing, and then we can give you this picture over the top of it. And we can do things like trend analysis.
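The aggregate views and trend analysis described here reduce to simple computation over per-node metrics: roll node figures up into cloud-level numbers (mean compute load, fraction of saturated ports), and fit a line through usage samples to project when capacity runs out. Everything in this sketch, names, thresholds, and numbers alike, is invented; it is not Landscape's API.

```python
def cloud_summary(nodes, saturation=0.95):
    """Aggregate per-node metrics into cloud-level figures.
    `nodes` maps hostname -> {"load": 0..1, "ports": [0..1, ...]}."""
    loads = [n["load"] for n in nodes.values()]
    ports = [u for n in nodes.values() for u in n["ports"]]
    return {
        "mean_load": sum(loads) / len(loads),
        "saturated_port_fraction":
            sum(1 for u in ports if u >= saturation) / len(ports),
    }

def days_until_full(samples, capacity):
    """Least-squares line through (day, usage) samples; return the
    projected day usage hits capacity, or None if it never will."""
    n = len(samples)
    mx = sum(d for d, _ in samples) / n
    my = sum(u for _, u in samples) / n
    slope = (sum((d - mx) * (u - my) for d, u in samples)
             / sum((d - mx) ** 2 for d, _ in samples))
    if slope <= 0:
        return None
    return (capacity - (my - slope * mx)) / slope

nodes = {
    "compute-1": {"load": 0.4, "ports": [0.99, 0.1, 0.2, 0.3, 0.15]},
    "compute-2": {"load": 0.6, "ports": [0.05, 0.25, 0.12, 0.18, 0.22]},
}
print(cloud_summary(nodes))  # 1 of 10 ports saturated -> 0.1
print(days_until_full([(0, 100), (1, 110), (2, 120)], 200))  # -> 10.0
</antml>```

The point of the aggregation is exactly the one in the talk: the alert is "10% of your ports are saturated", not forty separate per-node warnings.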
This has only been running overnight, but we could tell you, for example, how long it's going to take before you run out of capacity in storage or in compute, or any other of these macro characteristics of the system. When we talk to people running OpenStack in production, the most thorny problem they have is dealing with transitions. Last ODS, we demonstrated an upgrade of the cloud from Essex to Folsom. A lot of people have been standing up clouds and then having to tear them all down and stand them up again. So I think we've demonstrated, and we've made the commitment to, the ability to upgrade your cloud from one version of OpenStack to the next. We have 12.04 LTS, which is our current LTS enterprise release, and on it we've delivered Essex, Folsom, and Grizzly, and we'll deliver Havana as well, including the ability to upgrade live, in place, between them. But there's another problem for people who are actually running clouds, which is that you have to do maintenance on the nodes themselves. Six months ago at ODS, when we did that live upgrade, we were essentially staying on 12.04 and upgrading from Essex to Folsom. We weren't, in fact, rebooting any of the hosts; we weren't changing the kernel; we were just revving the version of OpenStack. When I say "just", it's still a pretty extraordinary capability, but we weren't really doing maintenance. Large clouds have to have maintenance done on them. So in this dashboard, you can see up here on the right that 25 of those nodes have package upgrades available. Just to restate what we've got: we have Grizzly deployed, and on top of that, a running workload, KVM on KVM. That workload is live, and it doesn't belong to you; the node belongs to you, the workload belongs to someone else, especially if you're a telco or a service provider. And one of these updates, this one over here, is a kernel. So what are we going to do?
We have to deploy a fresh kernel across a running cloud without taking down the workload, and Landscape can now drive that process fully automatically through an OpenStack cloud in production. To demo that, what I'm going to do is kick off this process now, and you can see here a macro activity that's been scheduled for this OpenStack cloud. Let me just check, it's still alive. So what does Landscape do? Well, it knows the structure of OpenStack, and it knows that we do HA, and in fact we do HA plus one. In other words, we're able to scale out a service so that it's not just HA, where one node can fail, but HA plus one, where two nodes can fail. So Landscape is able to use a combination of scale out, scale back, and live migration to roll that kernel update out through the running cloud without taking down the workload. For example, on a service which is easily HA'd, we might take your two nodes, which are running HA, and go to HA plus one; this node now has the fresh kernel, and we can then take another node out of rotation. In the case of a compute node, where you actually have a running workload that doesn't belong to you, that you're not managing, we can drive a live migration of the workloads off onto other nodes, validate that those migrations have worked, update the node in the appropriate way, reboot it if we have to, test that it is now a working node by doing a test deployment onto it, and if necessary migrate workloads back onto it. Landscape will drive this process over the next 15 or 20 minutes. It's a bit faster doing it on virtual machines; in a physical cloud deployment, this would take an hour or two. But some of our telco and public cloud partners have said that it was taking them weeks to conduct this kind of essential maintenance on the cloud. Landscape will now drive this in an entirely automated fashion. So I'll leave that to roll out.
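The evacuate/update/validate loop just described can be sketched as a toy simulation. This models the orchestration logic only: in-memory dictionaries stand in for real hosts, live migrations, and Landscape itself, and the kernel versions are made up.

```python
def rolling_kernel_update(hosts, placement, new_kernel):
    """Update every compute host's kernel without stopping any VM:
    'live-migrate' the VMs off a host, update it, validate, move on.
    `hosts` maps hostname -> kernel; `placement` maps VM -> hostname.
    Returns the order in which hosts were updated."""
    order = []
    for host in list(hosts):
        spare = next(h for h in hosts if h != host)
        for vm, where in placement.items():
            if where == host:
                placement[vm] = spare        # evacuate before maintenance
        assert host not in placement.values()  # validate the evacuation
        hosts[host] = new_kernel             # install the kernel, reboot
        order.append(host)
    return order

hosts = {"nova-1": "3.8.0-17", "nova-2": "3.8.0-17", "nova-3": "3.8.0-17"}
placement = {"etherpad-vm": "nova-1"}
order = rolling_kernel_update(hosts, placement, "3.8.0-19")
print(order, placement)
```

The real system also has to throttle migrations, run test deployments on the refreshed node, and handle failures; the shape of the loop, though, is the same.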
So: economics, it's cost effective to deploy the cloud this way; tooling, the combination of Landscape, Juju, and other tools; and ecosystem. Those are the big drivers for people building these clouds on OpenStack. Six months ago, HP announced that they were certifying all of their volume SKUs with Ubuntu to enable people to build these clouds on HP hardware. During the course of the last release, Dell also announced that they were certifying all of their volume SKUs, so you can now essentially build a cloud in an afternoon on SKUs from either of those vendors. And what we do is make sure that all of the auto-detection, all of the automated updating of firmware and so on, is completely automated for these certified hardware partners. Our goal is to be the easiest and fastest way to deploy OpenStack. If you want to get OpenStack up and running today, and then keep it up and running over successive versions of the platform, the easiest way to do that is going to be with Ubuntu. I think the first version of OpenStack that was baked into Ubuntu was Bexar, back in 2011. So that's Bexar, Cactus, Diablo, Essex, Folsom, Grizzly, and Havana. So we have some deep exposure, some deep experience, in the core pieces here. Ubuntu was the first enterprise platform to ship KVM as its default hypervisor, and it still has, by far, the largest install base of KVM guests and hosts in production today. But we also support Xen, and Xen with OpenStack, so you can deploy OpenStack with Xen. And yesterday, we announced a relationship with VMware for enterprise customers who have a substantial ESX and vSphere estate. You can now juju-deploy your OpenStack cloud in an afternoon and then, with a single command, link that OpenStack cloud to vSphere and to your ESX estate. So we're able to offer real choice of hypervisors; in both carrier environments and enterprise environments, that's essential. So this is what it looks like.
You literally deploy the cloud, connect it up to vSphere, and that entire stack is fully certified and supported by both VMware and Canonical. We've also built a strong relationship with Microsoft around Azure; that's Ubuntu running on Azure, and Ubuntu is the number one Linux platform on Azure today. Microsoft are returning the favor: in May, we will have a fully certified set of Windows drivers for running Windows on Ubuntu OpenStack as well. So you can deploy your cloud on Ubuntu, or you can use the Microsoft cloud, and you have full heterogeneity and interoperability, because heterogeneity is the norm, and that will be increasingly the case. Across all of those public clouds, Ubuntu is the number one guest. So the easiest way to get anything done on any of those clouds is typically to use Ubuntu as a core part of your tool chain. That, combined with the fact that we have this great relationship with OpenStack, matched cadence, we develop and release in lockstep, makes it really easy for people to either engage with OpenStack as developers or consume OpenStack in an enterprise environment. And uniquely, we have this commitment to delivering successive versions of OpenStack. The pace of innovation and change in OpenStack is such that nobody wants to be sitting on an older version, even if that is the original certified version. So we uniquely do this. Another core piece of infrastructure for us is Juju, which is orchestration that spans multiple clouds. It makes it really easy to connect and configure service topologies in any environment: on the cloud, on physical metal, on any cloud. I haven't seen a tool take off quite like Juju before, just because it meets a real need that people have. It's a layer up from config management, a layer up from Puppet and Chef; it allows you to wrap your Puppet and Chef.
So if you have Puppet or Chef expertise in configuring a particular service, you wrap that in what we call a Juju charm, and then you can connect that up to other charms in a very agile, very lightweight fashion, across any kind of cloud environment. This really is an ODS that's all about ecosystem. If I look around the room, the majority of us are still essentially figuring out how we're going to build OpenStack together. And the core story for us is to be able to plug value into live, running cloud environments. I showed you just how seriously we take the idea of the running cloud as a live, dynamic environment: you have to be able to upgrade it, connect to it, extend it, scale it out, and scale it back, live, in real time. But it's also going to be important to be able to try new technologies and plug them into what are essentially live, running production environments. And so our engagements with all of these companies are about figuring out how to plug their value into a running environment, a live environment, so you can try the value-add from each of these companies. We don't think that there are going to be hundreds of OpenStack distributions, just as, even though there are hundreds of Linux distributions, every major institution doesn't have their own: we figure out how to plug value into known, trusted platforms. So all of these companies, and many others, NEC, EMC, NetApp, and so on, will plug into this live, dynamic Ubuntu environment. There are four dramatic fields of innovation at the moment: infrastructure as a service, platform as a service, SDN, and what I've simply called SOS, scale-out storage services. If any of you analysts have a better term for that, I'd really like it.
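Wrapping Puppet or Chef in a charm, as described above, comes down to the fact that a charm is just a directory of hooks that juju runs at the right moments, and a hook can be any executable. A minimal, hypothetical install hook in Python that shells out to `puppet apply` (the manifest path and module layout are invented; `dry_run` keeps the sketch runnable on a machine without Puppet):

```python
import subprocess

def puppet_apply_command(manifest, modulepath="modules"):
    """The `puppet apply` invocation a hook would run against the
    Puppet content bundled inside the charm."""
    return ["puppet", "apply", "--modulepath", modulepath, manifest]

def install_hook(manifest="files/etherpad.pp", dry_run=True):
    """A charm's hooks/install entry point: delegate the actual
    configuration work to the wrapped Puppet manifest."""
    cmd = puppet_apply_command(manifest)
    if dry_run:
        return " ".join(cmd)       # show what would run, without Puppet
    subprocess.check_call(cmd)
    return " ".join(cmd)

print(install_hook())
```

Relations then let juju tell such hooks about other services (a database's host and credentials, say), which the hook can feed into the Puppet run as template values.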
But Ceph, Swift, and all of the storage companies that are essentially turning directly attached storage into scale-out storage available at a RESTful endpoint: that is a phenomenon we think is going to be really important. And if you look at the companies that are pioneering all of those: in PaaS, for example, Heroku, Engine Yard, and Cloud Foundry all depend heavily on Ubuntu. In SDN, Nicira recommends Ubuntu and deploys most easily on Ubuntu, as does Big Switch, and Ceph, Swift, and so on. All of these are being pioneered, prototyped, architected, and then deployed at scale on Ubuntu. So how do people engage with OpenStack? Well, they all start as an experiment. All OpenStack clouds start as an experiment. People steal a rack, and then they prove to themselves that they can do it; then they start to put them into formal evaluation processes, then they open them up to test and dev, and ultimately, and quite quickly, they tend to move them into production. So one of the key blockers, historically, has been just the amount of hardware that you need to get started. With this demonstration, one of the things that's not immediately apparent is that I can take that running cloud, which is essentially running on top of a set of virtual machines, and without taking the cloud down, I can start adding hardware. I can grow that to hundreds of racks, thousands of servers, tens of thousands of cores, without ever taking down the running cloud. So it's really important, we think, to have this very smooth pipeline: take a single beefy server, and it has to be a fairly beefy server to start with, spin up the cloud, and then, if that's working, add capacity to it. While you're adding capacity, you start to make the transition to high availability, more management, more monitoring, and you start to introduce and try some of these other capabilities from third-party vendors.
That living, dynamic process is what's really difficult, and where it's really at. We run big chunks of Canonical on OpenStack today, so we have 20 million users who hit an OpenStack cloud every day. Many of the features and capabilities inside the Ubuntu desktop, phone, and tablet client experience are services delivered in real time from OpenStack, so uptime there is vitally important to us. It's been really fantastic to see just how quickly both developers and system administrators have flipped from thinking of this dynamic environment as something alien and different to consuming it eagerly. It's now significantly easier and faster to get new infrastructure deployed inside the company if you do it in a way that makes it really easy for the cloud administrators to drive. And a core proposition from us is predictable pricing. We think the cloud should be easy to install, easy to scale, and easy to upgrade, and the economics should be great. So we price it very simply: small, medium, and large, by availability zone, and go from there. One last thing. This phone is running Ubuntu, and it's not running an embedded version of Ubuntu, it's not running Ubuntu RT; it's running the full desktop and server version of Ubuntu. What's amazing about that is a couple of things. First, in order to get it to really fly on the phone, we've had to go through and slim down, streamline, remove the cruft, and make the core OS of Ubuntu as light as possible. We're right at the tipping point where it's possible to run a full platform on a piece of consumer electronics. But a huge advantage we get from doing that work to make Ubuntu fly on the phone is that we're seeing increases in virtual machine density. By shrinking and streamlining and tuning the core OS for the phone, we're seeing this nice increase in VM density on the cloud.
The other huge advantage is that, as a developer environment, this provides exactly the same set of core services as an Ubuntu guest virtual machine on any OpenStack cloud. So, for example, for telcos who are interested in building ecosystems of developers and devices, this convergence is an extraordinary, extraordinary transition, and I think we're just at the very beginning of it. I have no doubt that when we get to the other side of that transition, anything that lives in a data center, anything where there is growth in the data center, is going to be built on OpenStack. All right, let's see how we're doing. I hear them humming away. So you can see that what Landscape has done is sequence the upgrades. For each service, it essentially has to do a trial upgrade: create a node, validate that the node with the new kernel is working properly, and then move the service over to the new node. It isn't fully done; for example, Rabbit is still waiting to go and MySQL is still waiting to go, but those seem to be the last in the queue. Now, we have one glitch that happens: Quantum in Grizzly is active/passive HA. I believe in Havana that will be sorted, so we'll be able to do active/active. And the issue there is that when you switch active to passive, it can sometimes glitch the network. So let's force a reconnect. There we go. The workload never went down, but Quantum tore down the network and then built it up again, so I just had to re-establish the connection, and there it is. That workload is now running on a different virtual machine, a virtual machine that has the kernel upgrade. If I were a customer of that public cloud, I would never have noticed. All right, thank you very much. I hope everybody has a great day. There are a lot of good sessions to be had, so I hope to see everybody around. Thank you.