Yeah, a couple more keynotes to get started here. The first one is from Mark Shuttleworth, who is the founder of Canonical. Some of you may have seen the skydiver who did the skydive from space on Sunday, I think it was. Well, I think Mark is probably the only person here who has actually been to space. I was talking to him earlier and he said I could introduce him as a space cadet. I don't know that that's a completely appropriate description of him. If you've talked with him, you know that he's a lot more than a space cadet. But we're really excited to have Mark Shuttleworth here to give us our keynote. Mark, thanks. Well, hello everybody. Let me see if I can get this live. We're all good, looks great. Okay, nothing like a live demo. So I come from the Ubuntu community, which is probably the largest open source community in the world. And one of the things we pride ourselves on is drawing contributions from all over the world, but more importantly, all different kinds of contributions. And this morning I went to open up my presentation, which under Ubuntu I can do just by searching for it. So I can just do this. And there's my presentation. But I was also delighted to see, from these very helpful Amazon suggestions, not one but two books about OpenStack. So if I go in over here, is Kevin Jackson in the house? Yeah, give him a cheer, everybody. And is Ken Pepple in the house? Well, give him a cheer anyway. The mark of a great community is the diversity and quality of contributions. And this has, I think, all of the hallmarks of a phenomenal community. So six months ago I did some magic on stage at the Design Summit, bringing up from scratch, from bare metal, a full OpenStack cloud in 18 minutes. And I say magic not because there was any trickery, but because the tool that we use to do that is called Juju. And just like Ubuntu means people, Juju means magic.
And today I'd like to do something perhaps a little more daunting, to show you just some of what's been achieved in the OpenStack community in the last six months. My theme for this talk is OpenStack in production. So those of you who have run large, scary systems in production may have some inkling of what we're going to try and do. We have supported OpenStack development for some years. Four releases of Ubuntu, including this week's, have included OpenStack. But increasingly we're also supporting production deployments of OpenStack from a variety of different companies. And it's lessons, learnings, and insights from those production deployments that I wanted to share with you today. So these are some of the companies that engage commercially with Canonical for a variety of different things. And on the cloud front I would call out two different groups. There are the service providers, people who are investing in OpenStack as a line of business: telcos, hosting companies, ISPs, service providers. And then there are private companies who are building internal clouds, typically in media, tech, finance, or insurance. And what they really want to do there is accelerate their developer productivity. When we talk to them, they aspire to a bunch of different things; these six things really sort of round it out. And I'm gonna focus just on agility and scale as the two key drivers of the current dynamic of production use of OpenStack. So on private clouds, the goal that's winning, the thing that people are really getting from OpenStack today, is developer agility. When we go into companies and ask what their developers are doing, we find that they are spinning up lots of complex deployments. So they're focused on an app, but today the app lives inside this complex world, right? Lots of services all interconnected. Does this look vaguely familiar to anybody who's spinning stuff up on the cloud, right?
You've got apps, you've got web frameworks, you've got different kinds of SQL and NoSQL storage, you've got big data going on. And that's all scaled out. Now, the complexity of that is what's killing app developers in these large institutions, because they have to build this framework in development, then they have to build it again in test, then they have to build it in production, and every time they roll out again, they have to redeploy all of that. And it turns out that the tool that we used on stage to deploy OpenStack, Juju, was born on the cloud, and it was built to make this kind of complex deployment really easy to do. So Juju distills everything that you need to know, everything that the internet knows about how you deploy one of those pieces, be it Node or Mongo or Tomcat or MediaWiki, down into a charm. And that charm has all of the intelligence that you need to spin up, configure, connect, tear down, scale out, and scale back that service in the cloud on OpenStack. Inside that charm can be anything you like: it can be Puppet, it can be Chef. There's no conflict between the service orchestration that Juju does and the config management that those tools specialize in. And because this is all being accelerated in the Ubuntu community, there's already a long list of these charms that people can use off the shelf, which essentially let you spin up complex webs of these services with very little effort. We did this with OpenStack itself, so all of the core components of OpenStack have been charmed, but from a production use, from a guest point of view, these are the sorts of things that people want to spin up. And the productivity improvements, the velocity improvements for developers inside companies, when they get a private cloud, an OpenStack private cloud, and they start to use this kind of service orchestration, are dramatic. So one part of that is, say, a Node app behind a load-balancing proxy talking to Mongo. That comes down to five commands, right?
Literally deploying each of the services, connecting them, and telling one of those services to scale out. So Juju makes things really, really easy. But as we've taken that into production, and we're working more and more with company developers who spend most of their time in Visual Studio or Eclipse, we kept getting asked whether Juju could have a GUI. And we noticed that people had all these whiteboards where they'd drawn out the service diagrams that they wanted. So we thought, wouldn't it be fantastic if Juju had a GUI that let you just draw what you wanted and deployed it onto OpenStack? And so today I'm really delighted to show you, for the first time in public, the HTML5 open source GUI for Juju. So this is running live on the very excellent HP public cloud, which is built on OpenStack and Ubuntu. And this is deployed with off-the-shelf charms. So in celebration of the folks here from MediaWiki, we've got a MediaWiki instance talking to MySQL. And this is now up and running. It doesn't do very much. If I wanted to, I could go in and take a look at some of its settings. One of the settings is the title of the wiki. So we can do a little bit of this. And in due course, I'll be able to reload the wiki. And there we go. So that's live. That's running in production on HP cloud. Say I was happy with this and I wanted to improve performance a bit. I might want to go and add a bit of memcached. So it's as simple as going and saying I want memcached. That's a pretty simple thing. I don't really need any special config. So I'm just gonna say that. I might also want a sticker. Let's just put that over here. I might also want to stick HAProxy in front of it. So I just go and find HAProxy. There's a collection of charms all up in the cloud, curated within the broader Ubuntu community. So you've got professional sysadmins and volunteers in the Ubuntu community essentially crowdsourcing the goodness of that.
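On the command line, the five-or-so commands behind that Node-plus-proxy-plus-Mongo example might look roughly like this. A minimal sketch: the `my-node-app` charm name is a placeholder, and exact charm and option names varied in the charm store of that era.

```shell
juju deploy mongodb                    # the datastore
juju deploy my-node-app                # hypothetical charm for the Node app
juju deploy haproxy                    # the load-balancing proxy
juju add-relation my-node-app mongodb  # connect the app to Mongo
juju add-relation haproxy my-node-app  # put the proxy in front of the app
juju add-unit my-node-app -n 2         # tell the app service to scale out
```

The GUI demo that follows is doing exactly these operations, only by dragging boxes and drawing lines instead of typing.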
So here we've got these pieces and I just want to connect them up. So what I'm gonna do is unexpose MediaWiki. And I was gonna pull it back; I don't want people connecting to the MediaWiki service directly. So I'm gonna do that. Now I'm going to go in here and connect memcached, and in a perfect world, I could connect HAProxy as well. So that is live and running in production, and each of those changes gets orchestrated. The changes are dynamically applied, and things like scale out and scale back all happen very naturally. So I think that's pretty awesome and maybe worth a round of applause. Okay, so in the last six months in that world, the focus has really been on developers, bringing up the frameworks that developers are using every day. So essentially these are the top developer frameworks, Tomcat, Node, Grails, Django, PHP, and then the top data stores. And the idea with Juju is to make it so that you can pick any combination of those things that you like. So this is almost like a lightweight kind of PaaS, and it's very agile. Instead of there being a sort of central PaaS where somebody else has decided what the database will be and what the languages will be and so on, you as a developer get to choose the parts that you want, with no ops really, glue them together, and then deploy them onto any OpenStack cloud. So it's a new kind of PaaS, a new approach to PaaS. It works across all of these different environments: on your laptop, in memory, on the cloud, but also on physical metal. Remember, we used Juju to deploy OpenStack on physical metal, on bare metal. And I'll show you some more about that later. Okay, so what are we really doing? We're looking at the economics of corporate IT and recognizing that the budget generally is the budget, and that gets eaten away by four things.
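The mix-and-match PaaS idea reduces to the same pattern every time: pick a framework charm, pick a datastore charm, and relate them. A hypothetical sketch; whether a given pair can actually be related depends on the interfaces those two charms implement.

```shell
juju deploy tomcat                    # pick a framework...
juju deploy postgresql                # ...and a datastore of your choosing
juju add-relation tomcat postgresql   # glue them together, if the charms share an interface
```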
It gets eaten away by the cost of your compute and storage, your hardware; operating system costs; your app costs, whether you are building your own or buying them; and operations. And operations are everything from the electrons to the time that you spend configuring and reconfiguring services. And so by compressing the cost of the operating system, because Ubuntu, even fully commercially supported, is a more cost-effective platform option, and by accelerating operations, so you're spending less time and less money redeploying things, we open up the budget for hardware, the physical ingredients of compute and storage, and for app development or software acquisition. So that generally makes hardware companies happy. It makes IT managers happy. It makes developers very happy indeed. So when we talk to people who are looking to deploy private clouds, we often get a lot of questions that start like this: when will OpenStack support blah? Blah could be almost anything, right? It could be live migration of Windows 8 guests on Xen between diverse clusters. And the answer, of course, is someday, and probably quite soon. This is the most awesome community in action right now in open source. The pace of change, the pace of improvement, is phenomenal. And so we can very confidently say that anything that really is a top-priority feature for people will make its way into the OpenStack code base. But there's no real reason this kind of question should be a blocker to deployment. And in fact, what we see amongst the sharper, smarter IT guys is a different question. What they're saying is: okay, OpenStack is amazing today. How do I get the most bang for the buck? How do I get OpenStack up and running in the way that is going to essentially let me ride that wave of innovation? And we would generally advise a couple of key things. First, stay close to upstream. If there's a specific feature that you think is really important, you will really regret it if you invest a huge amount in it behind the scenes.
You want to bring that feature idea here. You want to bring your developers here. You want to participate here. And then you want to consume that once it's been through the OpenStack community process, by and large. Second, focus on the things that work right now. Third, limit your integration scope. So a lot of people start out saying, well, it has to work with our company single sign-on, it has to work with our backup system, it has to work with all of these systems that we have in place. And in fact, what you really want to be doing in a private cloud environment is taking your best and brightest, most agile development teams and unleashing them on an OpenStack cloud. It could be a production cloud. It could also just be a sandpit. And the point is that that team will develop the tools and the processes and the culture that you can then spread within the company as OpenStack meets more and more of your production requirements. And lastly, we would say that a key thing to focus on is how easily you can upgrade from one version of OpenStack to the next. Because with a community this size, the amount of value that you'll get in the next version of OpenStack vastly exceeds the amount of value that you can add through your own internal development in four to six months. So you know that Ubuntu puts out a release every six months, and OpenStack dovetails perfectly with that, so that one way to get the next version of OpenStack is to upgrade from one version of Ubuntu to the successive version of Ubuntu. But for people now in production, upgrading your operating system just to get the new version of OpenStack is a scary proposition. So at the last ODS, I made the commitment that we would deliver successive OpenStack releases as optional upgrades on the LTS release of Ubuntu. So every two years we put out a release that's an LTS release. It's the release that gets certified by more hardware. It's maintained for a much longer period of time.
It's the release that typically gets deployed in production environments. And we said that what we would do is ship 12.04 with Essex, then we'd deliver as optional updates Folsom, Grizzly, and the H release, before, of course, 14.04 LTS kicks in, and then there's an upgrade path to the I release and the whole process can repeat itself. And so today, this week, we'll ship Ubuntu 12.10 with Folsom, but we also have the Folsom upgrade path on 12.04 ready. And so to make good on that promise, and to make myself a little nervous, we're going to do a live upgrade of a running production cloud with services running on top of it and see if it works. So this is MAAS. This is the Metal as a Service layer. It's a very simple provisioning layer, which we use; you can use any provisioning system with Juju to deploy your cloud. But what this is essentially telling you is there are 14 nodes in a cluster somewhere. Nine of them are deployed; that's actually what's running OpenStack right now. Five of them are available if I wanted to scale that out. We'll see how much time there is at the end. I can give you a Juju view of that same picture. So this is the service graph, essentially showing you the OpenStack that's running on that cluster. So this is what we did on the command line six months ago, only now you have a beautiful GUI view and you can see a bunch of things. You can see what's connected to what, what services the different pieces provide to each other. If I go into a particular service, I can show you all of the config settings that I might want to provide for that service. I can do things like this. So we use Landscape. Landscape is a commercial, professional management and monitoring solution from Canonical for Ubuntu infrastructure, whether that's servers or desktops. And so I've got a Landscape charm here. And what I can do is connect up, let's do this one over here. So now you can see that the Landscape client is connected up to those services.
So what's happening behind the scenes is that service is getting hooked into Landscape, into the management service. So maybe later I can show you a management view of what's actually happening on those machines. So, to upgrade. Before I do that, what's running on here, just for fun? This is the dashboard. This is Horizon. Those of you who know the subtle details can see this is Essex. And there are some instances running. Ooh, have I lost network? No idea what happened. I think I just lost the VPN briefly. So you can see there's a bunch of instances running on there. And in fact what those are is a WordPress installation. So here we have a couple of instances running on this cloud. Now to upgrade it, I'm gonna go to Keystone. And essentially I'm just gonna change a piece of Juju config in this story here. So instead of saying just pull it from the distro (in other words, these are standard OpenStack packages from Ubuntu that have been deployed), I'm gonna say pull it from the Cloud Archive. And we're on Precise, but we want Folsom, and we're feeling brave, so we'll take the latest proposed packages over there. And with a bit of luck that kicks in. So what Juju's doing now is telling Keystone to update its config; the Keystone charm then essentially goes and opens up the new set of packages that contain Folsom, and it starts the upgrade process. I'd just need to do that GUI process at the moment on each of those services, which I'm not gonna do here. In fact, my anonymous but very glamorous assistant in the audience is going to run a command-line script, because all of this can be scripted, to do exactly what I just did: change that one setting on each of the services. The upgrade: how long do you think the upgrade takes from Essex to Folsom? Weeks, says somebody who's been trying to do that in production, I suspect. Three minutes. So in three minutes, we should have upgraded that running cloud from Essex to Folsom.
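The setting being flipped here is, in the OpenStack charms of that era, a single config option; shown below as `openstack-origin`, though the exact option and service names should be treated as illustrative. The scripted version my assistant is running is essentially a one-line loop over the services.

```shell
# Point the Keystone charm at the Ubuntu Cloud Archive's Folsom packages
# (proposed pocket) instead of the distro archive:
juju set keystone openstack-origin=cloud:precise-folsom/proposed

# The scripted version: flip the same setting on every OpenStack service.
for svc in keystone glance nova-cloud-controller nova-compute cinder; do
    juju set "$svc" openstack-origin=cloud:precise-folsom/proposed
done
```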
What could we do in three minutes? Well, in three minutes. So that's the WordPress instance that's actually running. I think it's still running. So this is Landscape, and I mentioned earlier that this is the management interface that we would use in a production environment. So here, for example, you can see an extra computer showed up. So I dragged that link, right? So essentially what I said is: register that computer with Landscape. So I can go in over here. There's the extra one that I just added. I can just go and accept those. So essentially now that computer is being fully managed by Landscape, which is also running in the cloud. Let's go and see what we can do over here. These are all the machines. So I'm just gonna check them off, and then let's have a look at what's going on. And we've got a whole set of monitoring capabilities here. So you can see there's a bit of a load spike on one of the machines, but you've got a complete monitoring picture, essentially, of all of those hosts which are running in production. They'll be a little bit busy for the next three minutes. What else could I show? So for example, in terms of reports, one of the key things that people use Landscape for is compliance management. So they want to know, for example, of all of the Ubuntu machines that they're managing, what is the typical time to patch a machine. This is all very fresh; that cloud deployed very fast. So you can see that we've got a very clean bill of health on this set of cloud hosts over here. Okay, so clean upgrades. And that's been done. So for private clouds, the focus is developer agility. For public clouds, for service providers, the challenges really are scale and managing your upgrades, right, in a production environment. Also time-to-market leadership.
OpenStack is now something that customers are asking for, and service providers rightly want to get early-stage offers out there so that their best customers can get access to OpenStack, and they can maintain and expand their relationships with them around OpenStack. And so the challenge really is how to help telcos and service providers deploy OpenStack in that crisp, clean way that will let them ride, surf, the curve of innovation inside OpenStack. So this is the architecture that I've been describing. So we use Metal as a Service as a base layer. And again, you don't have to do that; you can use any provisioning system. But MAAS has cloud-like properties. So for example, when you put a machine into MAAS, it gets a hardware inventory taken. And so you can then say stuff like: okay, deploy Swift onto another machine that has at least three hard drives. Or deploy another compute node onto a machine that has two sockets or N cores, or any other measure that you want of RAM or other characteristics. Or deploy this component onto something that has, say, an NVIDIA graphics accelerator, because we want to do some high-end compute on that. We then deploy OpenStack, and deploy services on top of OpenStack, like that. So MAAS is kind of a critical piece of infrastructure, and the key change between 12.04 and 12.10 was to expand MAAS so that it could handle multiple clusters. So what the MAAS server is doing, when you add a server to MAAS, is essentially figuring out how to drive that over IPMI or PXE. MAAS handles all of DNS, DHCP, and so on. It does the net boot of the machine so you can do a hardware inventory of it. It then essentially puts that machine into a queue, and when the machine is required, it's MAAS that feeds it the operating system image that's been selected, the services that are required, and so on. So there's quite a lot of interaction between MAAS and machines.
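That hardware-matching idea is expressed through Juju constraints, which MAAS satisfies from the inventory it took at enlistment time. A sketch, assuming the Juju CLI of that period; the constraint names and values here are illustrative, not a definitive reference.

```shell
# Ask MAAS for a machine with enough cores and RAM for a compute node:
juju deploy nova-compute --constraints "cpu-cores=16 mem=32G"

# Ask for a storage-heavy box for a Swift storage node:
juju deploy swift-storage --constraints "mem=8G"
```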
And that meant that, from a scalability point of view, the number of machines you could attach to MAAS would ultimately become a limit. So the multi-cluster approach allows us to scale to clusters of hundreds of machines, and hundreds of clusters of hundreds of machines. So you can have thousands of nodes now in an OpenStack cloud that's deployed using this infrastructure. There are limitations inside various pieces, as folk who are scaling OpenStack are finding, but that is the key focus now of many of the OpenStack contributors, right: identifying those bottlenecks and knocking them off patch by patch. So I'm told that the upgrade is good, so let's go and have a look. So with a little bit of good luck, this should change. What was that again? Thank you. I feel my brain expanding. So here we go. So this doesn't look very different, but this is now running Folsom. So again, to the folks who made this possible, both in the community of OpenStack and Ubuntu and Juju, I just want to say thank you very much. That's pretty extraordinary. So it does look slightly different. So Horizon is up, and you can see those instances are still up, but I'm not sure. Yep, and WordPress is still up. So we did an upgrade from Essex to Folsom of a cloud running live with production services on top of it, and the services stayed up. I think that's pretty awesome for OpenStack, and I'm not sure that I believe what Gartner says when they say it might not yet be ready for interesting use. So on the telco front, a key set of issues for telco service providers, of course, is billing and metering. So Canonical has been leading something called Ceilometer, which is an effort to create a standardized framework for metering inside OpenStack. Those of you who are here who are looking at, essentially, either internal accounting or external billing around OpenStack should check out the Ceilometer sessions or come and see us at the Canonical booth.
So once you have your cloud up and running, there's a key set of questions about the user experience of the cloud. And users now have the choice of multiple OpenStack clouds that they can go to. And so we started looking at those clouds, engaging with them and benchmarking some of the elements of user experience. Some of them are qualitative, some of them quantitative. So here are four famous but anonymous clouds, and we're just looking at the time to boot and refresh images on those clouds. You can see there's a really dramatic difference, a factor of 15, between those clouds. So in our engagements with telco service providers, if they want to offer Ubuntu, we have a program where we engage with them and do a round of tuning, certification, and optimization. And we support, essentially, the Ubuntu experience on those clouds. We engage sometimes just at the guest level, essentially. If you want to have Ubuntu on your cloud, then that needs to be done as part of the program, so that the end-user experience, whether they download Ubuntu or get it on a cloud, is certified as the same starting point, so that all apps and services will deploy. And we sometimes engage with those cloud providers at a host-and-guest level. So we'll essentially help them design and build the cloud at the host level, and that includes the ability for them to publish Ubuntu in that public cloud. We work today with hardware from a variety of different companies. I was delighted to announce that HP was certifying Ubuntu at the last OpenStack Summit. And there are now nearly 50 ProLiant servers certified by HP and actively being used to build clouds. We saw an announcement recently of OpenStack on Ubuntu on Cisco clouds, and OpenStack is also in use on Ubuntu on Quanta hardware in telco environments as well. So this is what we're about: making OpenStack in production real. And there's a bunch of work involved.
There's designing and building, addressing mission-critical issues (things do go wrong) and providing escalation paths for that. Working with a growing network of partners worldwide who are themselves running OpenStack in production, so that when an issue occurs, they have immediate visibility on whether other people have seen that issue. If something goes wrong in an OpenStack public cloud, it reflects on OpenStack. And so I think we all have an interest in seeing that issue addressed, having the fix for that issue submitted to trunk, but also having it flow as quickly as possible to everybody else who can get that fix in production, so that the reputation of OpenStack continues to improve. So in summary, this is what it's all about at the moment. Crisp, clean deployments of OpenStack, using tools like Juju for developer agility; I mean, that's the real win for enterprises who are doing private clouds today, starting to get their smartest teams on board with new processes, more cloud-like agile thinking in the data center as opposed to just out there on the cloud. Full management solutions, so that people can deploy at scale and manage that infrastructure. And making sure that the guest experience on OpenStack clouds is great, so that the body of OpenStack clouds becomes a real force in public cloud computing. So that's everything. I wanted to say thank you very much, and have a great Design Summit. Thank you.