Next up, I'd like to welcome a veteran of many OpenStack keynotes, Mark Shuttleworth. Come on up, Mark. Hello, everybody. Good morning. Yesterday, Troy described us all as the Star Wars Rebel Alliance, and suddenly a bunch of things became clear to me. I used to think of myself as Han Solo, but recently my girlfriend's been calling me Chewbacca. I can't imagine why. You know, what's amazing about OpenStack is this incredible ecosystem. And if you're a technology supplier, well, hands up, who here is a technology supplier? A bunch of you. Well, you may feel that your success depends on OpenStack, but the opposite is also true: OpenStack's success depends on you being successful as well. And the thing I hope every vendor, every technology supplier, appreciates most about Ubuntu is that we don't compete with our ecosystem. We don't come to the table with our own favorite messaging system or hypervisor or database or PaaS. We try to enable an ecosystem where everybody will succeed. And maybe that's why so many people choose Ubuntu when they're moving to the cloud. Even companies that have spent a long time doing things one way on Linux choose Ubuntu as they move to the cloud. If you look at the public cloud, nearly 70% of the Linux workloads on all of the major public clouds are Ubuntu, even for enterprise, commercially supported applications. And in OpenStack, the substantial majority of the production enterprise deployments of OpenStack are companies that have chosen Ubuntu as well. And what makes me very proud is that five out of the six featured users, the superusers you'll see here, chose Ubuntu. And in every single case, there are multiple other vendors from this audience who are bringing their innovation to bear on those projects and sharing in that success. Six months ago, we announced the Ubuntu OpenStack Interoperability Lab, OIL. And I'm delighted that 14 vendors have now joined it.
There are many more in the queue, going through the paperwork. And what's unusual about that is that it's not a single one-time certification of interoperability. In fact, it's continuous interop testing. Every day we build OpenStack 100 or more times, with different permutations and combinations of components from different vendors. That's extraordinary. We can build a different cloud every 15 minutes, across a whole bunch of different clusters of hardware from different vendors. And the way we do that, if we could switch to the demo screen, is by using Juju, this amazing service orchestration system that's completely open source. It allows us to build a cloud just like this. So this is a definition of a cloud. We generate these every 15 minutes and throw them at this huge cluster containing hardware from different vendors. And what you should see there is OpenStack being built essentially from scratch. We do this on bare metal, installing the operating system and then installing all the different components of OpenStack. Each box there, and it's always a bit of a relief when the boxes start to arrive, comes from what we call a charm, which is like a package of software for the cloud. And charms are completely reusable. They can be written in anything you like: Chef, Puppet, it doesn't matter. One box there can be in Chef, one can be in Puppet, one can use Docker, another can use Salt, and Juju, as a service orchestration system, glues those all together. So that allows us to build the cloud incredibly quickly. If we can come back to the deck: it also allows us to do continuous interop testing, both on the stable version of the code and on the tip of trunk. And it allows us to do continuous integration as a service.
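To give a flavor of what that charm-driven build looks like, here is a minimal, hypothetical Juju 1.x session wiring up a few OpenStack services. The `juju` shim below simply echoes each command, so the sequence can be run and inspected without a live cloud; the charm names follow the era's charm store but are illustrative, not a transcript of the actual demo.

```shell
#!/bin/sh
# Dry-run shim: echo each juju command instead of talking to a real cloud.
juju() { echo "juju $*"; }

# Deploy a handful of OpenStack charms (names illustrative of the 2014 store).
juju deploy mysql
juju deploy rabbitmq-server
juju deploy keystone
juju deploy nova-cloud-controller

# Relations tell Juju how the services should be wired together.
juju add-relation keystone mysql
juju add-relation nova-cloud-controller mysql
juju add-relation nova-cloud-controller rabbitmq-server
juju add-relation nova-cloud-controller keystone
```

With a real Juju environment bootstrapped against MAAS, the same commands would provision bare metal and install each component, which is what the on-screen boxes represent.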
For those of you, as vendors, who are actually shipping OpenStack itself rather than plugins, and are modifying Nova or Neutron, we can do continuous integration on your branch in that OIL environment against the components from other vendors, so that you can tell your customers whether or not your version of OpenStack will work with components from other vendors. And this generates a treasure trove of information for vendors. So here I have a sample report from OIL. We share these with vendors every month. And it's an absolute goldmine, because it lets you know exactly where you need to focus your engineering resources. It also enables us to do same-day turnaround on validation for a proof of concept, if you have a customer who is looking to build a cloud with components from you and from other vendors. So for those of you who are here to figure out how to integrate your products with OpenStack, joining OIL is probably the single best investment that you could make. What this means for users, though, is complete freedom in your cloud architecture. You know, we can place those services, those different charms, across any combination of any hardware. What that means is that your cloud architects can essentially build, in a morning, a cloud based on a particular architecture, placing services in particular places in the network. They can study its performance for your particular workload. Then they can tear it down over lunch and build it again differently in the afternoon. And people have said again and again that speed is what matters. Accelerating your ability to learn, to master OpenStack, is what this is really all about. So people who've adopted this way of thinking, who essentially crowdsource all of the core components but maintain their own architecture, say they have mastered OpenStack and move much faster.
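The service-placement idea can be sketched the same way. The `--to` flag and machine constraints were real Juju 1.x features for pinning services to particular hardware, though the machine IDs and MAAS tag below are made up for illustration; the shim again just echoes.

```shell
#!/bin/sh
# Dry-run shim so the placement commands can be run and read anywhere.
juju() { echo "juju $*"; }

# Place services on specific machines in the cluster (IDs are hypothetical).
juju deploy ceph --to 3
juju deploy ceph --to 4

# Or let MAAS tags steer placement (tag name is hypothetical).
juju deploy nova-compute --constraints "tags=compute"

# Tear it down over lunch and rebuild differently in the afternoon.
juju destroy-service nova-compute
```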
So when we work with the very largest clouds, their architects essentially model what they want and then use Juju to place the services exactly where they want on their network. But for smaller organizations, you may not have cloud architects, or you may want a best-practice reference architecture. So I'm delighted to reveal, for the first time, a fully automated reference OpenStack installer which we've built on top of Juju. Juju is open source and all of the charms are open source. What we've done is build an intelligent installer on top of those, which we've made available to our customers as part of our Landscape systems management product. Again, if we can switch to the... there we go, thank you, you guys are well ahead of me. So over here you'll see a new OpenStack tab, and that's going to let me build a cloud. I'd like to demo this live on stage. I have over here a cluster, in fact a SeaMicro cluster from AMD with 64 fully loaded nodes, and we're going to use Landscape to install OpenStack on that cluster. What we've tried to do is design down to just the key choices that any user would have to make. So what I've done is register all of the machines in that cluster with Metal as a Service, MAAS. And I've connected MAAS to Landscape. So Landscape sees MAAS, it sees that I've got at least five machines ready to go, and that one of them has multiple NICs for Neutron gateways and so on. And I'm going to say: let's go create a cloud. I need to name it, so let's call it ODS Atlanta. I need to make some core choices. So I'm going to choose a hypervisor. We'll go with KVM. And as vendors come through the OIL process, we will add them to this if they want. So if you're a vendor and you want to be on this page, then joining OIL and letting us get confidence in your charms is the right way for us to be able to do that. We could choose different SDN solutions; lots of interesting options emerging there.
I'm going to go with Open vSwitch. I'll choose Ceph because we love it dearly. It's brilliant; it has been brilliant for three years. And now I need to add some hardware. In MAAS, you can allocate hardware to different physical zones, so you can model in MAAS the physical separation of clusters on your network. I'm just going to add two zones, and of course this will map those to availability zones in OpenStack. So you do that. All right: install the cloud. All of these machines are switched off. There's one little server in the back which is running MAAS and Landscape. Landscape will interrogate MAAS and the hardware profiles, so it will study every node there. Then, based on that, it will build a best-practice cloud using everything that we've learned from building the world's biggest OpenStack deployments. What's cooler is that that architecture will evolve. Landscape is a management system, and you update Landscape every month. As you update it, it will evolve your cloud. For the Sony chaps: it will handle version upgrades inside OpenStack if you want it to. And it will manage the system from the bare metal all the way up, giving you a consistent cloud experience, effectively as a service. But if you wanted to create your own architecture, your own installer, the important thing is that you can do that, and many people are starting to do that now on top of the very same charms. In other words, they're reusing the same charms, the same service definitions, that telcos, banks, media companies and others are using. As those charms get better, their implementation of OpenStack gets better as well. So you're crowdsourcing, from all the users of Ubuntu, the expertise to really optimize OpenStack components. And when you build your own installer on top of that, you're reusing all of that expertise as it gets more secure and more performant. How good could it get?
Well, that SeaMicro chassis is pretty amazing. It has 64 nodes. AMD was kind enough to lend us another five at first, so we had six SeaMicro chassis. And on that, we set what we believe is a new world record for the number of VMs launched on OpenStack. We managed to launch 75,000 in six and a half hours. That beat the previous record by an hour and a half. We then kept going, and it got to 100,000 VMs. If you want to know what that looks like, here it is. That's 100,000 VMs on just under 3,000 cores. It was slightly overcommitted. So AMD got very excited and gave us another three of those SeaMicro chassis. We turned them on, registered them in MAAS and expanded the cloud; it's all very elastic with service orchestration. And we pushed it up to 576 hosts and 168,000 VMs, all in the course of a weekend. But if you wanted to go really big, how about the world's fastest supercomputer? Well, that's in China. And it happens to run Ubuntu. Over the last couple of months, we've been invited to install OpenStack there. The intersection of HPC and OpenStack is going to be amazing. That's all running on Intel Xeons. So: the fastest computer in the world, at the largest scale. An amazing piece of kit. At the exact opposite end of the spectrum, though, you might think that OpenStack is built to be big and bad. But what if we could show you OpenStack as a super-light, super-small experience for tiny clusters? For the first time ever in front of a public audience, this is ARM 64-bit hardware running OpenStack on Ubuntu. Those are the X-Gene development boards from Applied Micro. At our booth, you can see OpenStack running on those with a whole bunch of really interesting workloads: Logstash, Kibana, everything up and running. And it's beautiful. And just to complete the silicon picture, last week we announced the completion of the port of Ubuntu to little-endian POWER.
And so that makes Ubuntu the one platform that can give you an OpenStack experience across all of the different architectures that are out there in widespread enterprise use. Each of them has something beautiful about it, something interesting. And we're here to enable innovation, to enable you to choose the platforms that suit your workloads. So you could have a lot of fun with this. And here's some of the fun that we've been having. You may have spotted these on our booth; come and have a look at them. Intel also makes these very cool microservers, and we wondered what it would be like if we had 10 of them in a box. And so here we have the Orange Boxes. Inside each of those are 10 Intel microservers, with Juju and MAAS. You can do anything with those boxes. They're an amazing way to master distributed systems. We're using them for Jumpstart training. Our Jumpstart program now has a training dimension, for $10K plus travel and accommodation. We'll come to your offices with an Orange Box and spend two days training you to master OpenStack, building it in different ways, as many different ways as you like, combining components from the different vendors in OIL. And once you've mastered it, we'll leave that box with you for two weeks so that you can figure out exactly what you want to do with it. Most people at that stage are ready to go and build their own OpenStack clouds. They've figured out everything they need to make OpenStack exactly what they want it to be. And if you do build a cloud, if you do choose Ubuntu on your cloud, you're in really good company. Some of the most amazing companies in the world are building on Ubuntu and OpenStack. Deutsche Telekom, building their next-generation network. Time Warner, building a huge media property essentially on Ubuntu OpenStack. NEC, one of the world's leading telco network equipment providers, shipping telco solutions on OpenStack on Ubuntu and offering a public cloud.
And NTT, one of the great telcos in the world, building a series of clouds and also bringing some really amazing SDN technology to the OpenStack community. They, and many others, have chosen Ubuntu; I think five out of six of the featured superusers here have. They've chosen it because Canonical and Ubuntu have many firsts in the cloud. We were the first to adopt OpenStack, by some margin. We were the first, and still have the most, KVM hosts and guests in production use. We were the first by a long way to adopt Ceph, and there will always be vastly more Ceph in production on Ubuntu than on other platforms, I imagine. We demonstrated live migration on stage here at OpenStack for the very first time. We pioneered cloud-init, so every time you boot Linux on any cloud, you're using technology developed by Canonical and contributed as open source. And right now, the key thing that we think is absolutely fascinating and amazing is Linux containers. In 14.04 LTS, we shipped LXC 1.0. This is a cross-distro effort led by Canonical; more than 60 companies participate in that initiative. If you haven't played with LXC, just grab an Ubuntu desktop and spend an afternoon with it. It will blow your mind. When you see any Linux distro, you know, CentOS, Debian, Ubuntu, Red Hat, booting in less than a second right there, it's virtualization like you've never imagined it before. So it's really amazing, really fantastic. For Linux-on-Linux environments, it's changing the world. And it's the basis for a huge amount of innovation, including Docker, which you might have heard of, and tons of other things, essentially in the PaaS space, using containers rather than VMs to accelerate everything they do. We have been an innovation enabler. You know, innovation comes from small companies like Docker, like the Ceph guys. We are delighted for the Ceph guys with their acquisition.
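If you do spend that afternoon with LXC, the 1.0 command-line flow looks roughly like this. The shim below turns it into a dry run so the sequence can be executed anywhere; the container name is made up, and the distro/release arguments to the cross-distro "download" template are just one example of what it accepts.

```shell
#!/bin/sh
# Dry-run shim: echo the LXC commands rather than creating real containers.
lxc_run() { echo "$*"; }

# Create a CentOS container from LXC 1.0's cross-distro "download" template.
lxc_run lxc-create -t download -n demo -- -d centos -r 6 -a amd64

# Start it in the background; on 14.04 this boots in well under a second.
lxc_run lxc-start -n demo -d

# Get a shell inside the running container.
lxc_run lxc-attach -n demo
```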
You know, when you buy a company, you don't necessarily buy the magic, and you can't buy the ecosystem. And what is important to us is to be at the center of these thriving innovation ecosystems. We will be fully supporting Ceph, with all of the management solutions that are available for it, for all of our customers and users on Ubuntu. Now, for those of you who are operating clouds, this is the cold, hard truth: you will be benchmarked against the world's biggest and leanest cloud companies, Amazon, Microsoft, Google. They do a great job of what they do. For OpenStack to succeed, it has to deliver innovation; that's what this ecosystem is for. And it has to make it possible for people to build clouds that can compete economically with those companies. And that's why we make this really strong commitment: Ubuntu is the most cost-effective way that you can build a complete, commercially supported, sustainable private cloud. A year ago, we announced availability-zone pricing, ranging from $75K to $350K per availability zone. It never gets bigger than that, no matter how big the zone gets. And that's immediately become popular with all of the people who aspire to scale. If you aspire to scale, if you want to build a big cloud, then this is the model for you. But today we're also announcing another model, for folks who are building smaller clouds and who would essentially like a best-practice cloud that is entirely managed. So on your choice of hardware in your data center, from the pool of hardware certified on OIL and Ubuntu, we will help you design and build a cloud. We will manage that cloud, a complete managed cloud to an SLA, for $15 per server per day. And we'll make it fly using everything that we have learned from the world's biggest OpenStack deployments. If you do that with us, your cloud will be very good. It'll be carrier grade: these are big carriers that are building big OpenStack infrastructures. It'll be banking grade.
We're delighted to be working with network equipment providers like Cisco on the carrier market, the telco market, meeting the specific needs of telcos. It'll be banking grade; you'd be amazed at how much of Wall Street is adopting Ubuntu. I was there three weeks ago and heard this completely non-attributable quote from someone who I believe is here. I hope they're continuing in the same theme throughout the week. But most importantly, it's going to be Chuck Norris grade. I love dealing with the media companies. They're kind of crazy. They're pushing the limits. They love Ubuntu. And it's great to work with companies in that space: Comcast, Bloomberg, Time Warner, Disney. It's incredible what they're doing with OpenStack on Ubuntu. With all of that activity, of course, we're hiring. Now, we don't hire prima donnas; the open source community has plenty of those. What we're looking for is people who are passionate about serious scale and serious quality. If you think you could really enjoy working at Google, because of the automation and the scale, but you want to work in a smaller company and bring that kind of capability, that Google-esque automation, to the rest of the world, then come and talk to the folks in orange. Right. Now, I promised you a little bit of magic, so here it comes. Juju, that magical service orchestration system, makes almost anything in distributed systems really, really easy. People don't believe that you can spin stuff up that quickly. At an IBM conference two weeks ago, we spun up WebSphere and Hadoop and a bunch of other things in about three minutes, and the audience just couldn't believe it. We actually had to go and show them WebSphere running. It was there. It was live. It was real. So I want to show you a couple of things that you can now do with Juju that you couldn't last time we spoke. So if we can... thank you. Oh, here comes the cloud. And I see blinking lights. So far so good. This is Cloud Foundry.
This is Cloud Foundry, which will be available as both an open source and a commercially supported product, supported by both Canonical and Pivotal, by the time we meet again. So this is Juju deploying Cloud Foundry. You could deploy this on any cloud. This is on EC2, I'm afraid to say, but it will also work on OpenStack and on Azure and on bare metal. However you want to consume Cloud Foundry, you'll be able to get it like this, with Juju, from Pivotal and Canonical. If you wanted to do Hadoop, for example, this is what Hadoop looks like. I can do this straight from the store. So this is a little cluster, a very simplistic cluster, of Hadoop. This, of course, doesn't show you anything about scale; this cluster could be thousands of nodes, because that's just a logical model of what's connected to what. I could go in here, take the slave cluster, and just scale it up to 200 nodes, and it would ask the underlying cloud for another 199 and we'd be off to the races. What else could we do? Toby Ford is talking about NFV, network function virtualization. This is about creating distributed services on top of clouds that improve, enhance, or create software versions of some of the existing appliances that telcos use. Well, here is an open source NFV solution. This is an IMS, as it's called, which is a really critical piece of kit for telcos: that's Clearwater, from Metaswitch. As a series of charms, you can spin it up on any cloud, or again on bare metal, and in just a couple of minutes you're good to go. What else? Oh, yes: we spent some time with a company that's quite famous for cars on demand, and they were looking at ways to make their developers more agile, their DevOps more agile. So this is what we modeled out for them. They were able to get a complete replication of their full environment, which the developers can spin up in memory, or throw at a cloud for test, or throw at VMware or physical infrastructure for production.
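The Hadoop scale-up described above, asking the cloud for another 199 nodes in one step, is the `add-unit` command in Juju 1.x. A dry-run sketch, with the shim echoing instead of provisioning and the charm/service names chosen for illustration:

```shell
#!/bin/sh
# Dry-run shim: echo commands instead of provisioning 200 machines.
juju() { echo "juju $*"; }

# A minimal logical Hadoop model (charm and relation names illustrative).
juju deploy hadoop hadoop-master
juju deploy hadoop hadoop-slavecluster
juju add-relation hadoop-master:namenode hadoop-slavecluster:datanode

# Scale the slave cluster from 1 to 200 units with a single command;
# the underlying cloud is asked for the additional machines.
juju add-unit hadoop-slavecluster -n 199
```

The logical model stays the same whether the service has one unit or thousands, which is the point Mark makes about the diagram not showing scale.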
So that's a whole bunch of different services reusing these charms. There's Rails in there, Memcache in there, tons of different app servers, tons of different databases. So it really is getting pretty amazing. Coming back to the slides: the ecosystem around Juju has grown fantastically. These are some of the companies that are using Juju to deploy complex systems, or charming up their applications so that you can deploy their applications just as fast as that. And this week I'm delighted to announce that another significant company has stepped up and committed a full team to work on Juju, and that's IBM. So I'm going to hand over to Daniel Sabbah, the CTO and GM of Next Generation Platform at IBM, to tell you a little bit about that. "If you look at the specific areas of collaboration between IBM and Canonical, what we're talking about is basically leveraging the wide variety of Juju and charm assets in the context of OpenStack, in the context of Bluemix with BOSH, in the context of SoftLayer, and the way in which we actually characterize workloads and services, so that people can take full advantage of the underlying SoftLayer resources to then optimize their individual workloads. The only way we can actually do that is through that kind of a partnership." So we'll be working with IBM on three things. The first is the integration of Juju and Heat. Heat does a fantastic job of OpenStack infrastructure orchestration; Juju does an incredible job of application orchestration; and bringing those two together will be amazing. So if you have a charm of your software, that will be the easiest way for you to create Heat orchestration templates: take a collection of charms, and off you go. And you'll be able to use those same charms anywhere, of course. In OpenStack it'll be amazing to use them with Heat. We're going to do the same with Juju charms and TOSCA, which is another orchestration standard: so, Juju and TOSCA.
And of course we're charming up IBM's portfolio of key software for scale-out and distributed applications, as well as software from IBM partners. So, people all over the world tell me they're just loving that agility, the acceleration of their DevOps-type process that Juju gives them. But there is one common request, one very frequent request, which is the ability to use Juju with other operating systems. Our focus, of course, has been Ubuntu throughout. But I'm delighted to say that, thanks to a customer contribution, over the next couple of weeks we will add full CentOS support to Juju, which is orchestration, and to MAAS, which is physical infrastructure magic, physical infrastructure provisioning. Now if you look at the charts, CentOS and Ubuntu between them are about 80% of the cloud workloads out there. So pretty much anything that you're doing on the cloud, you could now orchestrate with Juju. But perhaps that's not enough. Perhaps in some cases it will be useful to use a platform supported by a much larger company. And so I'm also delighted to tell you that, thanks to an unexpected contribution from a new member of the Juju community, both Juju and MAAS, so orchestration and physical provisioning, will grow full support for Microsoft Windows. And to tell you a little bit about that, here's the CEO of Cloudbase, who contributed that work. "Cloudbase Solutions leads integration between OpenStack and the Windows ecosystem. We recently added Windows support to Juju and MAAS to meet enterprise customer requirements for provisioning and orchestration in mixed scenarios. Now MAAS supports bare-metal provisioning of Windows alongside Linux deployments at scale, including OpenStack environments with mixed Hyper-V and KVM nodes. Juju enables complex Microsoft deployments with just a few clicks, including Active Directory, IIS, SharePoint, Exchange and more. Windows software charms use PowerShell and Python, and can have all the same power as Linux charms."
The result is amazing: rich orchestration with both Windows and Linux components, like magic. Really, the best orchestration on the market. And it's entirely based on open source. I would encourage you to check out the Cloudbase booth; they've done some very, very cool work. I was absolutely stunned when they made this proposal. Just to prove that it's not just talk: on those Orange Boxes over there, one of them, you'll see, has all the lights on. That's actually running OpenStack right now. The other one has most of its lights switched off. So why don't we turn those lights on? What I'm going to do is kick off a deployment of OpenStack with mixed hypervisors. So there'll be physical installation of Ubuntu on that Orange Box going on, and there'll be physical installation of Windows on that Orange Box from scratch. And then, orchestrated on top of that, all of OpenStack, including the Nova controller and, sorry, Nova compute on Hyper-V. So that cloud will essentially be running OpenStack with Hyper-V on that Orange Box. So here it comes up. Windows takes a little longer than Ubuntu to install, but it does come up, and you'll see it live on the booth. Also, what about applications? Well, here's what's running on the other Orange Box. Again, that's an OpenStack cloud with Windows images. And we've spun up on there Windows Active Directory, Microsoft SQL Server and Windows SharePoint, all orchestrated. So imagine the possibilities: imagine being able to deploy OpenStack with Active Directory backing your Keystone, or SQL Server as the database controller. The world is moving very, very quickly. Heterogeneous environments, heterogeneous hypervisors, heterogeneous applications, heterogeneous platforms: that's the ecosystem that we want to enable. I very much hope we'll see you tonight at the Mardi Gras and Magic party in the Murphy Ballroom, where we'll be celebrating some of that Juju magic. And there's just one last thing.
If we can come to the demo screen again: MAAS has support for all kinds of different hardware. It turns hardware on and off and PXE-boots it and so on. So if you go into the node, you can actually see how it's powered. There are drivers for SeaMicro, for the HP Moonshot clusters, for IPMI, AMT and so on. But there's this "Robot Plastic as a Service" thing. What could that possibly be? Well, what do you do if you're trying to test a service that turns machines on and off, but you don't have a BMC? Well, if you're a cunning kind of person, you take Lego, you make a little robot arm, and you write a driver for MAAS to turn the machine on and off. So, coming back to Troy's theme, this works much better Spaceballs-style. If you have little figurines and you go: turn on the server. Use the force, Luke. Yay! Who would have thought that the same infrastructure that can power on supercomputers, power on whole data centers, deploy hundreds of thousands of servers and put OpenStack or Hadoop or anything else on them, could also power Lego? Have a great week. Thank you very much, and stay well. I just want to give a special thanks to Chewbacca for making time out of his busy schedule to get here. You have no idea how hard it was to get him. So just one more reminder for everybody: our next summit is going to be in Paris. You may have heard. November 3rd. I just wanted to make sure everybody knew the date. There will be more details coming soon. But in the meantime, enjoy the rest of your time here in Atlanta. And thank you very much. I hope you have a good time with OpenStack.