And now, as we continue through the morning, we have a couple of featured keynotes. I think they've finished the setup over here. The first one is brought to us by someone who has been a guest at our OpenStack Summits a number of times now. It's always interesting; he always likes to walk the wire. He is the founder of Ubuntu and Canonical, Mark Shuttleworth. Thank you so much. Appreciate it.

Well, thank you very much. Welcome to Hong Kong. It is a great pleasure to be here. You know, in the Ubuntu community and in Canonical, our highest purpose is to celebrate the work of thousands of developers across hundreds of projects. But it's very rare that you see a project which manages to combine great leadership, great governance, and being in the right place at the right time. This project has all of those things, so it's a real privilege to be part of it. And I want to thank the OpenStack team for the way they've kept things together despite the extraordinary growth and intense pressure they must experience being at the center of all of this.

Now, we have a bit of a tradition at OpenStack Summits, which is that I try very, very hard to do an impossible thing. So far, so good, touch wood, it has worked. What I really want to do is celebrate the progress that OpenStack is making and show you just what is possible with the latest drops of code. So this time, I'm going to need a volunteer.

Let's see, what have we done in the past? The first time, we did an automated deployment of a workload onto OpenStack. Then, live on stage, we did a deployment of OpenStack from scratch on bare metal in 20 minutes. At the next OpenStack Summit, we did an in-place upgrade of a running cloud: we had a cloud running Folsom, and by the end of the talk it was running Grizzly.
At the next OpenStack Summit, we did a complete reboot of the entire cloud without destroying the running instances, to show that you could use OpenStack in production and essentially preserve guests while upgrading the kernel across the entire cloud. That, of course, is a critical feature for people who are putting OpenStack into production. All of that is controlled by Landscape, the Ubuntu management server.

So you might be asking yourself: what are we going to do today? Well, of course, we're going to build a cloud. But I need a volunteer, and to get a volunteer we're going to use my little ducky. This is, in fact, a random number generator, and the way it works is that whoever ends up holding the ducky can volunteer if they want to; otherwise they can pass it on, they can pass the duck. So if you don't want to volunteer, just throw it over your shoulder. Are you in? All right, give him a round of applause.

While he's coming up, I want to tell you a little bit about Ubuntu. Ubuntu is the reference operating system for OpenStack development. It's the base OS used in more than 80% of today's production OpenStack clouds, and it's the number one guest Linux on Amazon, Azure, and HP Cloud.

Hello, hi. Tell us a little about yourself. What's your name? Atul. Atul, it is great to have you here; thank you for coming up. Atul, do you work for Canonical? Do you know what's coming? Do you know what this is all about? Okay, very good. Do you believe me that Atul doesn't know what's coming? Okay, sit tight, hang on to the duck.

So, what else is Ubuntu? Here's a quote from the excellent folks at Scalr. Scalr is an interface to public clouds; it allows enterprises to have governance over their public cloud activities. And Scalr says that more than 70% of the images they see on those public clouds are Ubuntu. We try to make Ubuntu fast, friendly, beautiful, and really easy to use for developers.
And we build it on a cadence, just like OpenStack. Our 12.04, 14.04, 16.04 LTS cadence, with interim releases, matches OpenStack's cadence, so that we can be the best and easiest way to get OpenStack into your hands. 14.04 LTS is going into planning right now; we will ship it in April. So developers, now is the time to tell us what you're looking for in 14.04: specific dependencies, specific versions. We're going to live with 14.04 for a long time, so please talk to us; our engineers are here to figure out exactly what makes the best base for OpenStack.

We recently overtook CentOS as the platform for large-scale computing. We have been the number one platform on the top 10,000 websites by traffic for some time, and we've now overtaken CentOS on the top million websites by traffic as well.

So, KVM. You may know that Ubuntu was the first Linux distribution to adopt KVM. Atul, would you like to build a cloud with KVM? Okay, but wouldn't it be nice to have some options? Grab a mic. Here's how this game is going to be played. I'd like everyone on the left-hand side of the audience to show their appreciation for Atul. Okay, and now everybody on the right-hand side to do the same. All right, Atul, what are we doing? Are we going left, or are we going right? You'll go with the right. Good choice, there you go. Don't worry, don't worry, that was just a warm-up.

Okay, so we're going to build a cloud with KVM, because KVM is the reference hypervisor for OpenStack. But wouldn't it be great if we could choose our own adventure? Who had those books, Choose Your Own Adventure? Oh man, I loved those. So, our adventure: you get to choose ESX or LXC. So, ESX? LXC? All right, champ, that'll be LXC.

All right, storage. Now, you get the choice. How are we going to do this? Swift, are you listening, Atul? Ceph? iSCSI? Enterprise, guys, enterprise. It's legacy, but we have to talk to it.
All right, so I think that was Swift and Ceph, yeah? Okay, now the network. We could build a cloud with a flat network, because that's how most people do it, but again, wouldn't it be nice to have options? And I heard there's this SDN, Neutron thing. So let's hear it: Open vSwitch? I should have put NSX. Oh, Atul, too many choices. Which way are we going to go? All right, let's try again. Open vSwitch? Nicira NSX? OVS it is, all right. Now, do you believe I had nothing to do with this? He chose; you chose. Okay, we're going to build two clouds. Thank you very much, Atul, it was a pleasure to meet you. Take the duck. There you go.

So our goal is to make OpenStack easy for you and to give you options. The extraordinary thing about OpenStack is that it is this nexus of collaboration across an entire giant industry, and we want to express that by making it really easy for all of these pieces to plug together. The way we do that is with Juju. Juju is one step up from configuration management. It allows us to distill into what we call charms the applications or components, the pieces that we want to glue together in a cloud environment. In this case we have, for example, Nova VMware as a charm; that lets us talk to ESX, which you didn't choose. And here I think we have an example of Ceph and the RADOS Gateway. So you can see how we build different clouds by combining these different pieces. And we test and support all possible permutations and combinations, which we expect over time is going to be a fantastic exercise in interoperability testing: hypervisors, storage options, network options, SDNs, and the physical kit that you might want to deploy all of this on.

So I'm very pleased to announce that we have created the OpenStack Interoperability Lab, OIL. OIL is all about creating a place for vendors to deliver and test both stable code and next-generation code.
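The "permutations and combinations" idea described here, every hypervisor crossed with every storage and networking choice, can be sketched in a few lines of Python. The option lists below are just the choices mentioned on stage, not OIL's actual test matrix:

```python
from itertools import product

# Options mentioned in the talk (illustrative only, not OIL's real matrix)
hypervisors = ["kvm", "lxc", "esx"]
storage     = ["ceph", "swift", "iscsi"]
networking  = ["flat", "open-vswitch", "nicira-nsx"]

# Every deployable combination the lab would need to build and test
matrix = list(product(hypervisors, storage, networking))
print(len(matrix))  # 3 x 3 x 3 = 27 distinct clouds
```

Even with a handful of options per layer, the matrix grows multiplicatively, which is why automated, repeated deployment is the only practical way to keep every combination tested.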
Today, inside the OIL lab, we have multiple clusters on hardware from a whole range of different players. We build the cloud again and again with different permutations and combinations. We build it on stable, and we build it against tip. We also build custom versions of those branches for some of our customers who have their own variations on Nova or Swift or Ceph or any particular component: they publish their branches to us, we organize the continuous integration, and so we're able to know with very high fidelity whether or not they will be able to plug EMC or NetApp or any number of different pieces into that. If you are interested in that, if you have components that you would like people to integrate into Ubuntu OpenStack clouds, then please email oil@ubuntu.com. We're here this week, so it would be great to get together.

Now, to date we have done all of our demonstrations on x86, but I think the next big thing is hyper-dense servers. So I'm very pleased to announce that we are working with these three great companies to bring OpenStack to ARM, so that you will be able to deploy ultra-dense computing infrastructure, highly specialized applications, highly optimized ASICs. I look forward to working with these companies and many more, integrating their contributions into OpenStack and fundamentally changing the face of large-scale computing.

Last time, six months ago, we announced that we were working with VMware to connect OpenStack and ESX. Today I can tell you that that is moving into production in a number of locations. How many folks here have ESX on-prem? How many people have VMware on your network today? This technology and this partnership allow you to bring OpenStack and ESX together, so that you can use the OpenStack API to talk to your existing hypervisor infrastructure as well as any other infrastructure you're deploying in your data centers.

I also wanted to give a shout-out to the folks at Ceph and Inktank.
Now, I'm not impartial here: I liked Ceph so much that I supported the company that spun up around it. But they have done extraordinarily well. How many folks here have actually played with Ceph over the last six months? And what do you think? Yeah? I think it's an extraordinary bit of technology, and it's great to see the rapid pace of adoption around what they have done, fundamentally changing the way we think about scale-out storage. I'm also really pleased to announce that we now have a Windows 2008 and Windows 2012 driver that passes all of the HCK tests. It's in production with some high-volume customers. Those of you who have to run Windows on OpenStack can now do it with confidence, and we look forward to tuning its performance and helping some of our larger network operator customers deliver clouds that are completely platform-agnostic.

So, you've got a cloud. When we first started building clouds on stage (do we have blinking lights? we've got blinking lights), that was kind of revolutionary, that you could just spin up an OpenStack. But I think that by the time we meet again, you will all have done this. All you need is off-the-shelf kit; here I have Dell and HP servers. As of 14.04, every single server that Ubuntu certifies will only be certified if you can, in fact, do this in the privacy of your own garage with that server. We use standard interfaces to talk to the hardware and standard infrastructure to build out the cloud and deploy any kind of workload, but especially OpenStack. So building a cloud is no longer particularly exciting. If you want, we'll come and do it for you in a couple of days; we'll mix up the permutations and combinations that you want, and voilà, you'll have a cloud.

But what do you do with it? Well, how about a PaaS? There are a lot of conversations about this; it's an extremely exciting field.
For the last three or four years, developers have been working with companies like Heroku and any number of other PaaS environments to develop a fundamentally new way of thinking about application development productivity. Our position has been that all of that innovation was really, really healthy. It's fantastic; you can think of 15 or 20 different PaaS options that you could run on Ubuntu, on OpenStack, or just on OpenStack generally. But one of the things I think it's important for us to do is to signal to our customers and our partners who we think is in the lead, and to build relationships that ensure that when we work with a customer around a technology, we can go really deep with that technology for that customer.

So I'm really pleased to announce that Pivotal and Canonical will be working together to make the experience of Cloud Foundry on OpenStack absolutely fantastic. We're going to work together to deliver a turnkey Cloud Foundry solution, not just for Ubuntu OpenStack but for any OpenStack, and we're going to certify and test that openly across the OpenStack ecosystem. We're actually going to work on two products together. First, we're going to charm up Cloud Foundry so that you can use Juju to deploy it. Today there are lots of ways to deploy Cloud Foundry, but the recommended way uses a tool from Pivotal called BOSH, which is a fantastic tool. It thinks in similar ways to Juju, but using Juju allows us to integrate Cloud Foundry more deeply into all of the existing parts of the ecosystem that we've built over many years. So we'll have a charmed Cloud Foundry install, and we will test it on multiple OpenStacks. If there are vendors in the room who have their own OpenStack implementations, we would very much like to certify and test on yours as well. This is not just for Ubuntu.
The second product we're going to work on together is a joint OpenStack and Cloud Foundry solution, so that you can do what we're doing here: take servers in the privacy of your own garage and, with one command, spin up both Cloud Foundry and OpenStack as one product, one solution, sharing resources, sharing databases, sharing infrastructure. Super efficient, super fast, super lean. If you're interested in this, please email cloudfoundry@ubuntu.com. There are folks here from Pivotal and Canonical who would love to talk to you about it.

So what else? Well, how about everything? It is extraordinary how cloud is coming to define modern enterprise IT. One of the things we added to Juju in this cycle is the idea of bundles. Charms encode a particular application or a particular service, but more often than not the idea is bigger than that. You walk into a development office, you see a whiteboard, and it has lots of boxes and lots of lines. We call that whole set of things a bundle. And remember, inside a Juju charm you can use any technology you like: Puppet, Chef, Perl, Bash, Python. However you like to write configuration files and organize that system, you can do it inside the charm, but you glue them all together with Juju.

So this is what Juju bundles look like. I'm not going to draw that by hand; I'm going to do a little trick. You just go to jujucharms.com, well, comingsoon.jujucharms.com, that's our beta site. Someone emailed me this bit of code over here; this is a bundle. So I'm going to take it, drag it, and just drop it, feed it to Juju, and in minutes I'm going to start building this infrastructure. This is a very cool piece of infrastructure over here. It is Jenkins, so there's a bunch of continuous integration going on here, and a bunch of publishing.
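A bundle, as described here, is essentially a declarative description of services plus the relations wiring them together: the whiteboard's boxes and lines captured as data. Here is a minimal sketch of that idea in Python; it models the concept only, and is not Juju's actual bundle schema or API (the charm names are illustrative):

```python
# A whiteboard captured as data: services are the boxes,
# relations are the lines between them.
bundle = {
    "services": {
        "jenkins":   {"charm": "jenkins", "num_units": 1},
        "wordpress": {"charm": "wordpress", "num_units": 1},
        "mysql":     {"charm": "mysql", "num_units": 1},
    },
    "relations": [
        ("wordpress", "mysql"),  # wordpress stores its data in mysql
    ],
}

def validate(bundle):
    """Check that every relation endpoint names a declared service."""
    names = set(bundle["services"])
    return all(a in names and b in names for a, b in bundle["relations"])

print(validate(bundle))  # True
```

Because the whole deployment is plain data, it can be emailed around, dragged onto a canvas, edited, and fed to a deployment tool, which is exactly the workflow being demonstrated on stage.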
You've got WordPress and various other bits and pieces going on in there. This is how people will exchange their thoughts and their ideas about infrastructure in the future. There will be lots of tools that do this, but we think this is a little taste of what's possible when you start thinking about infrastructure as truly elastic, truly plastic. I'd be able to edit this, mark it up, and then send it back to whoever sent it to me.

Of course, I can also start from scratch. If I just wanted to do a deployment the old-fashioned way, I could go and say: all right, I want to take MySQL, put MySQL over there, and just deploy it. Take WordPress... whoa, you get the idea. These are tools that are really changing the way people think about rapid application development, rapid deployment, and DevOps.

If I'm lucky... so this is our MAAS server, by the way. This is where we have all of these servers registered. This server is doing physical provisioning for those two clusters, and we've tagged the Dell and the HP servers, so we just use tags to direct the clouds to the right places.

Okay. Now, over the last six months we ran a competition called the Charm Championship to see what people could do, and this bundle over here was the winning entry. Is Paul Tsar in the room? I thought there might be a chance that he would be here. Paul? All right, well, he is almost certainly going to hear about this very soon, and he'll watch the video. So give him a huge round of applause, please. This was the winning entry in the Charm Championship. So what has he got going in this bundle? Well, it's essentially a monitoring system, and there's a very cool comment here from one of the judges: this particular combination, using Sensu, Graphite, Logstash, Elasticsearch, and Kibana, is replacing Nagios. This is the modern equivalent of Nagios for monitoring and management.
And the cool thing about this is that it's designed to be plugged into any infrastructure. You throw this bundle down in your cloud, then you throw down something else and just connect the two, and essentially all the logs get sucked over and you start to get management and monitoring. Super easy, super graphical. This is really quite futuristic. Let me see if I can get up here. Okay.

To celebrate the naming of the next OpenStack release as Icehouse, we thought we'd do something fun. At our booth, we have a Juju bundle which is an ice cream, so you can come to our booth and order an ice cream using Juju. All you have to do is drag the bundle onto the canvas over there, configure it the way you'd like it, and add some relationships: if you want chocolate sprinkles, you can relate those to your banana ice cream. Then you can deploy it, and when you do, a very nice person will make that up for you and hand you an ice cream. Perhaps not as fast as OpenStack would feed you a virtual machine and a cluster, but nonetheless I hope you'll find it a sweet treat for the breaks in your OpenStack adventure.

So, telcos and service providers are increasingly looking to OpenStack as the centerpiece of their open cloud strategy. We've been working increasingly closely with very large network operators and carriers. Deutsche Telekom in particular has a very pioneering effort to build a marketplace for SMB services; it's been a pleasure to work with them, and they've been very kind to support us with this quote. But we are working with a very wide range of telcos and service providers. If you are in this industry, it is of particular interest to us. We're focusing on scale, on performance, and on carrier-grade OpenStack. So if you email us at telco@ubuntu.com, we'll be very glad to set up a meeting with you while we're here this week.
And speaking of being here this week, it's a great pleasure to be back in China. As a relatively small company, it's very important for us to have relationships around the world with people who can provide insight and training, support us, and support local customers. I'm delighted that we have here today Dong Yi, who will speak very briefly about the experience that her company, TeamSun, has had here in China. We started working with TeamSun a few months ago, and I was delighted with the training program. Please come forward. Welcome, and we look forward to hearing what you have to say.

Hello everyone. Thank you so much. My name is Dong Yi. I come from TeamSun, the largest IT service provider in mainland China. We committed ourselves to OpenStack three years ago, and now we have an OpenStack Technical Supporting Center. We offer free technical support and professional support to those who are interested in OpenStack. The center launched on June 18th this year, and so far 37 companies have contacted us for technical support and proofs of concept. Among them, we delivered one project to a provincial subsidiary of China Mobile. Let me say the same in Chinese for those from mainland China.

[In Chinese] Hello, friends from mainland China. I'm Dong Yi, and I come from TeamSun, China's largest IT service company. We started with OpenStack three years ago, with solution delivery. On June 18th this year, we founded an OpenStack Supporting Center in China. From June 18th until now, 37 companies have come to us to find out how to use OpenStack for their private clouds and how to use OpenStack in their IDCs. One last thing: TeamSun has a subsidiary in Hong Kong, the largest local Hong Kong IT company, called Automated Systems. Automated Systems has committed itself to OpenStack as well. We are going to build an OpenStack platform here.
We invite those in Hong Kong who are interested to come and see what OpenStack can do for them locally. And we see the interest in OpenStack keep growing, which is good news for all of us. Thank you.

Thank you so much, Dong Yi. Thank you. It is an extraordinary sight to see so many thousands of people here. But you know, we shouldn't forget that OpenStack is fragile. There are extraordinarily large challenges for a relatively small team to resolve. So I wanted to comment briefly on what I think is going to be one of the key questions we have to answer over the course of this week.

Now, I love this quote from Albert Einstein: it is in fact easy to make things bigger and more complicated, but it takes courage to go the other way. We should measure our focus by how much internal complexity we're willing to absorb. How many messaging systems do we need? How many databases do we need to support? And we should ask very clearly: do those add value, or do they add complexity? And in terms of the scope of the mission, how far up the stack do we need to go? To me, it's very clear that innovation comes from diversity, and I think we want to enable an ecosystem that is diverse and competitive. There was a great commentary from James Urquhart on GigaOM saying that OpenStack needs to think carefully about how it defines its mission. This is a conversation for all of us. I have only one voice, but I think we should aim to let a thousand flowers bloom. If we focus on making OpenStack lean, clean, fast, scalable, and reliable, extraordinary things will happen. I hope that's a useful thought as we go into this week.

Now, we should have clouds up and running. This will be on our booth later today. And I'd like to invite other folks who have OpenStack installers to a performance challenge.
If you believe that your install of OpenStack will perform better on a workload that exercises both storage and compute at the same time, then we'd love to let those benchmarks fly. So thank you very much. Trust us with your cloud. I very much hope we'll have the opportunity to speak with some of you. If you'd like to set up a meeting with me or my team, email us at openstack@ubuntu.com. Have a wonderful week. Thank you very much.