Good morning everybody, I hope you're all having a good time with OpenStack so far. How does everybody feel about Vancouver? I mean, what a city. This place is just unbelievable, and this whole convention center just blows my mind. And I don't know what you guys think about all these design elements, but I think they're pretty cool. I want to give Todd Morey a shout-out. He is the design genius that puts all this stuff together and really makes it a special place, and we're just so lucky to have started out right here in such a beautiful city. So before we get into all the fun OpenStack stuff, all the cloud software, I want to talk real quickly about where we're going to go next. Hopefully you're all planning to join us in Tokyo in October. Another equally breathtaking city, in my opinion. And in 2016, I'm really excited to announce that we're going to be going back to where it all began: Austin, Texas, in April. We had our first design summit there in 2010. We had 75 people, which is I think the size of the crowd cheering in the front row here, the fan club. So after Austin, we're going to go back to Europe, to Barcelona. I'm really excited about this. Getting back to Europe is going to be really important for us, because the OpenStack-powered planet truly is about a global community, not just the software. It's so exciting to be able to go to all these communities and engage people, but also bring as many people as we can to each of these countries. So today, I want to talk about breakthroughs. Because we have this OpenStack-powered planet now, we've got compute, storage, and networking as a solid foundation. What can we do next? What can we do on top of that? I think everybody's interested in breakthroughs, right? Everybody wants to figure out how breakthroughs happen. And Jonathan talked about some very disruptive businesses yesterday, Airbnb and Uber.
We hear a lot about these companies. And I think fundamentally one of the things that's changing because of cloud computing, because of the agility of OpenStack, is that it's making it easier to experiment. Experimentation is really at the heart of what these companies are doing. They have a culture of experimentation, of risk taking. And as the cost goes down to try different models, different features, different technologies, we can see what works and what doesn't. So I think it's important for all of us that the OpenStack organization as a whole, as a community, embrace this exact same model. And I'm going to be talking a little more throughout the day about some of the interesting experimental cloud technologies that we're able to embrace and put into the hands of users earlier than we've done in the past. I think we all associate experimentation with science, and we know that that's how a lot of discoveries are made. But that's also true in technology. A lot of the technologies that are gaining adoption today started off as experiments. And if we look at it on these two axes, maturity and adoption: maturity is essentially how much time the technology's been in development, and adoption, of course, is how many people are using it. So in the early stages of a technology, it's in the experimental stage, right? Not a lot of development time, not a lot of users, but it's very exciting. This is a great quadrant to be in, I think, because nobody knows where we're going to go. And it's a time, especially with the open design philosophy that we have at OpenStack, where everyone can be involved in the design process. Everyone can be involved in the experiments and in seeing what's going to come through as a breakthrough. Now, if you don't put a lot of work in and you suddenly have a widely adopted technology, people sometimes call those unicorns. I don't believe in unicorns, so sorry to break it to you.
But I don't put a lot of faith in that. I think that most of the time, to make a breakthrough, you've got to work at it. And if a technology is in development for a long time and doesn't really get a lot of adoption, it might be a niche solution. But obviously, like all quadrant charts, you want to be up and to the right. And the point here really is just that experimentation is what drives breakthroughs. As we think about some of these new OpenStack technologies, new projects that we're going to be looking at today, it's important to remember that they are experimental, but that's okay. This is an exciting time to take a look at them. Now I just wanted to talk briefly about OpenStack and how we work with enabling technologies. I think it's important to put this in context, because in the early days, people often thought OpenStack was a hypervisor. Does anybody remember this confusion? The reality is that OpenStack, using Nova as an example here, integrated with virtualization, server virtualization. So this is 2010 on the timeline. And server virtualization was already pretty widely adopted at that time; it's pretty mature technology. You fast forward to today, and certainly we see widespread adoption of Nova at Walmart and all these other giant companies and service providers. Server virtualization is a given in every enterprise. So when we think philosophically about what we're going to do in the next stage of experimental technologies for cloud, it's going to follow a very similar pattern. But I think we have an opportunity to get a little bit of a head start, because we have a much bigger community now, a much bigger user base than we did at the beginning of Nova. So can anybody think of a technology in cloud that's kind of in the experimental phase right now? Anybody? Containers? I am really glad you said that. I was starting to sweat. Yeah, I was not sure how to explain this slide if you didn't say containers. So yeah, containers, right?
So I think people are very excited about containers and Docker, right? And for good reason; this is a very exciting technology. I'd be curious to know how many of you are running in production today with Docker? Show of hands, okay? Not a lot, a few people. Now how many are interested in learning more about containers and how they might work for your organization in the future? So that's exactly what we expect when we're in this experimental phase, right? Lots of interest, not necessarily running in production, but that's fine. This is the whole point of open design. The whole reason we come to these summits is that we have an opportunity to hear from users, developers, the ecosystem, and think about that. And so looking back at this diagram here, there are many technologies that are coming on the scene in containers. And probably a lot of users are trying to sort all this out, you know, Docker versus Rocket, Mesos, Kubernetes. I mean, you've probably heard of all these, but to really understand the differences and to try to understand, you know, which one's going to win, that's very difficult, right? And if you think about OpenStack, again, we're not a hypervisor, right? We integrate with hypervisors. That's exactly the same pattern that I see happening with containers. So we have a number of OpenStack projects that are just getting started: Murano, which we're going to see a demo of in a few minutes, Magnum, which we're also going to see a demo of, and Kolla. These are new OpenStack projects that are not trying to reinvent the wheel. They're not trying to duplicate what these other interesting container technologies do; they're trying to integrate with them. And so, you know, the question of who the winners are going to be is an open question, right? When you're in the experimental phase, you don't know who the winners are going to be. The important thing for us as a community is to think about OpenStack as this integration engine that's agnostic.
Just as being hypervisor agnostic was a critical part of how we approached virtualization, it's the same thing you'll see as we talk about the approach to containers. And that puts users in the best position for success and allows people to take advantage of these technologies as they mature, when they fit their use case. So if we look back as an example, again, as Jonathan said, processing, storing, and moving data. It's really compute, storage, and networking, right? So how have we approached that to date? You know, we support every major hypervisor out there, right? Block storage, we have Cinder, and we have, you know, 60 different drivers. So whatever technology you're interested in, it's going to be something you can plug in. Same thing with networking. So just as we didn't try to reinvent the wheel in compute, storage, and networking, I think that, you know, we'll follow a similar path when it comes to containers. And another myth I think that still persists a little bit with OpenStack is that it's just about virtual machines. And that's really not true today. We've actually had bare metal support for several releases. And in a minute, we're going to bring up a user who's using OpenStack bare metal at a huge scale and looking forward to Ironic, which is the evolution of that. And we are also going to talk to a user who's currently at scale running containers in VMs on OpenStack in a public cloud. And so as we think about Magnum and we hear more about how that can take container support to the next level in OpenStack, you'll see that what we're really enabling, whether it's containers on bare metal, containers in VMs, or just traditional VMs, is all these different types of technologies available, you know, under one common set of APIs.
And that's what I think users are looking for, because you're going to have a lot of different workloads that call for different technologies, whether they're containers or VMs or a combination. And being able to put that into one system is very powerful. So when we think back to this question of what is OpenStack, you know, people have said it's a cloud platform, or a cloud operating system. Really, when you actually look at how this all ties together, what I think it is is an integration engine. And that's what it's always been. I think it's sometimes hard for people to understand that. But if you look in an OpenStack cloud, there are many other technologies today; they're in every OpenStack cloud. And OpenStack is about bringing it all together. So to talk about how bare metal is happening in a very real way today, I'm very excited to welcome up James Penick, who's a cloud architect at Yahoo. All right. Now that is bare metal. Absolutely. All right. So Yahoo has been involved in OpenStack for a long time. It's really exciting to have you up here on stage today. So can you tell us a little bit about, you know, what OpenStack means to Yahoo, what you've been doing at Yahoo with it? Absolutely. So Yahoo has a very large adoption of OpenStack. It's a very big deal for us. We've been involved in it for several years now, typically in the top 10 contributors. And we have members of our staff that even made it onto Nova core, Ironic, and Oslo core. And right now, Yahoo is one of the first megascale infrastructures in the world. By megascale, I mean servers in the hundreds of thousands. Wow. That's pretty big. Yeah. So of course, the problem is, when you're one of the first ones out there that reaches that scale, there's no product yet that can support you. So we had to build a lot of bespoke tooling to handle all of that. So we have hundreds of thousands of physical servers, tens of thousands of VMs.
And at the moment, those tens of thousands of VMs are managed by OpenStack, and tens of thousands of bare metal servers are currently managed by OpenStack. That's huge. I mean, not a lot of people operate at that scale. It's great to see that OpenStack can be part of that. Yeah, it's been a lot of fun. By the end of this year, we anticipate having the majority of those hundreds of thousands of servers all managed entirely by OpenStack. Wow. That will make you one of the largest OpenStack clouds in the world? I'm quite sure. Yeah, I keep joking that we've been secretly building the largest bare metal cloud in the world. So let's talk a little bit more about bare metal. Like, what exactly are you doing with it today? What are your plans? So at the moment, we took some of the older legacy Nova bare metal drivers, and we modified them to sit on top of our existing provisioning infrastructure. We did this to kind of get us going, and the first step was really to put OpenStack as a single consistent API on top of our infrastructure, so that when you need to provision compute resources, regardless of whether it's a VM, bare metal, or a container, when you need compute, you go to OpenStack. Wow. So that's already something you're realizing today. And so I guess, as one of our super users that's involved not only as a user but with developers, where do you want to see OpenStack go next, and how are you going to be involved in making that happen? So OpenStack in the last few years, I think, has improved leaps and bounds and has made great strides. And there are some things that we would like to see next that we want to help drive in the community. Certain features that really are more appropriate for a private cloud than a public cloud. So flavor-based quota, the ability to regulate which tenants can boot which flavors in which places. So locality, that kind of thing.
And other pieces that give you the ability to tie OpenStack to internal business and finance processes. So when you bump quota, you need a way of tracking that: knowing how it went up, why it went up, who did it, having an external reference field or something like that. That's a lot of the stuff we're looking for. That's cool. And you'll be working in the upstream communities to help make that happen, right? Yeah. One of our guiding principles is that whenever possible, anything we do with OpenStack, we should upstream, as long as it's something that's appropriate for the upstream community. Good. I hope everyone out there is taking notes. That's the right way to be a super user. So, well, thank you so much for joining us up here. Tens of thousands of servers going to hundreds of thousands. That's just incredible scale, and it's great to see users so involved in putting it to work. So thank you very much. Appreciate your time. All right. Next up, I'm happy to introduce Zack Rosen, who's the CEO of Pantheon, to talk about what they're doing with OpenStack. Whoo, Zack! How's it going? It's going well. So what the heck is Pantheon? So Pantheon is a website management platform for Drupal and WordPress. So if you want to enable your marketing and website teams to build, launch, and manage awesome websites, you should check us out. Okay. And so how are you actually achieving some of that in terms of technology? Yeah. So just for some context: we currently run 100,000 websites, and we're doing billions of monthly page views. Do you mind if I grab the clicker? Oh, yeah. We actually need to put your slides on the screen. There you go. Okay, there we go. So we run 100,000 websites. We're doing billions of monthly page views. Our customers include Arizona State University, Tableau Software, and The New Republic. And so I just want to talk a bit about how the technology actually works.
To give you some sense, we run over 400,000 PHP environments for WordPress and Drupal. We had to solve two fundamentally hard problems to make Pantheon work. The first is really a problem of scale and performance. We're very proud of the fact that we're the highest-performance website management platform on the market. And if you are familiar with website traffic, it's a little random. Oftentimes you have normal traffic, normal traffic, normal traffic, and then all of a sudden you get this huge traffic spike because you're internet famous all at once. And these traffic spikes for our customers are frequently in the 100x range, not 10x but 100x. So solving that problem is very challenging. But actually, the more challenging problem was this. The cool thing about Pantheon is we let customers spin up development environments whenever they want. It's really easy to spin up a copy of your whole stack, have a collaborator work on that stack with you, spin up another stack for another collaborator. And this is why we actually end up running 400,000 environments: to support all that parallel development. And as part of that, each one of these environments has to be resource-isolated from the others. We have to enable you, the developer, to write the dumbest SQL query in the world without taking down one of our live customer sites. It's a popular pastime. Yeah, exactly. So that's the hard problem, and I want to talk about how we actually solved it. But first I want to talk about how you do not solve this problem. Don't solve it this way: the old way, the VM hosting architecture. You have some customers on essentially shared hosting, so if Jim has a bad day, you're going to have a bad day. Then you can put some customers on VPSs, but that's a single point of failure. So really you want to get everyone on a cluster, but that's thousands of dollars a month.
And we have some customers that have 2,000 websites. They can't possibly afford to have every site on its own cluster. Right. But the real issue is this. If we were to do this for 400,000 environments on EC2, for example, we would literally be running over 400,000 EC2 mediums, which would be ridiculous. And I know one person would be happy about that. Yeah, one person would be happy with that. Exactly one. What is this, EC2 VM energy efficiency? Yeah, so here's the irony of all this. If we had 400,000 EC2 VMs going, we'd need something like 100 operations engineers. And EC2 on average is about 7% CPU efficient. I don't know if you knew that. So mostly we'd be paying for a bunch of VMs that sit around basically doing nothing. And EC2 would be very happy about that. So that's clearly what we did not do, and I'll talk a bit about what we actually did do. Pantheon runs on 100% containerized infrastructure. So when you sign up as a developer, you'll get a free development environment. We give those away for free. It has a container for your app, a container for your database. When you're ready to launch, you can hit a button, swipe a credit card, pick a plan. Our high availability plans will run your containers on different infrastructure; that's how we do high availability. And then we can scale you up from there. We have some customers on dozens of containers. We had a team that actually launched the 150th highest-traffic site in the entire country on Pantheon last week. We just deployed them more containers. It's pretty quick and efficient. But the real benefit of this architecture for us is that it's incredibly efficient for us to operate at scale. There are a lot of folks excited about containers as a way to make it easy for developers to write once and deploy to any infrastructure. I actually think the real benefit of containers is mostly for the service providers building and operating software at scale.
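The density argument here can be put into a quick back-of-the-envelope calculation. Apart from the ~7% utilization figure mentioned above, every number in this sketch is an illustrative assumption, not Pantheon's actual data:

```python
# Back-of-the-envelope comparison of one-VM-per-environment versus
# packing environments as containers. All figures except the 7%
# utilization number are illustrative assumptions.

ENVIRONMENTS = 400_000        # PHP environments to host
VM_UTILIZATION = 0.07         # ~7% average CPU utilization per dedicated VM
CONTAINERS_PER_HOST = 200     # assumed container packing density per host

# One VM per environment: you pay for 400,000 mostly idle VMs.
vms = ENVIRONMENTS
busy_vm_equivalent = vms * VM_UTILIZATION   # capacity actually doing work

# Containers share hosts, so idle environments cost almost nothing extra.
hosts = ENVIRONMENTS // CONTAINERS_PER_HOST

print(f"VMs needed:   {vms}")
print(f"Useful work:  roughly {busy_vm_equivalent:,.0f} VMs' worth")
print(f"Hosts needed: {hosts}")
```

At an assumed density of 200 containers per host, the same 400,000 environments fit on about 2,000 hosts instead of 400,000 VMs, which is the whole point of the containerized architecture.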
So to give you a sense of this: 400,000 fully functioning, resource-isolated PHP runtimes, and our DevOps staff is four. We deploy 15 times a day across all of these sites. We patched Heartbleed on 60,000 sites, three environments each, in about three hours with one FTE. So that's a huge savings in efficiency. Sure, a lot of people here had sleepless nights. Yes, exactly. Over the last few vulnerabilities. Exactly. And so we're able to take all those savings in efficiency and pour them right back into the platform. Great. So what does all this have to do with OpenStack? All right. So Pantheon runs, very proudly, on an OpenStack cloud, powered by Rackspace, actually. Okay. So you're in the Rackspace public cloud. Yep. And you're running containers in all those VMs. That's correct. That's really cool. And I know when we talked before, you all are looking at potentially doing some stuff with Ironic in the future as well, right? Yeah, absolutely. We do think the future of the cloud is not EC2 VMs. We think it's bare metal with containers on top of that. Okay. And we've proven it works for us. Very cool. Well, thank you so much for joining us. It's great to have another user come up here who's using a public cloud and doing real work with containers today. All right. Thanks. Thank you for coming. I need that. All right. So we have a feature-packed morning here, and we're going kind of fast, so I hope you can all keep up. I'll try to make it pretty clear as we go along, but we've got a lot of people to come through. So the next person I'm really excited to bring up is Adrian Otto, who's a distinguished architect at Rackspace. He's also the PTL of a brand new OpenStack project this year called Magnum. So come on out, Adrian. Welcome. All right. Thanks, Mark. And if you don't know (though probably many of you do), a PTL is really the leader of an OpenStack project. So we're blessed to have one in our midst here. So what is Magnum exactly, Adrian?
So Magnum is a new open source project, like you said. Okay. And it's for cloud operators to be able to offer containers as a service to their cloud users. I saw a lot of hands go up that were interested in containers. So you might have some people that want to meet you later. So I know in OpenStack, people have been doing things with containers for a while. We heard from a user that's doing stuff now, and we've had some Docker drivers. You know, how is this different from the way containers and Docker have been integrated with OpenStack to date? Sure. So we've had container support in OpenStack for some time. We have the libvirt LXC driver that's been there for a long, long time. More recently, there's been another one called nova-docker, which is for creating containers that we treat like machines through the Nova API. Okay. And that's fine if you just want to use a container as a small little place to put something. But containers can do a whole lot more, and accessing those additional features goes beyond what the Nova API was intended for. Sure. And so we've made a new service where those can fit. Yeah. I think developers are starting to look beyond just machines as a construct and really think about the underlying resources as building blocks. And this is a powerful way that we can start to embrace that. So I know that obviously with open source and OpenStack, building a community of developers is really key. So what have you been doing there? So most open source projects start with a small group building something, and then later a community grows around it. With Magnum, we kind of had the opposite. We started with a community idea, and we had the software after we had a plan on how to do it together as a collaboration. And we've seen pretty impressive numbers: 42 engineers have contributed code to this from 19 different affiliations. There's about 100,000 lines of code there, in about six months. It all happened this year. That's pretty decent.
Our first lines of code dropped in November of 2014. We had the holidays after that, so it was a pretty light period. So all of this action has really been happening this year. Wow, that's great. And so this project, I know, introduces some different concepts. So we have a diagram here you're going to walk us through that hopefully will help us understand some of these new concepts that Magnum brings to the table in this new crazy world of containers. Yeah, so the big difference between Magnum and the way that other container systems arrange your containers is that we have this concept of a bay. And a bay is where the container orchestration system goes. Okay. We have two kinds of these today that we support. One is based on Docker Swarm, which is the prevailing native Docker orchestration system. And then there's one that supports Kubernetes, which is a slightly more advanced, more featureful alternative. And in Kubernetes, you have these arrangements called pods. A pod is like a grouping of containers where all the containers are part of the same application and need to be kind of together as a unit. And so we make the bays, the pods go into the bays, and the containers go into the pods. So as we think about OpenStack as this integration engine, kind of extending that into the world of containers, these are the concepts that help us do that, right? That's right. And these should apply to just about every orchestration system there is for containers. Okay. So our users get the benefit of these exciting technologies that some people may think are competitive or somehow would not be appropriate in an OpenStack community. And that's far from the truth, right? When our users see an interesting technology, they say, let's go experiment with it. You're making that possible with Magnum. Exactly. So I would like to see how it works. Can you show us a demo? Want to demo? All right. All right, let's do it. Speaking of experiments, we're going to do some demos.
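To make the "pod is a grouping of containers" idea concrete, a minimal Kubernetes pod manifest of that era looked roughly like the sketch below. The image names, ports, and log path are illustrative, not from the demo:

```yaml
# Minimal pod manifest: two containers deployed and scheduled as one unit.
# Names, images, and paths are illustrative examples.
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: frontend
    image: nginx
    ports:
    - containerPort: 80
  - name: log-tailer
    image: busybox
    command: ["sh", "-c", "tail -f /var/log/app.log"]
```

Both containers in this manifest are scheduled onto the same node and live or die together, which is exactly the "together as a unit" property Adrian describes.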
All right, Mark. So the first thing I'm going to do is show you that this is actually an OpenStack cloud. What I've got here is a Rackspace OnMetal server. Okay, so this is running on the Rackspace cloud. OpenStack. OnMetal, which is our kind of Ironic-powered cloud server. Okay. I'm running an OpenStack instance on that. Okay. And you'll notice all the same normal things are in here. I've also got a Magnum instance running on here. Okay, so I'll show that to you. We mentioned before the different things that are in Magnum. The interesting one is the bay. And to make a bay, we use something called a bay model. I see. And experiments you can often learn from, even if they don't go as smoothly as you expect. Okay, so pods are what you put... I'm sorry, the bay is what you put the pod in, and then the pod, through something like Kubernetes, is what would actually create the containers. That's right. Is that right? So that's what we would be seeing right here if the network was up. Okay, so, all right, if you actually use Magnum to deploy the bay, it can either be Kubernetes or Docker. That's right. Well, yes, Swarm, Docker Swarm. Sorry, Docker Swarm, which is really kind of their container orchestration engine. So, again, the model here is to build a point of integration that allows tools like Kubernetes or Docker Swarm to do what they do best, and rely on OpenStack for all the stuff that's already solved. Exactly. So we're on now. All right. I've shown you the... I told you, bays come from bay models. Right. We've got two bay models defined in here, one for the Swarm type, one for the Kubernetes type. And these things have all of the attributes that are necessary in order to create the bay. So let's make a bay. This one says, create a bay from this bay model, k8sbay. Okay. From the Kubernetes bay model, we're going to call the bay k8sbay, and it's going to have two nodes in it.
What this is going to do is call out to Heat, which is standard OpenStack stuff, right? So we can look in our OpenStack cloud again, and we'll see, okay, well, there's a Heat stack in there. So the advantage, if you're already an OpenStack user and you're wanting to get started with something like Kubernetes, is that it leverages the tools that you're already used to. It is a much quicker, easier way to get this kind of environment up and running inside of your OpenStack cloud. It's using Heat. It's using the Horizon dashboard. So we're able to see here some of the work that's going on already that you've just kicked off. Exactly. So what we're watching right now, this is a Heat resource group. This is our way in Heat of having a whole bunch of nodes that are all the same. So our Kubernetes minions are all created in this Heat resource group. So is Kube what the cool kids are calling Kubernetes now? Kube. It's so hard to keep up these days. That's it. So, you know, our bay is complete now. Okay, great. So we want to put a pod on our bay, right? All right, yes, let's do it. Let's do that. All right, so now we're operating... We got a bay. Now we get a pod. Yep. So the pod's on the bay now. Okay. And what happens next? This is a web app. The Kubernetes system is going to pick up the pod definition and it's going to deploy that. So Magnum, with the bay concept, helps get Kubernetes set up in an OpenStack-native way. That's right. And then you actually go to the tools within Kubernetes to do what it does best, which is create clusters of Docker containers. Exactly. And you can have Kubernetes bays side by side with Docker bays if you want. So I can create a Docker bay. Again, two nodes for that. That's really cool.
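The "Heat resource group" Adrian mentions is how Heat expresses a set of identical nodes. A minimal sketch of what Magnum's generated template does for the minions might look like this; the resource names, image, and flavor are illustrative, not Magnum's actual template:

```yaml
# Sketch of a Heat resource group of identical Kubernetes minion nodes.
# Names and property values are illustrative assumptions.
resources:
  kube_minions:
    type: OS::Heat::ResourceGroup
    properties:
      count: 2                     # one entry per minion node in the bay
      resource_def:
        type: OS::Nova::Server
        properties:
          name: kube-minion-%index%   # %index% expands to 0, 1, ...
          image: fedora-atomic        # illustrative image name
          flavor: m1.small
```

Scaling the bay then reduces to updating `count` on the group and letting Heat converge the stack.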
And I think, going back to what I was talking about earlier in terms of being agnostic, being able to integrate different technologies: it is true that part of it is nobody knows who's going to win, but it's also about choice. There are probably going to be multiple winners. And enabling our users to have choice is what we've always been about philosophically. I don't know of any other system right now that's really designed in such an abstract way, to be agnostic as to what the container engine is, and I think it's really cool that we didn't try to go build our own version of this. We just integrated with these systems. Exactly. You can see this is a pretty complicated thing. There's a lot of... All these little squares are software setups that are occurring. And then... We're create-complete. So we should be able to verify that through the API. Yep. There we go. So now that we have a new bay here, we're running a Docker Swarm cluster inside of it. Should we put a container on it? Let's do it. So I've got a definition here for a container. It says we're going to ping google.com. So we should be able to see the output of this once this thing goes. This morning we were talking with the gentleman from Google, and I asked him to please not let Google go down during this demo. So... We should hope it's up. You know, you just got to... All right. It's up. Thank you, Sandeep. We'll be hearing from him in a minute. Very, very cool. Well, that's really awesome, Adrian. I think this shows you what's possible in a short period of time with the right approach, an open design approach, and the right leadership. So thank you so much, Adrian. This is very exciting. There will be many other talks where you can learn more from Adrian throughout the week. If you go to your mobile app, search for Adrian Otto and find him. He's all around. He knows all about containers and Magnum. Okay. Thanks, Mark. Thanks, Adrian. All right.
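The flow Adrian walked through maps roughly onto the Magnum command line of that era. This is a best-effort reconstruction from the demo, not a verbatim capture; exact flags varied between early releases, so treat the command shapes as a sketch:

```shell
# Approximate Magnum CLI flow from the demo (Kilo-era client;
# flag spellings are best-effort recollections, not authoritative).

# Bays are created from bay models, which hold the cluster attributes.
magnum baymodel-list

# Create a two-node Kubernetes bay; Magnum drives Heat under the hood.
magnum bay-create --name k8sbay --baymodel kubernetesbaymodel --node-count 2

# Once the bay is ready, schedule a pod onto it from a manifest.
magnum pod-create --manifest ./web-app.yaml --bay k8sbay

# Swarm bays can live side by side with Kubernetes bays.
magnum bay-create --name swarmbay --baymodel swarmbaymodel --node-count 2
```

The key point the demo makes is that every one of these calls goes through one OpenStack-native API, while Kubernetes or Swarm does the actual container orchestration inside the bay.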
Next up, I'm really excited to announce a new initiative that we're launching today. It's called the Community App Catalog. When we think about taking this OpenStack-powered planet, putting tools and applications on top, and doing more with it, being able to run experiments and to empower our developers with a lot of these cutting-edge tools, another way to approach this is through things like Glance images, Heat templates, and Murano packages. And we have a lot of users that are doing this today. We have many users I talk to on a regular basis who are creating Heat templates, for example. And they've never really had a single place to share those. So, you know, as with all things OpenStack, we're trying to build tools that enable sharing. With the Community App Catalog at apps.openstack.org, you can quickly go find a Glance image, a Heat template, or a Murano package to deploy many different applications, whether it's Cloud Foundry, OpenShift, or Kubernetes; even something like Oracle Database is in the catalog today. But this really is the beginning. Knowing this community, I expect there will be hundreds of additional applications in the next few days. And we have some sessions talking about that later this week so you can get involved. So we're really just opening it up today, and the community can all contribute to it. And to actually show us what's possible with this catalog and with Murano, I'm really excited to bring up Craig Peters. All right, so... Okay, thank you, Mark. Craig has been instrumental in helping bring this to market and knows a lot about Murano and how this system works. So, why don't you show us how it works? Fantastic. I'd like to show you kind of the opposite of a black screen demo here. Okay. So, let the demo gods be nice to us. So, I'm going to log in to Horizon. And really quickly, what I'm going to do is just go and take a look at Murano.
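The three artifact types the catalog holds plug straight into tools operators already run. A hedged sketch of what that looks like with the 2015-era command-line clients; all file and package names here are illustrative placeholders:

```shell
# Pulling Community App Catalog artifacts into an existing cloud with
# Kilo-era CLIs. File and package names are illustrative.

# A Glance image downloaded from apps.openstack.org, registered locally:
glance image-create --name my-app-image --disk-format qcow2 \
    --container-format bare --file ./my-app.qcow2

# A Heat template from the catalog, launched as a stack:
heat stack-create my-stack -f ./app-template.yaml

# A Murano package, imported into the local application catalog:
murano package-import ./my-app-package.zip
```

The Horizon flow Craig demos next does the same thing through the dashboard, with the Kilo import-package dialog fetching from the catalog directly.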
What I see here is a set of applications that are available for me to self-deploy into my cloud. So, it's a catalog. I've got Cloud Foundry. I've got Stackato, which is another distribution of Cloud Foundry. I've got a database here and I've got a web server. Really, this is not very rich. I want to find out what the community has shared and what's out there. So, I'm going to go and add a package definition. Okay. So, what is that? Well, the cool thing is now I don't really need to know. In the past, I had to have some real in-depth knowledge about what that is. In Kilo, we've added this import package button. Okay. And in the import package dialog, you can do it the old way or you can choose a repository. And if you choose a repository, what you get is a link to the catalog, which you just introduced. That's great. So, let's hope the catalog's up. Hey, it is. And so, I clicked on catalog. I realized I forgot to undo that the last time I did a run-through. Perfect demo. And you'll see here we've got all of the things you showed on your screenshot there. Sure. The Murano packages are a really rich set of starter apps that we can just get going with, and this is going to explode as everybody contributes. So, the whole idea here is I don't have to go do a Google search for Murano something-or-other. Sure. Everybody can share what they've got going on. Let's take one and show them how you can pull it into your local catalog inside of your OpenStack cloud. What I want to do is a different way of deploying a Docker app, so I'm going to use the Murano way of doing that. I'm just going to copy and paste the package name into Horizon. Clearly, a lot of people here want to start experimenting with Docker, so it's good that we're giving them choices on how to do that. Absolutely. So, isn't containers the theme of today? We're going to do another little container-oriented thing here, just a simple one.
Okay. And I categorize my app here. So, there's already some work going on here. The packages are pulling in? That's exactly right. So, now that's telling Murano about it. If it has any dependencies, say on Glance images or Heat templates, those get pulled down automatically, too. Okay. So, from a user experience perspective, I can now go right back into my application list, and you'll see this thing I brought down and everything it depends on. In this case, it also allows you to deploy it with Docker standalone or with Kubernetes. So, I'm going to do a quick deploy. Okay. I'm going to make a choice of which provider of the container services I want, and I'm going to choose Kubernetes in this case. And I'm going to give it a name and add it to a cluster. So, these are the same kinds of things that are going on in Magnum. We're just setting them up through a UI that was preconfigured to give me all the parameters that are available. It takes a lot of... I think people want to do work with their OpenStack clouds, and that means applications. It means different tools, maybe something like a Cloud Foundry. You want to put that on top of your cloud and get to work, and this takes a lot of the pain out of it. Exactly. So, as soon as I'm done picking and choosing all of the options that Murano is configured to ask me for (and that's a beautiful thing: everybody can configure different kinds of packages for these different applications, and we want to develop a set of best practices for what works for different use cases and share that with the whole community), what we end up with is a sophisticated topology, a cluster of these things. All I have to do is click deploy, and that's going to go provision all the underlying resources in OpenStack and get it going. So, while it's doing that, let's take a quick look at what's going on already in the catalog.
So, you know, this has really happened very quickly, and we've been lucky enough to have participation from a few partners who are early contributors, so I think it's interesting to look and see what that's like. For example, if you go look at Glance images, there's already a significant library of things. Apcera is another PaaS vendor, with a totally alternative way of doing their own container management and multi-cloud management, who've already contributed their stuff. That choice thing again? Always present in the OpenStack world. It's a theme here, and I think what's going to happen is this is going to grow really quickly, and as a community, we're going to have to figure out how to do some categorization. Sure. That's one of the things I think as a community we should talk about: how do we deal with design patterns? What are the best solutions for different kinds of use cases? Yeah, because this is very much a beta, a first cut at this, and Craig's been instrumental in helping to think about how we get this off the ground, but we want everybody's involvement in thinking about how we evolve the app catalog. It's a community project. If you want to add an application, you just go through the standard community process? You're exactly right. It's exactly the standard process. So, you'll see we've got Glance images, we've got Heat templates, and Murano packages of all kinds, including multiple distributions of Cloud Foundry. And if you want to contribute, you can just go back to the add new content link, and that shows that we've done this in exactly the standard OpenStack way. You sign the same OpenStack contributor license agreement, you submit a patch, basically to a YAML file, and it goes through Gerrit for review and approval. Well, the community does the reviews, it gets pushed, it's a peer review process, exactly.
And then it's up there, and the beautiful thing is, because of the way we've done things in Horizon and the integrations in Kilo, it becomes really easy for people who are just getting started with OpenStack, while still providing the flexibility that you need for long-term production. So, you'll see here, Murano has actually gone ahead and set up the Kubernetes service, so it's also setting up monitoring and an automated... Deploy in progress, I think that's a good sign. Oh, absolutely. It means the demo gods are treating us nicely. Yellow is better than red. Yes, we don't want any red. And so, in the end, it'll just give me back the URL for the web server; it's almost up and running. It's just pulling the Docker containers down. There was an option in Murano to point to a Docker registry, but by default it uses Docker Hub. Okay. And there it is, my web server is up and running. All I've got to do is copy... So, ultimately, the Community App Catalog is a community... And it's not up again. It's not up yet. It's a community catalog that is maintained by everybody who's looking to share these different templates and images and packages. It's essentially a repository for all these kinds of artifacts or packages that people want to share. And like I said, I've talked to a lot of users. Kevin Fox, for example, is probably out there somewhere. I just talked with him a couple of weeks ago about how his organization is doing so much work to create Heat templates, and he just really wants to share that back. And I think as you see people with these different use cases and patterns and applications, they're solving those problems and they want to give back. There are a lot of different ways to contribute. You don't have to be a core on one of the projects. If you simply have a cool Heat template that you've built, you now have a place to share it, and people are going to be excited to use it. I think so.
And I think you hit it earlier when you were talking about how the applications are really the key. This is an application- and solution-driven way of viewing how we're leveraging OpenStack, and I think as a community we can build that into a very powerful thing. I was talking to Tim Bell, who's going to come out in a little while, and he said, we have things we need to add to this; it's going to be a way to help us federate the distribution of these kinds of assets across clouds. Superuser Tim Bell. Can't wait to see him in a little bit. Awesome. All right. Well, thank you very much. I think we're calling it or... Well, we can try it again. The problem with the container is, of course, we're doing it all from scratch, and so we had to get it up and running. Oh. It works. It's finally up and running. Yay. Well, like all Apache web server demos, if it doesn't end, then it works. It's a little nerve-racking. Absolutely. Look, I believe in experimentation, so I thought we should put our money where our mouth is today. Live demos. Well, thank you so much, Craig. Thank you, Mark. That was great. I'm really excited that you got to see the Community App Catalog and a little bit of what Murano does as well. As an OpenStack project, it's really gaining traction out there. So the next person I'd like to bring up is Sandeep Parikh from Google. Welcome. Thank you. All right. Well, it's really exciting to have somebody from Google here. Thank you very much. So what are you doing here? So Google's been doing... It's funny running into you here. Right. So Google's been doing container work for years. In fact, we launch two billion containers every week. That's not bad. Yeah. We've got a lot of knowledge around how to run container infrastructure, so what we've done is we've turned that into Kubernetes, and we're here at OpenStack Summit talking about Kubernetes.
And basically, we believe in this vision of Kubernetes running everywhere, whether it's public cloud or private cloud. And from a private cloud perspective, we think OpenStack is definitely leading the charge on how to do that the right way. Good. Well, I think you'll find that we're a very welcoming community. We want everybody to be part of the big tent here in the OpenStack world. So I hope you're having a good time at the summit. Yeah, it's a great time so far. Vancouver's not a bad spot. No, definitely not. Well, I know that one of the promises of Kubernetes that people are excited about is that you can potentially run things across multiple clouds. Is that right? Yeah. So Adrian and I worked together over the last few days and over the weekend on getting Kubernetes deployed in two places: one on Google Container Engine, and the other on the Rackspace cloud. And then we set up what we think is kind of a typical production environment. We've basically got the same web application and same databases in both clusters, and then we've got a front-end load balancer that's routing requests between both of them. Well, since you worked on it with Adrian, why don't we bring him back out? Cool. And Adrian can walk us through a little bit of what you guys have been doing together. Welcome back, Adrian. So I think we've got a couple of slides here to talk about what's going on. There's the Rackspace cloud, which you're probably familiar with, and the Google Cloud Platform. So what's going on on the Rackspace side here, Adrian? On this side, we used a piece of software called Corekube. Okay. It deploys a CoreOS cluster, and on top of that is Kubernetes. So we deployed that on top of the Rackspace open cloud, which is an OpenStack cloud, on our performance flavors. And the application gets deployed onto that Kubernetes cluster that runs on top of the CoreOS cluster. Okay.
So this is the Rackspace production cloud that's running out there as a service, and you guys just used that as part of the demo. And the front end here, this load balancer, is the Rackspace Cloud Load Balancer, which we selected because it can have backend nodes in multiple clouds. It supports remote nodes, so it's actually able to load balance over to the Google cloud as well. Exactly. Cool. And so tell us how it's working on the Google Cloud side. Yeah. So Google actually has Kubernetes as a service running under what we call Google Container Engine. There it's basically running Kubernetes on top of the same Google Compute Engine infrastructure and resources, so it maps back to the same nodes and regions that we've got in the rest of the cloud platform. Cool. So in the demo that you guys built, you're running the same software on top of two very different underlying infrastructures, right? Tell us a little bit about that. Yeah. So we're using the same version of Kubernetes in both places. Okay. And then on top of that, we basically took the same database infrastructure; in this case, we used Galera in a master-master setup across both clusters. And then we deployed the same web application across both clusters. And the nice thing was that we didn't have to change anything. The code that I wrote to get this to work on the Google Container Engine Kubernetes deployment was the same code that Adrian used to deploy into the Rackspace public cloud. So it really was one of those write once, run both places, run anywhere situations. Yeah. We used the same YAML files for Kubernetes. Exactly. We're going to turn it on. Okay. And I believe you guys just did this last week. I called you up: hey, why don't we do something crazy? And they're like, okay. So he wrote a few lines of code, sent it over to you, and it worked in both places. So let's see if the demo gods are going to smile on us. Yes. Let's take a look.
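The write once, run both places point comes down to feeding one unchanged manifest to every cluster. A sketch of that shape with a stubbed apply function; the cluster names, the manifest fields, and the apply stub are all invented for illustration, not the actual kubectl or demo code.

```python
def deploy_everywhere(manifest, clusters, apply_fn):
    """Apply one unchanged manifest to every cluster.

    apply_fn stands in for whatever actually talks to the Kubernetes
    API in each cloud; the point is the manifest is identical on both.
    """
    return {cluster: apply_fn(cluster, manifest) for cluster in clusters}

# Invented manifest and a stub that records what each cluster received.
manifest = {"kind": "ReplicationController", "metadata": {"name": "galera"}}
received = {}
deploy_everywhere(manifest, ["gke", "rackspace"],
                  lambda cluster, m: received.setdefault(cluster, m))
```

After running this, both clusters hold the very same object, which is what lets the front-end load balancer treat the two clouds interchangeably.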
In the experimental world of demos, we're actually going to be doing a third demo during this keynote. Trying to make Mark Shuttleworth proud. All right. So this is the address of the front-end load balancer. I'm just going to refresh a couple of times here, and every time it refreshes, it's going to show us a different server name there. So this is a basic web app, and it's live, and it's routing between the two server environments, one sitting at Google, one sitting at Rackspace, all using Kubernetes underneath. Yeah. So if I take a look at the Kubernetes pods that we've got running, I can just show you a quick one here. I'll show you what the Galera one looks like on the Google side. Okay. And I can show it to you on the Rackspace side as well. So we've got the two terminals, two clouds. That's right. So we've got the exact same infrastructure running in both places. We won't compare these line by line, but it's the same container images, which all live on the Docker registry, and the same YAML files that we use across both clouds. Wow. So what is the benefit of this? What is this going to do for a user? So really it gives us the ability to do this truly hybrid approach, because not everything is great for a public cloud, and not everything is great for a private cloud. There's this mix and match that people want to be able to do, but we want to give them the same infrastructure and do it in as many places as possible. So with this approach, what we can ultimately get to is this notion that if I want to run a highly available deployment, I can run Kubernetes in multiple places, put these front-end load balancers in place, and tie all this stuff together. And if something should go down, I've got the ability to recover from that pretty easily. So why don't we make it go down? So what I'll do over here is... I want to put this to the test.
I'm going to delete the Galera infrastructure on the Google side. Well, we made sure to do this demo after the earlier one, in case he brought all of Google down. That's right. Just in case. He's sudo Google-man here. All right, so you're actually taking down... Taking down the Galera nodes right now. Okay. So those are down, and then I'm just going to run a quick little update on my load balancer scripts here. What that's going to do is take the down node out of the load balancer, because we needed to turn on health checking for the demo. And once that is finished running... There we go. All right, there. So it's disabled. So now if we hit this load balancer again and again, we're only going to see it from... Okay. Oh. We'll switch away from that one for a second. So... All right. So we're still in the bottom left quadrant. Yes. We're still working out a few kinks, but the net of it is... it does actually work really well, because I did do this last night in the hotel room. All right. Well, I will just end it by saying I think we're on the verge of a breakthrough. That's right. And look, I think this is really exciting. We've got Google and Rackspace working together, Kubernetes helps make this happen, and the way that we're embracing technologies that people are interested in is pretty exciting. So thanks for coming out, guys. I really appreciate it. Thank you very much, Mark. All right. All right. Well, that wasn't a bad... Oh, what is that? That was not a bad ratio of success. So I just wanted to end on one final point to leave you with, which is... OpenStack really holds the room together, man.
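The failover behavior in that last demo, requests rotating across both clouds and then flowing only to healthy backends once health checking kicks in, can be modeled in a few lines. A minimal sketch; the backend names stand in for the Google and Rackspace clusters and are invented, and the real Rackspace Cloud Load Balancer works very differently under the hood.

```python
import itertools

class RoundRobinBalancer:
    """Tiny model of the demo's front-end load balancer: rotate
    requests across backends, skipping any that fail a health check."""

    def __init__(self, backends, health_check):
        self.backends = list(backends)
        self.health_check = health_check
        self._cycle = itertools.cycle(self.backends)

    def route(self):
        # Try each backend at most once per request.
        for _ in range(len(self.backends)):
            backend = next(self._cycle)
            if self.health_check(backend):
                return backend
        raise RuntimeError("no healthy backends")

# Invented backend names standing in for the two clusters.
up = {"web-gke": True, "web-rackspace": True}
lb = RoundRobinBalancer(["web-gke", "web-rackspace"], lambda b: up[b])
```

While both backends are up, successive `route()` calls alternate between them, like the refreshes showing a different server name each time; flip `up["web-gke"] = False` and every request lands on the Rackspace backend, which is the recovery the demo was aiming to show.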