OK, let's get this started. It's a gigantic room, so don't hesitate to move to the front. Don't be shy. Not everyone at the same time. This is the last talk of the day, so I'll try to make it brief and entertaining. So hello, everyone. Thanks for coming to this very late talk, the last talk of Wednesday, day three. My name is Thierry Carrez, I work for the OpenStack Foundation. And today I want to talk to you about what makes OpenStack relevant in a containers world. As part of the introduction, I want to talk about the confusion we're setting out to clear here, and introduce a cast of characters that will help us through this discussion. Every now and then, a new technology appears. And as it appears, it creates confusion as people try to wrap their heads and strategies around it. It was OpenStack coming to the mainstream in 2011, containers in 2014, Kubernetes in 2016. And every single time, this confusion arises. Every single time, the new technology is said to replace everything else, obviously, because it's so much more convenient to think that a single technology can solve all of the world's problems. So the rise of containers and container orchestration systems created a lot of confusion, especially with respect to OpenStack, which was the previous hot technology. If containers are replacing VMs, and OpenStack is VM-centric, does that mean that OpenStack is not relevant anymore? Why on Earth would I deploy OpenStack if all I want to do is cloud-native stuff? And for people who actually understood that those were different technologies, there was still confusion as to whether OpenStack runs containers, or runs on containers. There's mixed messaging around that. So lots of confusion. And these are not theoretical questions. These are questions I hear all the time, especially at container-oriented conferences, where people ask, well, why is there an OpenStack Foundation booth here? What are you guys doing? Aren't you dead already?
And so I receive those questions all the time, and I try to put into words the usual explanation I give, to clarify it for a lot of people. To help us through this, I want to introduce a set of personas. Personas are a tool used in user experience studies. You basically put a face and a profile on a typical person. You describe his life a bit, and that helps explain the value for each stakeholder and why they would care about one thing or another. So I want to introduce three slightly overlapping personas. The first one, let's call him Dinesh, is the application developer. Dinesh writes the applications that run your business. And he cares mostly about speed: speed to design, speed to develop, speed to deploy, speed to market. He also likes to use the latest tools, not only because it makes him more efficient, but also because it keeps him on the edge and ever relevant on the job market. So that means today Dinesh is looking into 12-factor applications and serverless technologies. Dinesh doesn't want to care too much about infrastructure. He doesn't want to deal with deploying servers. And he doesn't want any difference between his development environment, his test environment, and his production environment to introduce interesting bugs in his applications. And finally, Dinesh does not obsess too much about cost or lock-in. He loves to use AWS, finds it very convenient, and is happy with it. The second persona I want to introduce, let's call him Bertram. Bertram is the application operator. Bertram handles the deployment, monitoring, and scaling of the apps that Dinesh writes. Obviously, those are slightly overlapping personas. In some companies, the Dineshes and Bertrams share the same desks and offices. But in a lot of companies, those are slightly different roles. And we'll see that they have slightly different priorities. Bertram cares a lot about performance and reliability, more than he cares about speed.
He is the one that is on call, so he wants solid and proven tools. He doesn't want to be called at 10 PM on a Saturday because everything caught fire in production. Bertram does not want to micromanage infrastructure, but he still kind of wants to look under the hood to understand how it works. He wants to understand enough of it to be able to select the right technology. And finally, Bertram is concerned about lock-in, because he likes to pick the best technical tool, and being locked in kind of reduces his options. Our third persona, let's call him Erlich, is the infrastructure provider. Because even in serverless, someone has to rack servers, and that's Erlich's job. Erlich could be operating public cloud infrastructure, offering infrastructure resources to anyone around the world with a credit card, or he could be operating private cloud infrastructure, offering infrastructure services to people within a given organization. That doesn't change his role that much. Erlich doesn't want to care too much about specific workloads. He really wants to provide generic programmable infrastructure for Dinesh and Bertram to be able to do their jobs. He cares mostly about cost; that would be his primary metric. He also cares about evolution, the ability to change his systems in a way that keeps them relevant for whoever comes next: for the Bertrams and Dineshes of today, but also the Bertrams and Dineshes of tomorrow. So that's it with the cast. Those are our three personas. Now I want to introduce the various technologies. What are containers? Containers are, at the base, a packaging format. It's a convenient way to package your application together with its libraries and dependencies. It's also pretty nifty deployment tooling, a very convenient way to deploy those applications in relatively isolated environments.
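To make that packaging idea concrete, here is a minimal Dockerfile sketch. This is not from the talk: the base image, file names, and commands are illustrative assumptions.

```dockerfile
# Minimal sketch of a container image build: the application is bundled
# with its libraries and dependencies into one portable artifact.
FROM python:3.11-slim

WORKDIR /app

# Install the application's dependencies inside the image,
# independent of whatever the host operating system provides.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the application code itself.
COPY . .

# The same image then runs identically in development, test, and production.
CMD ["python", "app.py"]
```

You would build it once with something like docker build -t myapp . and then run it anywhere with docker run myapp, which is the "convenient deployment tooling" part.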
What Docker really did was bundle those namespace and control group kernel technologies together with that convenient tooling, making containers really accessible to everyone. And with the success of Docker, you've also seen the rise of application marketplaces, as more and more companies publish their applications in containerized formats. So if you think in Debian terms, it would be the combination of the deb packaging format, plus the apt tooling for deployment, plus the distribution repositories you can basically draw packages from. And as such, containers are extremely appealing to Dinesh, because they allow him to package his applications together with all their dependencies and libraries. It shields his applications from the operating system's intricacies. It guarantees that he can run the application in development, test, and production without introducing crazy bugs, thanks to the isolation that the container technology provides. Kubernetes now, what is Kubernetes? Well, it's one abstraction level up from containers. It's a way to describe your application using groups of containers, the role they have in the application, and how to scale them, and to have it deployed and maintained semi-automatically. So in a way, it's a deployment platform for containerized applications. What makes it great is that it captures operational best practices out of Google's experience and embeds them in the way you have to describe those resources. And finally, it's also pretty good at managing application lifecycle and scaling. So you can scale up and down based on demand, but you can also handle things like rolling upgrades to introduce a new version of your application. And as such, Kubernetes is really appealing to Bertram. Bertram loves Kubernetes because it encapsulates operational best practices, and it's also pretty solid. It's open source, so you can look under the hood.
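The kind of declarative application description he's referring to can be sketched as a Kubernetes Deployment. This is a hedged example, not from the talk: the names, image, and port are illustrative.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                # illustrative application name
spec:
  replicas: 3                # scaling: change this number to scale up or down
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate      # rolling upgrades replace pods gradually,
    rollingUpdate:           # keeping the application available throughout
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: web
        image: myapp:1.0     # the container image the developer packaged
        ports:
        - containerPort: 8080
```

Kubernetes then continuously reconciles the cluster toward that described state, which is where the encapsulated operational best practices come in.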
It can be run on public or private clouds, so you can basically be free from lock-in. So that's pretty much the tool for Bertram. So containers are great for Dinesh. Kubernetes is great for Bertram. Nothing is great for Erlich. What does Erlich want? Erlich wants to provide programmable infrastructure for the Dineshes and the Bertrams of the world. At that point, he has two choices. He can go with specific infrastructure, or he can go with open infrastructure. The specific infrastructure choice is if Erlich is absolutely sure that everything Dinesh and Bertram will ever want is containers and a container orchestration system, like containers and Kubernetes. If they are that set, and they will use that forever, then there is little value in deploying it on top of OpenStack resources. He could go and deploy a Kubernetes cluster directly onto his bare metal servers. Or he could opt for open infrastructure. What is open infrastructure? Open infrastructure is when you want options. You want to give Bertram and Dinesh access to containers and container orchestration systems, but you also want to give them access to VMs, to bare metal machines, to Mesos clusters, to Docker Swarm clusters. And you want to provide those options with shared networking and storage. If they want to combine VMs, containers, and other things, those have to be able to communicate, and to store and access data. You also want to provide advanced services like object storage or database-as-a-service, because that makes Dinesh more efficient, as he doesn't have to reinvent object storage, and it makes Bertram more efficient, because he doesn't have to micromanage databases in the environment. You want multi-tenancy, because you want to be able to properly isolate the various people using your system and properly account for how much resource they are actually using. You want interoperability, so that they are not locked in with a given provider.
Beyond interoperability, you want bursting. For those extra days in the month when you have that extra need, you want to be able to burst your capacity to a public cloud that can absorb it. You want scaling to millions of CPU cores. You want seamless operations, so you like things like common log file formats or common configuration file formats. And beyond that, you want whatever comes next. You want the framework you deploy to be able to integrate the next technology that Dinesh and Bertram will want tomorrow. You don't want to reinvent and reinstall everything for the next technology that comes up. And that's basically what OpenStack provides. Some people equated OpenStack with VMs, but it was always more than that. The OpenStack goal is to give the infrastructure provider a way to answer the needs of the application developer and the application operator. So you want programmable infrastructure: VMs, bare metal machines, containers, container orchestration engines, not just Kubernetes, but also support for the others. You want open infrastructure, the ability to plug in additional services as they become needed within your environment. You want interoperable infrastructure. You want compatible clouds that you can burst to. You want future-proof infrastructure. You want the promise that the framework you're deploying today will still be relevant tomorrow, when that new technology that nobody knows about yet needs to be integrated. Because, make no mistake, there will be something else. Kubernetes and containers are not the end of infrastructure technology. Every five years, there is a new thing. And the idea is that OpenStack is more of an integration engine that will be able to reuse the same framework to capture that next technology tomorrow. Let's go into practical examples. How am I doing with time? Not too bad.
So what type of OpenStack projects would you deploy to answer various use cases? The first use case is the case of raw resources. Those squares at the bottom are supposed to be servers. Some font issue here. Okay, raw resources. So you basically want to provide access to raw resources. You get VMs or bare metal machines, you deploy Kubernetes on that, and then you can deploy your containerized applications. To do that, you would deploy that kind of stack: Keystone for authentication, Cinder for block storage, Neutron for networking, Glance to store the disk images, and Nova to provide the VMs that you would get those basic resources out of. If you want bare metal machines, then you also deploy Ironic to drive access to those bare metal machines. So that's the most basic use case. You just provide raw infrastructure, and it's someone else's job to deploy Kubernetes on top of it. If you want to directly get a Kubernetes cluster, that's another use case. That's more of a container-orchestration-engine-as-a-service thing. You just want Kubernetes directly, without having to deploy it yourself. And in that case, what you would deploy is the same stack with two more projects, Magnum and Heat: Heat for orchestration, and Magnum to provide this container-orchestration-engine-as-a-service system. But at that point, you might say, well, I just want to run a container. Why are you deploying this whole Kubernetes cluster for me? There is this thing on Docker Hub, and I just want to run it. How do I do that? Do I have to instantiate a VM, then install Docker on it, and then run whatever command on that? Well, we have a solution for that. What you basically want is OpenStack to just absorb your container and run it. To do that, we have a project called Zun. Zun lets you run any container, and will provision a bare metal machine through Nova and Ironic to run it for you, and you can kill it when you're done.
It's really as simple as zun create plus the name of the container on Docker Hub. One thing to note here is that all those options are backed with shared networking and storage. So Kuryr bridges Neutron networking features to containers, letting them access the same kind of networks that are accessible to VMs. We also have native Cinder volume support in Kubernetes, so you can mount block storage directly in Kubernetes pods. So you can basically rely on having common, shared networking and storage resources to back all those solutions. The last one I wanted to mention, because we had keynotes on Tuesday explaining that it would be great if we pushed more for individual OpenStack projects to be reused in other stacks. If what you want to deploy is just Kubernetes — but Kubernetes also needs identity management, it also needs access to block storage, it also needs access to networking — then you might want to leverage all the drivers and plugins that we develop within the OpenStack community and give Kubernetes access to them. So how do you make sure that Kubernetes doesn't reinvent the wheel, and how do you leverage those projects to provide that functionality? There is a project called Stackube that is emerging right now, and it's being proposed for OpenStack inclusion. It basically bridges between Kubernetes and those three projects by providing plugins for Keystone, Cinder, and Neutron. And one other benefit is that it's a truly multi-tenant Kubernetes installation. It uses hyper.sh technology to properly isolate the various tenants from each other. So it's basically a multi-tenant Kubernetes distribution that reuses a number of OpenStack components. It used to be called something else. Hypernetes, thank you. Okay, so now that you get where every technology fits and how you can run containers on OpenStack, let's change everything. Why do people say that we can run OpenStack on Kubernetes, then? That's different.
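As an aside, the native Cinder volume support mentioned above looked roughly like this in a pod spec. This is a hedged sketch using the historical in-tree Cinder volume plugin; the pod name, image, and volume ID are placeholders, not from the talk.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cinder-demo           # illustrative pod name
spec:
  containers:
  - name: app
    image: myapp:1.0          # illustrative image
    volumeMounts:
    - name: data
      mountPath: /data        # the Cinder block device appears here
  volumes:
  - name: data
    cinder:                   # in-tree Cinder volume plugin
      volumeID: "<cinder-volume-uuid>"  # placeholder for a real volume UUID
      fsType: ext4
```

In current Kubernetes this in-tree plugin has been superseded by the Cinder CSI driver, but the idea is the same: pods mount OpenStack block storage directly.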
And this inception is what I want to talk about. You have to realize at this point that OpenStack is a complex application. It's lots of scale-out microservices; you just add additional nodes to cope with the load. The deployment is complex because you have all those different moving parts. The upgrade is difficult because you don't want to cut user access to the resources while you are upgrading the system. And so the idea of a deployment substrate to handle that complexity of deployment and upgrade, that orchestration of the OpenStack application, is not a new one. We've been exploring it, especially running OpenStack on top of OpenStack with the TripleO project. We would run an OpenStack undercloud and use it to deploy the rest, the user-accessible OpenStack overcloud instance. Now if we get back to what containers are — a packaging format, convenient deployment tooling — well, it sounds like they could be useful to deploy OpenStack. We could use containers as a packaging format rather than relying on distro packages. We could use that convenient deployment tooling to simplify the deployment of OpenStack. We could publish those OpenStack components in containerized format. And we have a number of projects exploring that space. OpenStack-Ansible deploys OpenStack in fat, OS-like containers using Ansible. Kolla — the original Kolla, the one that is using Ansible — deploys OpenStack in Docker containers using Ansible. Now if we get back to what Kubernetes is — a deployment platform for containerized apps that encapsulates operational best practices, and also manages application lifecycle and scaling — well, it sounds like it could be useful to deploy, upgrade, and maintain that OpenStack application. Especially to simplify scaling, to simplify rolling upgrades, you could run OpenStack on a Kubernetes substrate.
And it's something that a number of projects are exploring. So Kolla-Kubernetes is a variant in the Kolla family, an OpenStack deployment framework using Docker containers deployed onto a Kubernetes substrate. There is also the OpenStack-Helm project, an unofficial OpenStack project that produces a collection of OpenStack Helm charts that you can deploy with the Helm client onto a Kubernetes substrate. So, two slightly different approaches to the same problem, which is leveraging Kubernetes to actually deploy the OpenStack application. Okay, in summary: containers are a packaging format with nifty tooling, answering the needs of application developers. Kubernetes is a best-practice application deployment system, answering the needs of application operators. OpenStack is an open infrastructure framework enabling all sorts of infrastructure solutions, answering the needs of infrastructure providers. Containers can be run on OpenStack-provided infrastructure, allowing them to share networking and storage with other types of compute resources in rich environments. Kubernetes clusters can be deployed manually, or through a provisioning API, on OpenStack resources, giving their pods the same benefits of shared infrastructure. And finally, operators of OpenStack can leverage container and Kubernetes technologies to facilitate their deployment and management of OpenStack itself. In conclusion, those are different, complementary technologies. Thank you. And we have plenty of time for questions. The previous slide? Sure, I'll post it on my Twitter feed. No questions? Well, thank you for your attention, and have a great day with OpenStack.