It's nine o'clock. Good morning, everybody. It's 9 AM, day four. Anybody still have energy left? Raise your hand if you have energy left. All right, great. Cool. Thanks for joining me this morning. My name is Lars Herrmann. I work at Red Hat, and I'm responsible for the integrated solutions business unit, where we put together our products, but also our partner products, to create solutions from open source components. Prior to that, I was responsible for RHEL. Any of you using RHEL 7 anywhere in your environment? Raise your hand. Keep your hand up if you like it. All right, cool. Thanks. I was also driving the container strategy. And I'm really here today to talk about containerization with OpenStack. And I'd like to open this up with a little bit of context. Containers meet OpenStack: there's a lot of excitement around containers. We've seen in the survey that the number one technology people are looking at in the OpenStack context, the one they are most interested in, is containerization. We have seen a lot of hype in the market for the last two years, and frankly, a lot of it is very confusing. Are containers going to replace virtual machines? Is Kubernetes going to replace OpenStack? Are we using containerization to install OpenStack? Or are we using containers on top of OpenStack? What are we going to do with it? So in this session, I want to give you some insights as to how we at Red Hat believe containers and OpenStack come together to enable successful clouds. What I'm going to do is more of an architectural view. I'm not going to show demos or code here, so bear with me. It's really more of a higher-level conversation. But I want to share with you what we learned from our customers, from our partners, from working in the communities, and how we believe these technologies come together to deliver a lot more value in enabling success in the enterprise with cloud. Now that said, let's go back to why we built clouds in the first place.
We have seen quite a number of cases where organizations invested into cloud and spent a lot of time building private cloud. And then it was done. And then they waited for the users to come and start using it. And in some cases, that just didn't happen. We saw it in one of the keynotes on, I think, Monday, time to do something went from 42 days to 44 to 42, something like that. So in order to drive a successful cloud, we have to go back to why do we do cloud in the first place? And what we see big time in all industries is we go through this digital transformation as an industry where every industry increasingly delivers their value, their competitive differentiation, their customer engagement through IT-based solutions. We all use mobile phones. And we use a lot of services ourselves. Between enterprises, transactions get handled through IT. We engage with customers. We communicate. We produce media. We communicate through technology. So in any industry, serving customers and therefore winning against the competition is a matter of being able to do IT. And it's all about defending against the disruptors from the side. You've all heard the examples, Uber, don't get Ubered, Airbnb, Facebook, Google, Apple. These are technology companies who are entering traditional industries. We're here at the OpenStack Summit and a lot of talk about telco NFV. What is NFV? It is an industry transforming itself from a somewhat static proprietary hardware appliance world into a software-defined fabric. Why? To be more agile, to be more nimble, to be more cost-effective, to compete. And we see this in every single industry. At the same time, as you see at the bottom of this box, the infrastructure that we already have is running a lot of workloads. And just because we now do cloud and we do digital and we find new ways of working, doesn't mean that we can shut off the old stuff. That's still there. 
And in most organizations, this is where a lot of the resources, a lot of the time, is spent, just by the nature of it. We have all these things. So there is an opportunity here of using the same technologies that we use to enable the digital transformation to also, at the same time, modernize the infrastructure that we have, so we free resources, we become better overall, and we drive some degree of consistency between these worlds. This is an interesting survey from the Harvard Business Review that basically asks CIOs, what are your big-ticket items for the next three years? And if you look at that list, it's very consistent with what I just said. Number one, drive business innovation through IT. But number two, security and risk. So this is still important. And we have to bring the domain of security into this paradigm of elastic distributed computing, because otherwise we might sacrifice the value we create for our customers if we lose their trust, because they don't think they can hand us their data. And then we know this is a transformation. So we have to establish the transformation in the architecture. So that's what I'm here to explain: to talk more about this architecture that we need to think about and how these technologies fit into it. Now, we also heard in the keynotes that this transformation is not just a matter of technology. In fact, technology is really only the enabler. We've seen this with many of our customers, where technology is a necessary enabler for a transformation. But then, really, you have to figure out how you use the technology. Because if all you do is put some technology in place but you keep operating the way you have been operating, you're probably not going to gain much. And let me give you one example. We know that in enterprise IT, where we work at a certain scale and complexity, we as an industry have adopted best practices to manage what's happening.
And one of the key fundamental ideas behind many of these best practices in the last 20 years has been: let's get very organized around managing risk. And the approach to managing risk was, we manage change. In many ways, we limit change, or we want to control who can make a change. The whole concept of ITIL is built around that idea: we manage who can change what, and how we can control these changes. Now, if you take these processes and you try to apply them in a cloud world, you're not going to get much benefit. Because you still depend on humans making decisions while the technology is waiting and dragging its feet. So there's a process element, which then leads to organizational pressure. We see many of our customers starting to not only change the process to align it with the technology, but also starting to change the organizational structure. So if you think about building successful clouds, anticipate that you will also have to change your organizational structure. Who is responsible for what? You will give autonomy and power to some people who didn't have it before. And you will have to find better ways for the people who are in control today to stay in control. And ultimately, it's about a different culture and mindset. And that's a major shift. And I can tell you, from watching this in many organizations, even inside Red Hat: change is always hard. Cultural change is by far the hardest. So the technology should enable all that. And if you have a project that doesn't go well, guess what gets the blame. It's not the culture. It's not the process. It's always the technology. Anybody ever heard any criticism about OpenStack being complicated? All right, I wanted to see your hands. Next time, show me your hands. So if we step back into the shoes of the CIO: for the last 20 years, how were they measured? What's the CIO's role in the organization? He's there to run a bunch of services, which, let's be honest, weren't always on.
And he's there to basically save money. The pressure CIOs were under for the last 10 years was all about save costs, save costs, save costs; be more efficient at what you do. And we've responded to that with automation, standardization, commodity hardware, scale-out, lots of interesting things. Now, we're in 2016, and this picture has radically changed. It's all about customer satisfaction. IT has become the business. Therefore, what's most important is satisfying our customers. I'll explain a bit what that means. The business demands agility. We need to move fast. There's a competitive threat here. There is an opportunity there. We want to go after it. We want to launch a new application. We want to change an application. We want to absorb a new feature. We want to consume a technology that comes from somewhere. We need to be able to do that fast. And to be honest, this fast, again, is not just a matter of technology. At the same time, though, we engage our customers through this technology. So we have to be confident in what we deliver. We need to make sure it's actually working. It's running. It's performing. So let me dive into each of these three a little bit to explain what they are. If we look at the agility dimension, the biggest enemy of agility in enterprise IT has nothing to do with technology. It's wait time, where one person waits on something to happen. Has any of you ever waited for something in IT? Right. Now, why are we waiting? That's an interesting question. We've adopted best practices from other industries. What you see here is a fairly recent picture of a car manufacturing plant, where we all believe the assembly line was invented, where we then had the opportunity to specialize certain roles. This guy is working on an engine. This guy is putting the window into the door, et cetera. You name it. And we've embraced that same model in IT. So we've built organizations around specialized skill sets.
There's a networking guy, a storage guy, a virtualization guy, your DBA. And you see in this picture, the DBA is actually sick today, so he's not here. Guess what happens with the work that lands on his desk? It ends up here: the request queue. So one of the ideas of cloud is to eliminate the request queue, to enable self-service. But how do we do this if, at the same time, we still have all these specialized skill sets that excel at what they do, but now we have to coordinate every single request, every single change we want to make, across multiple people, because people are not instantaneously available? And to make it worse, people, if taken out of context, might make mistakes. So one bad thing about request queues is you might get your request fulfilled, but not the way you intended it, so the error rate is also a problem. Putting all this together: if we talk about agility, we of course need automation. We need self-service. But we also have to come up with a way to eliminate the wait time that is incurred on us because of the organizational structure and the specialization. And the good news is we can do that. If we look at customer satisfaction, there's a lot to customer satisfaction, obviously. It starts with, is your product compelling in the first place? But with a technology-driven value and service, it quickly comes down to, is it performing? It goes back to the same thing. If you are doing something on your phone and it takes 10 seconds to load, you might not have the patience to wait for that. You just go somewhere else. And that's true for everything and everybody. We talk about features, obviously: new capabilities added, new exciting things, responding to competitor moves. And then, increasingly, technology is also not just defined by, it's there, I can use it. It's the experience that matters. Is it easy? Is it intuitive? Is it pleasant? Is it emotionally engaging?
So all these things require constant tweaking, constant tuning. You want to experiment. You want to find out what your users or customers like the most. So you want the ability to be very agile and nimble in that category. If we go into this confidence domain that I described, it looks very familiar to all of us. It has always been pretty much the same, except it hasn't. Availability, of course: our services need to be available. No doubt about it. Downtime is the enemy of customer satisfaction. No doubt about it. Performance is super important. And performance is something that's difficult to be confident about if you have volatile workloads. If you have this many users now and you have a lot more users tomorrow, and then you have tons of applications, and you sort of manage resource capacity against this, performance becomes a real challenge. And we use technology for this. And we need more technology, to be honest. And then there's security. Confidence is also very much where the security domain lives, because the last thing we want is to lose the trust of our customers because of some form of security exposure. And we've seen, for example, when we all had to change all our passwords two years ago when Heartbleed came out, that it actually led to a drop in usage of some of these services, because people just didn't bother. I just don't use that anymore. I don't trust these guys with my data. Now, in order to do all this, though, we need technology. And we've introduced this in cloud to actually be able to share resources. We talk about elasticity. And we talk about scale. And that is very important. If you now combine all these things, you look at this fairly challenging beast. And that's one reason why OpenStack can be seen as complicated. If we bring it into more of an attribute view: what does the infrastructure have to do in order to enable the business? Because that's what a successful cloud does.
We need to find a way to share standardized resources. We need to introduce specialization in a new way. And we have to abstract certain layers of the technology from each other, so we enable groups and individuals inside the organization to act autonomously, to not have to wait in a request queue. That's what the technology has to do for us. We need a high degree of self-service and automation; obviously, again, eliminate wait time, reduce the amount of work that's involved with any task. And we need the machine to control many aspects of the total solution. And typically, we describe this as: it's automated, it's managed, it's policy-based, it's always on, it's utility. Give it any name you want. Now let's look a bit at the technology. If we look at a very simple stack, the stack that's running right here on my laptop, what really matters to the business is the application. That's what we really care about. Content is king. And in order to run any application, we need some capacity underneath. We need some hardware. There might be some people out there who believe that just because we do cloud now, hardware is no longer important, but every application needs some silicon, copper, and fiber underneath to work. So hardware is obviously always there. And as an industry, we've settled on a stack where we put something in the middle, that thing in between that we used to call the operating system, which does a number of things. It enables the hardware, but it somewhat abstracts from it, and it also manages the application up top. So let's bring this into the cloud domain. If we think of cloud, it's still that same thing. There is hardware underneath. We call it infrastructure, because now it is a distributed system. We can scale this across however many nodes, networks, and storage resources we want to use. So that forms the infrastructure. And that takes the role of the hardware on this little laptop.
The operating system now is no longer just an instance of Linux or Windows or Unix or whatever. It now becomes a more complex thing that actually manages resources and services, but also provides runtimes to the applications and allows us to share across that distributed environment. That's what the operating system does. Now, if you look at it from a technology standpoint, OpenStack certainly does that. OpenStack lives in the infrastructure. OpenStack also provides some of that operating system. And then we have the applications on top, which typically are composed of the app code itself, plus whatever dependencies they might have: system dependencies, runtime dependencies, languages, modules, frameworks, you name it, and then some other resources, like config or other static data the application needs in order to deliver. And with all of this, of course, perspectives matter. If you look at this stack from an application point of view, you look from the top down. You care about your application. You care, obviously, about that operating system thing in the middle, because it constrains you. Typically, that's the place where you are limited in what you can or cannot do. And then there's this infrastructure thing underneath, which you really don't care about. As the app guy, this is just there. It's utility. I need it to be there. I want it to be there, but I really don't care how it works. If you look at it more from an infrastructure point of view, it's the exact opposite. You care about this infrastructure thing more than anything. And when we build OpenStack clouds, a lot of our problems live in that world: getting it going, getting it running, getting it updated, getting it to scale, getting the whole system to work, keeping it available. That's an infrastructure thing. That's what we worry about if we're more in the infrastructure ops scenario. And you quickly extend into this operating system, because a lot of your challenges are up there.
If you think about runtimes, security fixes, security isolation between tenants, between applications, that's all stuff that happens in the operating system. You care about this. And then you get to the applications, which you don't really know a lot about, because there are so many of them. So these perspectives lead to different needs. And this is something that's very important to keep in mind. If you want to build a successful cloud and you want to use it to transform the IT function in an organization, you have to cater to the needs of the people doing it. And if you look on the left side, that's more the application-centric guy: developer, application owner, line of business, call it whatever you want. To them, it's all about getting instant access to stuff, no wait time, self-service. It's about resiliency and scale that you just take for granted. But you want to use it. You want to have a way to annotate: OK, I want this application, and if more users are coming, I just want the infrastructure to magically scale more capacity to it. And that's possible. But you don't care how that works. You just want to be told, do it this way, and you'll do it that way. And then you want a lot of control, because you are the one who is responsible for what happens to these users out there. So you need to be in control. You want to be able to respond quickly to something. You also want to know how things are going. You need to know: am I performing? Am I able to push out a new version three times a day, an hour, a minute? So you want to know if these things are happening, and if they're happening well. And you want as much independence as you can get from other people, but also from environments. That's why we have seen, in the last few years, many of these development projects around cloud-native applications starting in the public cloud, because that gave them independence. The public cloud guy doesn't mind what I'm doing as long as I swipe my credit card.
And in enterprise IT, it often is not as smooth. So we've seen that. And of course, I want a high degree of automation. Now flip over to the right side of that. There's your ops, infrastructure-centric view. And it's not completely different, but it's different enough to be important. The most important thing for that persona is that the operational model needs to be defined and needs to be robust. From an ops perspective, nothing is worse than when something bad happens and you don't know what to do. Because then you get the blame and you have no way to escape. From an ops side also, you host many applications. You serve many users, many teams. So utilization is something you worry about, because utilization very directly translates to cost, and cost is always important. It's maybe not the most important thing, but it is important. And you get into security and compliance, which again is a horizontal domain. There is no way around it: if you're in a financial institution and you are regulated, with all the fine-grained regulation that we have out there, then you cannot say, oh, for this application, that doesn't apply, because that application is important or cool or agile or whatever. Doesn't matter. Compliance applies to everything you do, and so do the security expectations. So you get the idea. The left side is very focused on what they own, and guess what? The right side is very focused on what they own. And that's actually one of the key insights in building successful clouds: being very conscious of who owns what. Now, let's bring it into more of a technology view. We have been building OpenStack for a number of years now, and we've come a long way, as we've seen here. Customer momentum, partner momentum, becoming the standard for building clouds. But the whole technology stack is fundamentally built around the concept of virtual machines.
And a lot of this container discussion that we see in the industry is about containers versus virtual machines: which is better? And there are certainly things that are amazing about virtualization. That's why we use it. It abstracts from the hardware. It gives a control point to people. They can do lots of things. But then, as it says here on the slide, there is a catch. And the catch is that a virtual machine combines many technologies in one single deliverable. You have a guest operating system in there, which typically tends to consume a lot of changes: security fixes, features. You have runtime dependencies in there, like your Python, your Java VM, you name it, whatever language or environment you have to support. And these things, what do they have? Again: versions, security fixes, bug fixes. So these things change. And as you go up, you get more into the ops view: you get to configuration, you get to agents for various purposes, and you get to automation engines, like Puppet, for example. And also from the app side, you have application code going in there. You have other frameworks going in there. So what you see on this chart is a picture of a normal VM as we build it: multiple people drive changes into this one thing. And this is what the problem is with virtual machines. It's not so much that they might consume more memory than a container, or that they might take a couple of seconds longer to start. That's an optimization question, but it's not the most important thing. The most important aspect is the organizational impact: I need multiple people to make decisions about what eventually becomes one thing. And then that one thing is what I'm managing. And if anything happens, I go back to having to coordinate across people. Now, that's a solvable problem, and we have solved it. We've solved it by using technologies to dynamically compose and aggregate these packages, using Puppet or Chef or Ansible. Anyone in the room using these things?
Hands, all right. That's not all of you; that's surprising. But still, that creates a new problem. Because if you now bring it to the level of cloud scale, you have all that complexity in every single VM. There are some that are simple, but not all of them are. Many of them are different, and you have the same thing in the infrastructure underneath. And what do we call this in the industry? VM sprawl. And that obviously is not a good thing, as we all know. But VM sprawl is really not so much about data or storage. It's about the organizational impact of having that complexity. And here's really the catch. As humans, we tend to not deal very well with complexity. I found this amazing quote, which tells us something about the software industry. But what's really happening is, the way we want to deal with complexity is to break it into parts and pieces. So we go back into this specialization view that I described, and we've seen what that introduces. It reintroduces wait time. So there is a risk once we reach a certain scale with virtual machines. You can build amazing applications with virtual machines. You can build many of them. But once you hit a certain number of them, and a certain mix of old and new, slow-moving and fast-moving, with lots of people interacting with it, it becomes super, super complicated. How do we respond to that? We introduce standardization. And standardization is amazing. We tell people, only use these components. So not every VM is completely different from every other one. But that comes with a catch, because standardization limits flexibility. And as you see in this automobile industry example, there was a time when cars were all black. In software, that could mean a policy like, you're only allowed to write Java applications. Has anyone ever seen a policy like that? A lot of enterprises function this way. And that, of course, is not ideal for enabling innovation. So we need something better.
Now, here I'm finally coming to one of my key points: containers to the rescue. Because containers create an opportunity for us to redefine that middle layer in the stack. We rely on infrastructure underneath, and then we have an orchestration and container fabric on top of that which enables the application. And what that really does is build on two ideas. One is, we take that virtual machine that we looked at, with all its components, and we cut it into two pieces. There's a piece that the ops side owns, and there's a piece that we're happy to have the apps side own. So these two sides can now make their decisions autonomously, independent of each other. Now, that sounds great, but there are a couple more things. What is a container? A container is basically a fancy file, the container image, that encapsulates the application with all its dependencies. And that can be instantiated, and then it becomes nothing but a fancy process. It's a process that lives in namespaces and is therefore isolated from other processes on the same system, and it can also carry attributes like network ports or IP addresses. Now, that is really good, because we are all very comfortable using processes, aren't we? We'd better be, because there really is no alternative. So processes are actually very efficient on a system, and we have decades of history of sharing resources across operating system instances, and across processes on the same instance, because that's how a multi-user, multitasking OS works. So we have a lot of experience. We can make that work. We can manage the sharing. We can manage tenancy very well. So containers seem to be amazing for that. Now, what containers really enable in the enterprise is three benefits. The first one is, they allow us to specialize operations along these lines of abstraction. We can have this infrastructure box that you saw at the bottom. We can make changes there.
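The "nothing but a fancy process" point is easy to check for yourself. Every ordinary Linux process already belongs to a set of kernel namespaces; a container runtime simply starts the process inside fresh ones. A minimal sketch, assuming a Linux host with /proc mounted (this is an illustration, not something shown in the talk):

```python
import os

# A running container is just a process placed into its own namespaces.
# Even this ordinary Python process belongs to one namespace of each kind:
# the symlink targets under /proc/<pid>/ns/* identify them.
namespaces = {
    ns: os.readlink("/proc/self/ns/%s" % ns)
    for ns in ("pid", "net", "mnt", "uts", "ipc")
}

for name, ident in sorted(namespaces.items()):
    print(name, ident)   # e.g. "pid pid:[4026531836]"
```

Two processes in the same container share these identifiers; a process in a different container gets different ones, which is exactly the isolation boundary described above.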
We can go from one OpenStack version to another without affecting anything up top. But we can now also separate the fabric inside the application that carries our runtimes, the security fixes, the application code, all these things, and break that up: somebody can manage that cluster, and then we can enable teams to operate the application itself. And this is how some of the most advanced, large-scale organizations work. Like, for example, Google, who has been one of the driving forces behind containerization for many years; they have an operational structure that looks very similar to that. With containers, we get a high degree of standardization, but not standardization of the content: standardization of the methodology for how we make changes to a system. And that standardization can be automated very well, and that makes us better, delivers more quality, more speed, and gives us a lot more efficiency. Now, how does it really work? In order to get to this standardization, we need, of course, a consistent way of doing things. And containers have been around for a very long time. I don't have the time here to really give you all the details about them; there were other good talks here, and I'll give you some other suggestions at the end. But we need a set of open standards around containerization in order to make containers really effective. It starts with a format. As an industry, we've basically settled on Docker being that format, because it's really amazing: it allows us to package an application as a whole and still retain some sharing, because it has a built-in layering concept. That's really cool. That's why we loved it. We also embraced it at Red Hat. But then you quickly get to the next big problem, which is that a real application is not a single container. A real application is composed of multiple services, or microservices.
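To make the packaging and layering idea concrete, here is a hedged sketch of a Docker-format build file; the base image tag, file names, and application are hypothetical, chosen only to illustrate the mechanism:

```dockerfile
# Hypothetical example of the "fancy file" idea: each instruction
# produces a layer, and identical layers are shared between images,
# which is the built-in layering the Docker format provides.
FROM python:3.5                            # base layer: userland plus language runtime
COPY requirements.txt /app/                # dependency manifest
RUN pip install -r /app/requirements.txt   # runtime-dependency layer
COPY app.py /app/                          # application code: the layer that changes most often
CMD ["python", "/app/app.py"]
```

Rebuilding after a code change only rebuilds from the changed layer downward; the base and dependency layers are reused, which is the sharing the talk refers to.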
And now, if you want to manage that application, you have to get them to talk to each other. You have to instantiate them. You have to manage them as a whole. And that's what the orchestration engine does. The orchestration engine really enables you to take a bunch of container images and instantiate them as an application, and manage the attributes of that application: for performance, for change management, et cetera. So the orchestration engine very much is the brain of the operation in a containerized world, while the containers are the immutable software artifacts that you don't change while they're running; you change them as you instantiate, by replacing one image with another. We also look at open standards to drive the interaction with the underlying infrastructure. That's really where OpenStack comes in. And we've created, as an open source community, a number of vehicles to do this. We launched the Open Container Initiative together with Microsoft, Docker, and lots of other companies. We also launched the Cloud Native Computing Foundation to drive the standards and best practices. And of course, we have the OpenStack Foundation. So think of these as the bodies that drive a set of standards around isolation, format, orchestration, and increasingly also distribution: how do we distribute container images so we can consume them from an ISV or from a service provider, bring them into the enterprise, apply whatever policies to them, and then bring them to the production systems where they run, and likewise take our code and bring it to the public cloud. All this should rely on open standards. The orchestration engine, as the brain of the operation, is of course super important. And the most popular one, according to the most recent OpenStack survey, is Kubernetes. And there are reasons for that. Kubernetes is an amazing piece of technology. We've invested heavily in it at Red Hat since Google open sourced it as an open project.
And Kubernetes really is an amazing piece of technology for two reasons. A, it has a very rich feature set that combines managing a cluster of nodes, basically turning a bunch of Linux nodes into a compute cluster on which you can run containers, with the ability to define your application as the sum of its pieces. And what that means is, it doesn't only handle the multi-container use case; it also instantiates the containers correctly, hooking them into software-defined networks and attaching storage resources to them as you like. So you have a way to run stateful applications. You have a way to interact with the underlying infrastructure. You have a way to share data and information across these microservices or applications. So Kubernetes really is the cluster manager. It is the state manager for your services and containers. It has built-in availability for your services. You specify: my application is composed of this image that I want that many times, this image that I want that many times, and this sort of connectivity in between. And there is a lot of complexity in doing this, or there can be, so there's a very rich feature set, but then it manages that state. It has the concept of a replication controller, which watches your pods, the pod being the logical unit in which your container instances are running. And those are ephemeral; they can be on any host, but if something bad happens, Kubernetes will act on it and re-instantiate these containers so you have the right characteristics. That's all amazing. In the real world, we can go one step further. Like, for example, in OpenShift, which used to be a platform as a service and really now is more of a general-purpose container management platform, we build around the open standards I talked about, and Kubernetes specifically. Docker-formatted containers, so we can take any code in there.
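The "this image, that many times" declaration maps directly onto a replication controller manifest. A minimal, hypothetical sketch of what that looks like in the Kubernetes of this era (the names, image reference, and port are made up for illustration):

```yaml
# Hypothetical manifest: declare desired state, and Kubernetes keeps
# converging the cluster toward it, re-instantiating pods that die.
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
spec:
  replicas: 3                  # "this image, that many times"
  selector:
    app: frontend              # the controller watches pods carrying this label
  template:                    # pod template: the logical unit of scheduling
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: registry.example.com/shop/frontend:v1   # immutable image
        ports:
        - containerPort: 8080
```

If a node fails, the controller notices the pod count has dropped below three and schedules replacements on other hosts; rolling out a new version means swapping the image reference and re-instantiating, not patching a running container.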
Kubernetes is the brain of the operation that runs and manages the cluster and its state, and we've combined this with technologies like autoscaling, so the platform can automatically detect that there's more load, that it needs more capacity, and go get more capacity. But also other things like deployment automation, which is very important. In a containerized world, whenever a change happens around us, we rebuild the container image and re-instantiate it. Now, what could change? If you look at that slide, we could want to make a manual change, like a tunable that we set. Typically, we look at code changes as the number one driver, features coming from the application developer, but we also have to make config changes to adjust to the environment, and then we will face image changes. Image changes means the image we build on, the one that gives us the core system runtimes, might change. Why is it going to change? Security, being the number one driver. This slide shows you, and it's very hard to read, I acknowledge that, a little summary for a Hello World application in containers. Hello World is not particularly complicated, but just in one year, how many security fixes did Red Hat release for the components you would need to run Hello World in various languages? You probably cannot see these numbers very well, but take my word for it: on average, it's a fix for Hello World every other week, or even more frequently than that. You see Java standing out with 66 changes in just one year. So your containers have to change constantly, just to address the security of the underlying runtimes, plus whatever you change as you build on top. So the platform needs to automate for that. Now, if we have these technologies in place and we use containerization for all these benefits, we can redefine specialization in the organization and go back to those benefits I talked about. Agility, it's about wait time.
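The rebuild-on-any-change idea can be sketched by treating a container image as a pure function of its inputs: base image, application code, and config. This is a hedged illustration of the model, not a real build system; the function names and version strings are made up.

```python
# Sketch of "rebuild and re-instantiate on any change": the image tag is
# derived deterministically from every build input, so a change to any
# input (base runtime image, app code, config) means a new image must be
# built. Purely illustrative, not a real build pipeline.
import hashlib

def image_digest(base_image: str, code_version: str, config: dict) -> str:
    """Derive a deterministic tag from everything that feeds the build."""
    blob = f"{base_image}|{code_version}|{sorted(config.items())}"
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

def needs_rebuild(running_digest: str, base_image: str,
                  code_version: str, config: dict) -> bool:
    return running_digest != image_digest(base_image, code_version, config)

running = image_digest("rhel:7.3", "app-1.0", {"workers": 4})

# A security fix lands in the base runtime image: rebuild is triggered,
# even though the application code itself did not change.
assert needs_rebuild(running, "rhel:7.4", "app-1.0", {"workers": 4})

# Nothing changed: no rebuild needed.
assert not needs_rebuild(running, "rhel:7.3", "app-1.0", {"workers": 4})
```

This is why the platform has to automate the rebuild: with a base-runtime security fix landing every other week on average, nobody wants to trigger these builds by hand.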
If we can redefine specialization, we eliminate that wait time, and what we start to see emerging is a stack in which different concerns of the total solution we're running are owned by different people. I'm not going to read it to you, but it's still the triple of infrastructure at the bottom, the application platform in the middle, that's the container engine, and then our application code running inside containers. And that is a best practice for how you bring containers and OpenStack together. You run containers on top of OpenStack, you use OpenStack as the underlying infrastructure, you consume services from OpenStack, but you have the opportunity to define autonomy inside the organization. And you can leverage the built-in intelligence, performance management, and resilience capabilities across the entire stack, and at every layer you have your choices as to how you want to handle things, and that enables even more autonomy. Now, to make it practical, this is an overview of how you would run on top of OpenStack. And it's basically relatively straightforward. You use the core OpenStack services like Nova, Cinder, Neutron, Heat, et cetera, and you use them to instantiate a cluster of virtual machine or bare-metal hosts forming your container cluster. And that creates an amazing opportunity, because now you can run services that live in containers and are driven by that operational model around immutable artifacts orchestrated by Kubernetes or another engine, and at the same time you have virtual machines, the things you do today, running in the same environment, and they happily live together. And you can draw your tenant boundaries around this. One example: provisioning. You need more capacity for your containerized applications. There are many ways to do that. There's the Magnum project. You could do it with just a Heat template.
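To make the provisioning example concrete, here is a sketch in Python of what such a Heat-driven request boils down to: a scaling group of Nova servers whose count you bump when you want more container hosts. The structure loosely mirrors a HOT template with an `OS::Heat::ResourceGroup`, but it is simplified for illustration, and the flavor and image names are made up.

```python
# Illustrative sketch: asking OpenStack for "N more container hosts" can be
# as simple as updating the count on a resource group in a Heat template.
# This builds a HOT-like structure as plain data; the schema is simplified
# and "m1.large" / "container-host" are placeholder names.

def container_cluster_template(node_count: int, flavor: str = "m1.large",
                               image: str = "container-host") -> dict:
    return {
        "heat_template_version": "2016-04-08",
        "resources": {
            "container_nodes": {
                "type": "OS::Heat::ResourceGroup",
                "properties": {
                    "count": node_count,  # scale out = raise this number
                    "resource_def": {
                        "type": "OS::Nova::Server",
                        "properties": {"flavor": flavor, "image": image},
                    },
                },
            }
        },
    }

stack = container_cluster_template(node_count=3)
# Need more capacity? Issue a stack update with a larger count.
bigger = container_cluster_template(node_count=5)
assert bigger["resources"]["container_nodes"]["properties"]["count"] == 5
```

Whether the update is driven by a person, by Magnum, or by the container platform's autoscaler, the contract with OpenStack is the same: declare the cluster size you want and let Heat converge the infrastructure.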
There's a lot of experimentation going on right now in the community, but the bottom line is, it comes down to: we tell OpenStack, give me more container hosts, or give me a net-new cluster. So we drive that; we have the technology to do this. Same with networking. We can use a software-defined networking layer inside OpenStack and then again, separately, inside OpenShift, but that seems like overlap and redundancy, and it is. So one thing we can do is define an SDN per OpenStack tenant, or whatever context we want to use, and then hook a Kubernetes container cluster into that. So there's a lot of flexibility in how we handle software-defined networking. The easiest option is to just rely on the networking we get out of OpenStack inside the container fabric, but we can tighten it up more. Same with storage. We can consume Cinder-based storage very easily from a container fabric. Just tell OpenStack, this application wants five gigs of storage; okay, here's your five gigs of storage. Done. And then it's the container fabric that makes it available to these containers, by mounting it into the container. We can do other interesting things. We can have distributed storage services inside that container fabric. That's something we start to see, using maybe a cluster file system like Gluster, or Cassandra databases that you instantiate in as many container instances as you want. So you have them right next to your application. You can minimize latency that way, and you still manage them together as one application, one definition, one change stream. The last one would be management. Of course the opportunity is to manage this all together, because for your application, for your performance, for your availability, it doesn't really matter to the application whether you use containers or virtual machines or OpenStack underneath. But if something goes wrong, you need to find out where it is.
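The storage handoff just described can be sketched as a toy sequence: the container fabric asks the infrastructure for a volume of the requested size, then makes it available by mounting it into the container. Both classes below are simplified stand-ins, not the real Cinder or Kubernetes volume APIs.

```python
# Toy sketch of "this application wants five gigs of storage":
# FakeCinder stands in for OpenStack block storage, ContainerFabric for
# the container platform. Neither is a real API.
from itertools import count

class FakeCinder:
    """Hands out block volumes by size, like Cinder would."""
    def __init__(self):
        self._ids = count(1)
        self.volumes = {}

    def create_volume(self, size_gb: int) -> str:
        vol_id = f"vol-{next(self._ids)}"
        self.volumes[vol_id] = {"size_gb": size_gb, "attached_to": None}
        return vol_id

class ContainerFabric:
    """Requests storage from the infrastructure and mounts it into containers."""
    def __init__(self, storage: FakeCinder):
        self.storage = storage
        self.mounts = {}

    def provide_storage(self, container: str, size_gb: int, path: str) -> str:
        # The fabric asks the infrastructure for storage on the app's behalf...
        vol_id = self.storage.create_volume(size_gb)
        # ...then makes it available by mounting it into the container.
        self.storage.volumes[vol_id]["attached_to"] = container
        self.mounts[container] = {"volume": vol_id, "path": path}
        return vol_id

cinder = FakeCinder()
fabric = ContainerFabric(cinder)
vol = fabric.provide_storage("my-app", size_gb=5, path="/data")
assert cinder.volumes[vol]["size_gb"] == 5
assert fabric.mounts["my-app"]["path"] == "/data"
```

The division of labor is the point: OpenStack owns the volume, the container fabric owns the mount, and the application never talks to the infrastructure directly.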
And that's why we believe we need tools to manage both the container fabric and the underlying infrastructure fabric with OpenStack. And we happen to have them. There's the ManageIQ project, which we productize as CloudForms, and we have technologies like Ansible and many more that happily work with both layers, give you a single pane of glass, and drive it all together. Here's an example of CloudForms container management, which shows you hosts, applications, containers, state, and resource management, and from that same dashboard, one click away, is the dashboard for the OpenStack views, so you get a holistic view of what's going on in such a cloud. Now, we put this all together product-wise so you can really run and develop any application in there. We can take existing applications there, we can develop container-based cloud-native applications, and we can manage VM-based services, all in the same cloud infrastructure. And I can tell you, this is amazing. This is really a big leap forward that moves the technology toward what the business needs to deliver these benefits around agility, customer satisfaction, and confidence. So with that, I'm at the end here. I still want to leave a couple of minutes for questions, but if you want to learn more about all this, there's obviously a lot you could do. I'll leave you to read the slide. In two months we have the Red Hat Summit in San Francisco. Come join us there; we'll talk a lot more about how OpenStack and containers and management and all these things come together in a practical way. With this, I want to thank you for your attention, and I'm happy to take any questions. If you have questions, please step up to the microphones in the middle so we can all hear them. Any questions? Go ahead. Good morning, Lars. Scott Fulton, with Job Jackson, at The New Stack.
I have seen the three-layer model that you showed on the slide modeled more like a donut, or for those in the room who are allergic to sugar, more like a bagel or hula hoop, in which the three layers are actually coordinated with one another, doing this communication thing, with the arrows pointing to one another, and you have what some people call a virtuous cycle. And they say this is necessary for SDN, because in SDN you're enabling the application to define the network, enabling layer three to say what layer two is. How does that type of virtuous cycle happen if we are to maintain the type of ownership of those layers that you say is necessary in order to make container orchestration work? So the short answer would be, part one, there are many ways you can apply SDN to such a stack or such an infrastructure, and that flexibility is important, so we want it. But the simple answer is: if you want to retain the operational specialization and the ownership, you will define a provider and consumer relationship, and that's what I described, because it's simple, it's easy to understand, and if it operates in a way that meets the needs of everybody involved, then it's actually a really good best practice. In what I described here, the provider would be OpenStack, and the consumer would be the container fabric, around, let's say, OpenShift or Kubernetes-based clusters. You can make things more complicated if you want, but right now we would not recommend doing that unless you have very special needs. Another way you can think about this, and I showed the example: you can absolutely have multiple container clusters. Don't think, I need one big cluster with thousands of container hosts and that's where I deploy all applications. That is really not necessary.
You can carve out separate environments, define elasticity within these environments, and then use, for example, network separation or segregation between these environments. That reduces your complexity while still giving you most of the agility and flexibility you want in the infrastructure, and we have the tools to manage that, OpenStack being one of them. OpenStack can carry as many such environment tenants as you want, and also from a management backplane point of view, we can bring it all together. So that would be a best practice. Thanks. Any other questions? All right, cool then. Thanks very much for your time and your attention.