Hi, folks. Good afternoon. Well, it's the second day of OpenStack Summit, and not yet the last session, so I'm hoping you're not completely summited out. My name is Tarek Khan. I'm with Hewlett Packard Enterprise. Today we're going to talk a little bit about containers, and more specifically containers within the NFV context, and look at the different options for using containers within NFV. I'm sure if I start talking about this slide, half of you are going to leave, because you must have seen it in every single container presentation: why people are using containers over VMs. But the point I wanted to call out is that there are some very good reasons why containers make a lot of sense for enterprise workloads, for cloud-native workloads and such, but telco workloads, or NFV workloads as we call them, are slightly different. The way they're different is that when you deploy them, you need a little bit more fine-grained control over how things are being deployed. If you've been coming to these sessions over the last couple of years, there has been a lot of discussion around the specific telco requirements for OpenStack, and a lot of those requirements are equally valid for containers as well. So you need a little bit more control, a little bit more knowledge of the underlying architecture of the infrastructure. Beyond that, if we talk about the mobile core, the applications that are deployed to run our cell phone networks, applications like the virtual Evolved Packet Core or the IP Multimedia Subsystem, the VM sizing they require is rather large, and typically when you deploy them, you assign full cores to them. So with the dense packing of containers we're normally able to do, which is what makes deploying applications as containers so attractive, there's perhaps only so much you're able to do.
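To make that packing point concrete, here is a small sketch of the arithmetic, assuming hypothetical core counts and overcommit ratios (none of these numbers come from the talk):

```python
# Hypothetical sketch: why dedicated-core VNFs limit packing density.
# Host sizes, core counts, and overcommit ratios are illustrative only.

def max_instances(host_cores: int, cores_per_instance: int, overcommit: float) -> int:
    """How many instances fit on one host given a CPU overcommit ratio."""
    return int(host_cores * overcommit) // cores_per_instance

# A cloud-native microservice happily shares CPUs (e.g. 4:1 overcommit)...
web = max_instances(host_cores=32, cores_per_instance=2, overcommit=4.0)

# ...but a vEPC-style VNF with pinned, dedicated cores cannot (1:1).
vnf = max_instances(host_cores=32, cores_per_instance=8, overcommit=1.0)

print(web, vnf)  # prints: 64 4
```

The point is not the specific numbers but the shape of the trade-off: with full-core assignment there is simply less packing headroom for the scheduler to exploit.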
The other thing is that, for the same reasons, the workloads within containers are slightly longer lived. You don't have VMs and containers coming up and down as much. For example, I think everyone will have heard the number that at Netflix, each workload is only live for around an hour, less than the time a movie lasts. And the last one is not going to be a surprise to anyone, especially if you're from the telco side or the VNF vendor side: there just aren't many VNFs available packaged as containers today. So we haven't been able to build up as much knowledge and experience, but it's definitely picking up now. I would say that in the last year, I have not spoken to any operator that isn't looking to start investigating the use of containers within their ecosystem. This, by the way, is our take on comparing containers and VMs. I'm sure some of these ratings could spark very passionate conversations; this is definitely subjective. But the point we're trying to make here is that containers and VMs are solving different problems. It's not either-or: for specific problems, especially for telco workloads, you're going to apply different things to solve those problems. Where the circle is full black, that means we feel that option is more appropriate for that row; where it's more white, it means something's lacking. So in certain things, one deployment model is much better than the other, and in other cases the other one has capabilities that aren't available elsewhere. Having said that, on to container platforms. I'm sure most of you sat in this morning's keynote where the OpenStack container ecosystem was discussed; this always has to happen. There are different ways of parsing it.
If you do a quick search comparing the container ecosystem, you'll find different ways of parsing the entire thing. This is one of them, from Wikibon in 2015, and I think you can already see some things have changed in it. But it's a rather complex ecosystem, just like the one OpenStack has been trying to address for VMs. It's not just "use Docker and all your problems are going to be solved." If you go to the next level, and again, this is not a comprehensive list of the technologies available at each layer, there are a lot of technologies out there for every single layer. What this slide doesn't show is that while each layer has different components, some of the popular players are developing stacks which address more than a single component. At times that's exactly what's needed, because that way you're able to get the platform up and running very quickly; in other cases, you want a little bit of flexibility to customize the different layers as you'd like. So if we take the stack we talked about earlier and try to bring it up against OpenStack, in this one I just assumed Docker, which is the most common container runtime. But instead of taking just the runtime, let's use Docker Datacenter, Docker's integrated stack, and compare and contrast it with the different OpenStack services to see how it looks. In these kinds of comparisons there are always some things you could argue don't quite align here or align better with something else, but at the broad level, this is how these things line up. And you'll see that in some places, Docker doesn't have an equivalent service.
So there are some things where, even if you use an integrated stack like Docker's with OpenStack, you'll want the OpenStack ecosystem to provide those services. Block storage, absolutely: Cinder provides block storage services, and Docker does not have anything there; you'd just use the backend storage directly, like Ceph. Object storage is another one: you're going to have to store your images somewhere, and either you use a file system or, since OpenStack provides object storage, why not use that? Again, if you look at identity, right now on the Docker side it's pretty much just "go and integrate with LDAP," which doesn't provide everything you're looking for; you can leverage something like Keystone. And for monitoring, and I know there are a number of vendors here as well with a lot of different solutions, perhaps you can look at Monasca to address it. Now, this is another slide that folks from the telco side will be very familiar with. We've talked about OpenStack equivalency, but what if you were to use containers and map them onto the ETSI architecture, the architecture most of the telcos in this area use to define the broader problem? Then you'll see that one thing that doesn't change between the two is the hardware. You use hardware which could absolutely be the same as for hosting anything else. What changes are some of the other components that are considered part of the NFVI, the NFV infrastructure. There, something like the hypervisor we use with VMs would be provided by a container runtime, in this case the Docker Engine. You'll have the actual containers, which in ETSI nomenclature are basically your virtual servers. And then you'll have networks.
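As a rough aide-mémoire, the alignment described above could be captured like this, with `None` marking the gaps where Docker has no equivalent. The pairings are my paraphrase of the talk, not an official mapping:

```python
# Rough alignment of Docker Datacenter pieces to OpenStack services, as
# sketched in the talk. None marks gaps Docker leaves to OpenStack.
SERVICE_MAP = {
    "compute":        ("Nova",     "Docker Engine / Swarm"),
    "image registry": ("Glance",   "Docker Trusted Registry"),
    "block storage":  ("Cinder",   None),  # use backend storage directly
    "object storage": ("Swift",    None),  # e.g. to back image storage
    "identity":       ("Keystone", "LDAP integration only"),
    "monitoring":     ("Monasca",  None),  # many third-party options
}

# The rows where Docker has no equivalent and OpenStack fills the gap:
gaps = [svc for svc, (_, docker) in SERVICE_MAP.items() if docker is None]
print(gaps)  # prints: ['block storage', 'object storage', 'monitoring']
```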
libnetwork is the one that comes with Docker, but of course you'll want to use something slightly better. There are certain use cases where OVS-DPDK is going to make sense, quite likely specifically for NFV, and other use cases where it may be different. Then we get into the other parts. If we're just talking about using Docker Datacenter by itself, there are some components that come along with it which you'd consider the virtualised infrastructure manager, the VIM, for containers. And if you want to put an SDN controller in, ETSI didn't call it out separately, but the SDN controller would be sitting in the VIM as well. These are things the users or consumers of containers are not going to worry too much about; they're only for the ops guys who are building these systems and have to care for and feed them. The people who are going to use this environment will worry about the interfaces: what are the APIs you use to access this? They essentially come down to the Vi-Vnfm interface and the Or-Vi interface. These are the ones the VNF managers and NFV orchestrators use, and Docker Swarm, if you use Docker Datacenter, is going to provide these interfaces, which means your upstream systems have to be able to speak this language. There are other interfaces that come into play as well, but they're not exposed to the users: the one between the virtualization layer and the hardware, and also the ones between the VIM and the virtual infrastructure. Again, users won't worry about those as much, but the VNF vendors quite likely will have to. So far, I've just covered whether containers make sense for NFV, and how the nomenclature and architectures the telco guys have been using within NFV map onto containers, what a container stack is going to look like.
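To illustrate what "the VIM provides the Vi-Vnfm interface" might look like in code, here is a hypothetical adapter sketch. The class and method names are mine for illustration, not from the ETSI specifications, and the Swarm backend is a toy that only records requests in memory:

```python
from abc import ABC, abstractmethod

class VimAdapter(ABC):
    """Hypothetical Vi-Vnfm-style interface a VNF manager could call.
    Method names are illustrative, not taken from the ETSI specs."""

    @abstractmethod
    def allocate_virtual_compute(self, name: str, vcpus: int, memory_mb: int) -> str:
        """Create a virtual server (VM or container) and return its id."""

class DockerSwarmVim(VimAdapter):
    """Toy container-backed VIM: 'allocates' containers in memory."""

    def __init__(self):
        self.instances = {}

    def allocate_virtual_compute(self, name, vcpus, memory_mb):
        instance_id = f"container-{len(self.instances)}"
        self.instances[instance_id] = (name, vcpus, memory_mb)
        return instance_id

vim = DockerSwarmVim()
print(vim.allocate_virtual_compute("vEPC-demo", vcpus=8, memory_mb=16384))
```

The value of an adapter like this is that the VNF manager upstream only ever sees `VimAdapter`, so a VM-backed implementation could be swapped in behind the same interface.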
Now, getting into a container platform: when an operator is building a platform, obviously you first have to come up with some requirements. And the first question is going to be: are the workloads I want to deploy coming as containers or as VMs? If someone says "I just want to support VMs" or "I just want to support containers", and by the way, I know you can deploy containers on VMs as well, we're going to touch on that, but if you're just going to do containers or VMs, one packaging style that you're going to support, then a lot of solutions are out there. A number of different folks are doing it, and almost every infrastructure vendor here has a reference architecture for it. So we're not going to talk about that as much, because it doesn't change too much whether you're running telco containers or regular containers. The other option is containers alongside VMs. And I'll broaden that a little: it could be containers on VMs or containers on bare metal as well, but when bare metal is managed as a tenant-facing service, i.e. bare metal using something like OpenStack Ironic, it's going to be the same thing. As you would have heard this morning, and there are a number of talks on this, one of the key topics this week has been how we run containers and virtual machines together within the OpenStack infrastructure. The community is working on it. Magnum has reached a level of maturity where you're able to start deploying it and start testing some VNFs, when the VNFs are available. And then there's this new project, Zun; I'm very keen on sitting in on it and finding out what that project is going to bring.
Kuryr, of course, makes a lot of sense in bringing not just the networking but other OpenStack services over to containers. So the community is working on it, and there are some solutions available today, with more coming out. But where I wanted to go was this: if we want to deploy containers and VMs, and what that really means is that we want to use a native container stack and a VM stack together, then what options are available? That is what we're going to look at in the rest of the slides. If we take this a little deeper, we have to come up with the requirements for this platform that will guide us in building it. The first one is straightforward: as an operator, as a customer who wants to run whatever workloads are coming, you want as much commonality as possible, commonality in terms of acquisition and management. You want a single hardware and management platform, a single way of getting things in and of managing the lifecycle of the components, what you could call the underlay of the containers. Then you want to support multi-cloud. There were excellent demonstrations at the keynote this morning where it was shown how, using OpenStack, you're able to deploy to Amazon or to GCE. Similarly, your platform will want to support multi-cloud, because you won't be able to control what your VNF vendors are going to provide. And then there are a lot of other things you want to share between the two; quite likely more than this, but at a minimum you want to come up with some of these things that you support. And then, if you recall this technology stack, you want to map whatever container stack you pick onto this technology stack.
In this one, we took the example of Docker Datacenter, but this is just as applicable for any other stack, or a stack made up of components from different places. I know this one uses Docker Swarm as the container management and orchestration system, but of course Kubernetes and Mesos are significantly more popular and have more adoption, so they play in it as well. The key thing is that you're going to pick a stack, and then there will be some deployment choices you make to build your entire environment. And then you want to layer this environment up. You start with the platform, the platform management components. In this one, the example I'm taking obviously uses some HPE components, but the approach is applicable anywhere. You start with common infrastructure and common management of that infrastructure. What we're suggesting here is to have some kind of mechanism so that from one place you're able to manage the underlay, or as we're calling it, the NFV control plane: you're able to deploy your OpenStack, your SDN controllers, and your container control plane as well. And by the way, for the OpenStack you deploy here, Magnum is obviously part of the Big Tent now and is included in a lot of distributions. So containers on VMs, be it on bare metal using Ironic or directly on VMs, you do through the OpenStack layer. But then you also provide a native container stack on the side that can take advantage of some of the newer technologies, because as we know, it's a very dynamic environment. Now, for the runtime architecture, I know a comment was made this morning about "one API to rule them all." And of course, that's the desire: we would love to have a single API where we're able to manage all kinds of workloads.
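The dual-API idea, where the orchestration layer routes each VNF to either the OpenStack API or the native container stack API based on how the vendor packaged it, could be sketched roughly like this. The endpoint URLs and descriptor fields are made up for illustration:

```python
# Hypothetical sketch of the dual-API approach: the NFV orchestrator
# inspects each VNF package and routes it to the appropriate API.
# Endpoint URLs and descriptor fields are invented for this example.

ENDPOINTS = {
    "vm":        "https://openstack.example.net:5000",  # OpenStack APIs
    "container": "https://swarm.example.net:2376",      # native stack API
}

def route_vnf(descriptor: dict) -> str:
    """Pick the API endpoint based on how the vendor packaged the VNF."""
    packaging = descriptor.get("packaging", "vm")  # most VNFs ship as VMs today
    if packaging not in ENDPOINTS:
        raise ValueError(f"unsupported packaging: {packaging}")
    return ENDPOINTS[packaging]

print(route_vnf({"name": "vIMS", "packaging": "container"}))
```

The design choice this sketches is exactly the talk's suggestion: rather than waiting for one unified API, keep the routing decision at the NFVO level, where the service definition already knows what each VNF needs.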
There's a great effort going on within the community to do that, but quite likely there will be cases where you may need to expose the native API of the stack as well. What we're suggesting here is that for near-term deployments, especially for NFV, where most deployments are about getting familiar with container technology and seeing how it can integrate into the environment, we expose both APIs: the native API of the container stack, whatever that stack exposes, be it Kubernetes, be it Swarm, and of course the OpenStack API. That way, at the NFVO level, the NFV orchestration level, you're able to create services, and those services can decide where the deployments are going to go. So, closing on such a stack: even though a single API is desired by everyone, it may be a little while before it becomes a reality. Until such time, we need to recognize that the velocity of development for both OpenStack and the container platforms is going at such a pace that we should plan for multiple APIs to be exposed upstream and build a platform that's able to support both containers and VMs in the same environment. With that, we have a couple of minutes, and if there are any questions, I'm happy to answer. And since this is a sponsored track and we're always asked to plug our products, I wanted to do that: we do have a solution called the NFV System, which is an integrated offering, hardware and software, including our distribution of OpenStack. Our vision is that a platform like this should be able to support containers in the near term. Thank you very much.