Hello everyone, thank you for joining us for this session. My name is Amit Tank, and I help lead Cloud Architecture and Strategy for the AT&T Entertainment Group. There is a lot of discussion happening about containers: how containers are going to fit into OpenStack, and how OpenStack is going to orchestrate containers. A lot of demos addressing this were shown in the keynote. So we would like to present our strategy, how we see this world. It is not just about AT&T; we also collaborate in the LCOO working group, where we hear feedback from other large telcos and other large users. We would like to share what we think about the role OpenStack plays with respect to containers, and how that is going to help large providers support containers while continuing to support the VMs they run today. Before we go into the technical aspects of containers and OpenStack, I would like to give you an overview of what the AT&T cloud deployment is about. It is called the AT&T Integrated Cloud, and it is deployed in more than 80 locations. Carrier-grade workloads run in it, because we are running NFV as well as other enterprise workloads. When we run a cloud in the telco world to support telco applications, the fact is that it involves more than just supporting a VM. There needs to be proper network connectivity, and not just L2 connectivity: it also needs an SDN solution to support the complex networking that telco applications require. The size of the AIC nodes ranges widely, from small deployments, to medium deployments, to very large deployments with hundreds of compute nodes. So when we talk about scale, when we talk about supporting a VM or supporting a container, we have to think about large scale.
Most of the time, when developers code a specific configuration to support a container or a VM, they sometimes forget about large scale. So when we talk about our strategy for containers and OpenStack, we like to think about how we support that large scale, those large deployments. We are going to touch on those topics today: how we support large scale with respect to containers and OpenStack. The AT&T Integrated Cloud is also really about supporting highly available applications. When we support a telco application, for example a 911 call being placed while the application is hosted in the telco cloud, in the AT&T Integrated Cloud, the fact is that it has to be highly available. This is not a development lab where you run a VNF in a container; that does not fit a large, production-ready cloud. So we have to look at both aspects, containers as well as VMs. Here I want to show you the trend. There has been a lot of discussion, and also a lot of myth, about how the container world is emerging. So I am going to show you the trend we are currently seeing for containers and VMs. Again, this is the telco view of the world; it may be different for other people, depending on what workloads they run. You can see that today, if you take the enterprise workloads in the top part of the slide, there is a set of VMs, and the bubble for containers is already large: a lot of web applications and enterprise applications can easily be made to work as containers. In the 2018 and 2019 time frame, based on what we hear from our internal teams and our VNF teams, we see that container bubble getting bigger and bigger for enterprise workloads.
Especially when we get to 2019 and beyond, the container world is really going to get very big, and the usage of VMs is going to become small. It is a little different when we talk about virtual network functions, because a virtual network function, as I stated, is not a simple VM. It has to maintain state, it has to have complex network configuration, and it has to have sophisticated storage connectivity. High availability also plays a large role in supporting a virtual network function. So VNFs are not yet ready for the container world. For virtual network functions today, the VM bubble is really large and the container bubble is really, really small. In the 2018 to 2019 trend we feel it is going to get a little bigger, but there is a lot of work the industry has to do with respect to virtual network functions. And when we say industry, it is not only the vendors who supply the network functions; the open source communities, like OpenStack, also have certain requirements to satisfy in order to make that VNF bubble bigger, as I show for 2019. So that is the trend for the workloads themselves. But when we look at the OpenStack services, can we run the OpenStack services as containers? Absolutely yes. We are looking forward to running the OpenStack services in containers, and we are going to talk in a later slide about why we need to do that, and what benefit we gain from running the OpenStack services as containers. When we talk about containers, especially for the telco applications in the top and middle layers of the previous slide, there are expectations for running things as containers.
When we want to move from one technology to another, there needs to be some business benefit, some gain. In this case, there are five key things we expect out of the container world. We can have microservices, and we need horizontal scaling: when we try to create more applications very quickly, we really need that horizontal scaling. We also need distributed systems. The fourth one is a very important key: a better CI/CD process. When I install OpenStack, and when I install applications on it, being able to quickly roll releases out into production is very important at a very large scale of deployment. So CI/CD matters a great deal. And then modular scaling and better resiliency: I talked about this concept of being highly reliable, and it is very important that modular scaling and better resiliency also be taken care of. So whenever we talk about containers, whenever we say that something needs to be developed (there is a lot of discussion happening about Magnum, and about how OpenStack has to support containers), we have to think about what benefits and use cases we are trying to achieve. If we do not consider the use cases, we will develop a partial solution that fits no one. We need to ensure these are the benefits we target when we talk about a specific solution in open source, or even in OpenStack. The next slide is a very, very simple one. We do not go into the details of how these things are going to be integrated, but at the use-case level I would like to show how these two worlds of containers and OpenStack are coming together.
Because this is the million-dollar question everybody is asking. We see three different use cases for OpenStack orchestrating containers, or OpenStack running as containers. Number one: we have an OpenStack cloud, which is what everybody has in their deployments today, and we install a Kubernetes cluster on top of it using that OpenStack cloud. That is use case number one, and it is more of a tenant use case; it does not do anything to the infrastructure itself. The second one is OpenStack hosted on Kubernetes. This is a very interesting use case, and we are putting a lot of effort into making it happen as quickly as possible. We have a lot of people from AT&T here in the sessions, and we are pushing this concept of running OpenStack inside containers. That is an infrastructure use case. As a cloud guy, I like to have a very simple CI/CD process and a simple way of rolling things out; the benefits I talked about on the previous slide are what we would like to see addressed in number two. Number three, going back to the tenant use cases, is containers as an OpenStack service. We support VMs today, and supporting containers through OpenStack has a lot of benefits. When I provide a workload environment to a developer, the developer does not want to worry about a separate tenancy for containers, a separate tenancy for VMs, and managing all those accounts. They need a seamless way of authenticating to the cloud: they get a container if they need one, and they get a VM if they need one. Instead of building two siloed systems to manage containers versus VMs, we want to build on the large investment the industry, and the large telco providers, have already made in running clouds that support VMs.
So in that context, we are trying to support containers and VMs together within OpenStack itself. Magnum is one good project that lets containers and VMs work together in a single tenancy model. So we believe these three use cases are very important to consider for supporting OpenStack as well as containers. Thank you, Kanthan. Continuing to build on what we have learned so far: as an OpenStack adopter, you are likely very familiar with some of the challenges, and some of the benefits, you already get out of the infrastructure. But then you hear about this growth in the container ecosystem, and that makes you wonder: what exactly do you have to learn about it? Why exactly do you have to care? Well, there are two main areas of intersection where anybody who is either a container adopter or an OpenStack adopter will see the two worlds meet. First is the tenant-facing side, which is very well known. If you are running VMs on OpenStack, the very natural thing to do is to try out some of the container-based features and capabilities in the OpenStack ecosystem that let you adopt containers right away. That gives you a lot of benefit, because you can immediately put your applications on containers, taking advantage of OpenStack Magnum or OpenStack Murano, Mesos or Kuryr, and Heat configurations working with Kubernetes configurations to get your workloads orchestrated. A lesser-known use case is the infrastructure side. The OpenStack control plane that we heard about earlier has a very interesting use case as well. Nine out of ten OpenStack adopters do not think to ask a simple question: why is the OpenStack control plane not just another app? The moment you start asking why it is not just another app, that is when you get on to something.
An app has upgrade, downgrade, and lifecycle management capabilities that happen very smoothly, very seamlessly. When you treat something like an app, you start pursuing those things, and that is when you can derive the best value out of OpenStack: when OpenStack becomes very easy to manage, very easy to upgrade, downgrade, and patch. These are some of the projects that let you achieve that app-like behavior, that app-like design pattern, with OpenStack. OpenStack-Helm is a great example. Docker and Kubernetes, as you can see, are sitting right in the middle, because they are the enablers of some of these capabilities. Kolla also lets you do some very interesting deployments, and there are other industry tools as well that help you manage the lifecycle. You may ask why you have to care so much about the lifecycle. Well, here is why. The question that begs to be asked is: do OpenStack services really need full KVM virtualization? If OpenStack were a very specialized app, one that required its own kernel version or its own system libraries, then yes. But as you heard at the Summit today, OpenStack is just a collection of really nice, composable services, written in Python, very much user-space applications. Therefore you do not necessarily need some of the layers that VMs usually require. You do not need a full duplicate of the guest OS, and you do not need a separate kernel per service: Nova does not end up needing a different kernel version from, say, Cinder. That is why the design pattern on the right starts becoming more and more attractive. Now, the Docker engine, or rather any container engine in the line that you see there, is more of a visual depiction; in practice it is not in your data path.
None of the calls you make from containers actually go through the Docker engine or a container engine; they go straight down to the kernel. That is where you get very high performance. So when you run the OpenStack control plane services in a containerized form factor, you get performance, you get lifecycle management capabilities, and the services start having the behavior patterns, the design patterns, of any other app, which is where you may want to go. Finally, by a show of hands, how many people who came here to Boston took some kind of jet plane? Great. I would imagine most of us, coming from out of town. Now the same question: how many of you would have taken that same plane if there were no air traffic controllers at either airport? Not many; we are not that adventurous. That is where Kubernetes comes into play. A container by itself is as useful as a 747 without an air traffic controller, so you may want to think about orchestration on day one when you start looking at containerized workloads. Going back to the telco use cases Kanthan explained so nicely, and applying this design pattern of declarative computing: Kubernetes is a very good open source project for bringing that design pattern to your OpenStack infrastructure, and it brings a lot of benefit. You no longer have to worry about maintaining high availability for your Cinder services or your Nova services yourself, because a declarative framework already does that, and it brings that orchestration to the container array. Now, you see a phrase here: the container sandwich. I want to talk a little bit about our workloads in the Entertainment Group. We deal with a lot of media-specific workloads. We have to encode media, and we have to process media.
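As a toy sketch of the declarative pattern just described (plain Python, not the real Kubernetes API; the service names are made up for illustration), a reconcile loop compares the replica counts you declared against what is actually running and emits the corrective actions, with no operator in the loop:

```python
# Declarative reconciliation, in miniature: you declare intent, and a
# control loop repairs any drift (e.g. instances that died on a failed
# host) without anyone being paged.

def reconcile(desired, observed):
    """Return the actions that drive observed state to desired state.

    desired:  {service_name: replica_count} -- the declared intent
    observed: {service_name: replica_count} -- what is actually running
    """
    actions = []
    for service, want in desired.items():
        have = observed.get(service, 0)
        if have < want:
            actions.append(("start", service, want - have))
        elif have > want:
            actions.append(("stop", service, have - want))
    return actions

# Declared intent: 5,000 encoder workers should always be running.
desired = {"media-encoder": 5000, "metadata-api": 3}
# A failure took out every encoder instance.
observed = {"media-encoder": 0, "metadata-api": 3}

for action, service, count in reconcile(desired, observed):
    print(action, count, service)
```

Real orchestrators run this loop continuously against live cluster state; the point here is only the shape of the idea: intent in, corrective actions out.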
We have to run applications that deal with metadata and personalization. So we are actively exploring a design pattern we call the container sandwich: we run bare metal, we run a container on top of that bare metal, and we run a libvirt instance inside that container. A libvirt instance is nothing but the very familiar KVM VM you have been consuming with OpenStack for a long time. That libvirt-managed VM becomes an application inside the container, and once you have a VM running there, inside it you can run any kind of Java application, any kind of Python application. You are back on your very familiar level playing field, where you can run any and every kind of application. This layered sandwich, so to speak, gives you the benefit of security isolation, because now you cannot have a break-in or break-out of the containers; it is a jailed environment inside a container. And because you are running your containers directly on the bare metal, you still get the benefit of performance, and you get the benefit of orchestration of those containers. So if your VMs die, your policy gets them immediately spun back up without an operator getting a phone call saying, hey, 5,000 of my VMs just died, do something about it. Without manual intervention, the declarative framework automatically spins up 5,000 replacement containers, and bam, you have achieved your high availability, your resiliency. So that, briefly, is the container sandwich. Finally, quickly going through the various options you have available in the industry, I want to touch on a few of them. You have the virtualization layer, type 1 virtualization with KVM, a really stable product that OpenStack wraps around. Then you have different choices of OS. And finally, the container technology choices.
You have rkt (Rocket), which is a very good option that is evolving, with CNCF behind it. You have different options coming along at the orchestrator and scheduler/placement-engine level. And finally, the point we are driving toward is that looking at containers helps you improve your OpenStack experience. Depending on which conference you attend, you may see sensationalized news: hey, is this ecosystem going to win, or is that ecosystem going to win? Well, it is not quite black and white. If you take a more pragmatic view, taking the advancements in containerization and applying them to your OpenStack infrastructure and your OpenStack designs, you can actually achieve a very solid cloud infrastructure made purely out of the open source ecosystem. That is one of the messages we really wanted to deliver. Finally, the CNCF work happening on open standards is something AT&T is committed to. The Open Container Initiative and the open container image work we heard about today, such as the LOCI project, are very exciting. We believe container architecture should be open, so that anybody can consume it and innovate; it should not be the privilege of only a special group of companies or customers. That is why the open container image work is very important, and you will see AT&T supporting more and more of those initiatives by contributing our learnings as well as our resources. So Amit talked about open source, and we are committed to open source. When we talk about running OpenStack services as containers, we are really looking for these specific benefits and expectations. There is a lot of demo and lab-level work out there, but to make it production-ready, there are a few things that need to be taken care of from a community perspective.
These five items are very important, especially at large scale. One is less control plane overhead. What do I mean by that? When we package things as containers, we can eliminate the guest OS layer, and that gives us denser packing. We really want a very small OpenStack footprint managing whatever scale we need to support. The other aspect is modular scaling. We do not want a situation where a fixed-size OpenStack manages a fixed-size set of computes. We need OpenStack services to scale independently of the other components. For example, if more authentication needs to happen, we scale Keystone independently; if more computes need to be supported, Neutron and Nova scale independently. That is what we are calling modular scaling. This is very important, especially when we deploy a small data center and then grow it into a large one: we cannot go and wipe out the OpenStack that is already there; we have to take that OpenStack and scale it up. We also talked about better resiliency and large-scale support. When we deploy OpenStack in the telco world, as I said, resiliency is very, very important. We cannot have each service deployed as a single container; if it fails and we have no strategy to bring it back up, or no multiple containers behind a load balancer, then resiliency suffers. So we need a way of supporting better resiliency. Large-scale support is also very important. There are multiple efforts in the community to support up to 500 nodes, or 1,000 nodes, and many people have demonstrated that.
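The modular-scaling idea above can be sketched as a toy model in plain Python (the `ControlPlane` class and the service names are illustrative, not a real deployment API): each containerized service keeps its own replica count, so scaling Keystone for authentication load never touches Nova or Neutron, and growing a small site never means wiping it out:

```python
# Illustrative model: each OpenStack service runs as its own pool of
# containers, so one service scales without redeploying the others.

class ControlPlane:
    def __init__(self, replicas):
        # replicas: {service_name: container_count}
        self.replicas = dict(replicas)

    def scale(self, service, count):
        """Resize exactly one service; every other service is untouched."""
        before = dict(self.replicas)
        self.replicas[service] = count
        # Sanity check: no other service's replica count changed.
        assert all(v == before[k]
                   for k, v in self.replicas.items() if k != service)

# A small site grows: more auth traffic, more computes to manage.
cp = ControlPlane({"keystone": 2, "nova-api": 2, "neutron-server": 2})
cp.scale("keystone", 6)    # authentication load grew
cp.scale("nova-api", 4)    # more computes arrived
print(cp.replicas)         # neutron-server still has its original 2
```

The contrast is with a monolithic install, where changing any one dimension means redeploying the whole control plane.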
There are still key areas in OpenStack that need changes in design, and in code, so that it can support that large scale. The other aspect is hitless, in-place upgrades. This is also key for telco and other large-scale uses of OpenStack. Why do we need hitless, in-place upgrades? Again, this goes back to the availability of the VNFs: certain availability has to be guaranteed for the workloads running on the OpenStack cloud. Hitless and in-place means that when I upgrade an already-deployed version of OpenStack, I do not want to wipe out the existing instance; I need a way of laying the new version over the existing one. That eliminates the need to reboot the VMs where customers are actually serving traffic, or to completely change the structure of the APIs, anything that would impact the VMs themselves. Then there are better tools for deployment. This is one specific area that needs a lot of attention from the community. Why do we need better deployment tools? There are multiple tools in the community today to deploy OpenStack, and this is where we are focusing with respect to deploying OpenStack as containers: we are contributing to the OpenStack-Helm project, which I will talk about on the next slide. It is about making life easier when deploying OpenStack component by component. We are trying to package all these benefits and make deployment faster, so we need better tooling from the community that lets us deploy very quickly. This is easy when it is a lab instance; if I have one lab instance, I just go and install it.
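The hitless, in-place upgrade described above can be simulated as a toy in plain Python (the image tags are hypothetical; a real system would also health-check each replacement before moving on): replicas are swapped one at a time, so the service keeps serving at every step and nothing is wiped and reinstalled:

```python
# Toy rolling upgrade: replace one container at a time, so the other
# replicas keep serving and tenant VMs never see a reboot.

def rolling_upgrade(fleet, new_version):
    """Upgrade each replica in turn, yielding the fleet after every step."""
    fleet = list(fleet)              # work on a copy of the running fleet
    for i in range(len(fleet)):
        fleet[i] = new_version       # swap exactly one container in place
        yield list(fleet)            # the rest are still serving traffic

steps = list(rolling_upgrade(["nova-api:ocata"] * 3, "nova-api:pike"))
for step in steps:
    # At no point is the whole fleet down: every step still has all
    # three replicas, a mix of old and new until the last one flips.
    assert len(step) == 3
print(steps[-1])
```

This is the opposite of the wipe-and-reinstall pattern: the old version is never torn down wholesale, so there is no window where the API disappears.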
That is easy, but when we talk about 100-plus or 200-plus locations, it gets very hard if the tooling is not right. So: Kubernetes plus Helm plus OpenStack. This is the way we are looking at OpenStack being installed as Kubernetes containers and run as Kubernetes containers. We contributed this, and it is really a community effort now; we started it, but now it is purely a community effort, and we encourage everyone to participate in OpenStack-Helm. It is an OpenStack project that allows OpenStack to be installed as containers on Kubernetes. With the benefits we talked about, like easier tooling and the other aspects from the previous slide, we expect this new approach of installing OpenStack as containers to pay off for large users and for everyone involved with OpenStack. To summarize what we discussed: the AT&T Integrated Cloud is powered by OpenStack, and we are committed to OpenStack; we want to make sure people understand that. We are planning to run OpenStack services as containers, and that is the area we are focusing on. We are also planning to use Magnum. So we encourage the community to consider all the use cases we talked about when designing and coding these sorts of projects within OpenStack. And we can take a few questions; I think we still have some time. If anybody has questions, please go to the mic. I have a couple of questions. First, you spoke about the virtualization overhead of running containers inside virtual machines, and how you want to remove that performance overhead by introducing pure containers on bare metal. But then you subsequently ran libvirt inside of a container, going back to some amount of virtualization overhead.
So first, I would like you to clarify that, if you can: how does it work? There seems to be a contradiction there. The second question, again, is a two-part question. You showed enterprise workloads moving into containers faster than network function workloads. If you were to pick the top three barriers that make virtual network functions move more slowly into containers, what would those top three barriers be? And the related question: if there were three features you would most like to see in a container product to enable faster adoption of containers for virtual network functions, what would those three features be? OK, there are a lot of questions there, so I am going to go ahead with the first one. Sure. On the topic of performance overhead with virtualization: yes, because of the entire duplicated instruction set, the user space and the kernel that get packed into your VM, there is a single-digit performance overhead, if I may. The way we see it, that overhead is justified by the additional benefits that orchestration brings along in a typical OpenStack environment: you are still able to achieve resource isolation, and you are able to achieve security isolation. When you devote three cores out of, say, a 16-core machine to that KVM VM, that resource boundary is respected all the time. So resource and security isolation, plus the ability to orchestrate VMs and schedule them wherever and whenever you need to: those values, we believe, make that virtualization performance overhead worth it. Now, to the use case of running the containers directly on bare metal. You eliminated the performance overhead of that layer of the sandwich, but what you also gave up is your security context.
Even with a simple container, say Alpine, where the attack surface is very small and you do not have much library or system code running, you may be getting the best performance, closest to the metal, but you have given up isolation: instruction-set isolation as well as CPU resource isolation. So if one tenant or one app misbehaves, it is likely to impact the other app, because the CPU slices will not necessarily be left for the second app. By running libvirt on top of that, yes, you are adding a little bit of overhead back, but you make up for it by running close to the bare metal, and you bring back those added benefits of isolation as well as security. Here is the thing: when we have an architectural choice between giving up security and giving up a little bit of performance, it is no contest. You always give up a little bit of performance, because you are going to scale horizontally anyway, and you retain a very good security posture. That is how we look at it. Hopefully that answered your question; if you have further questions, we can talk after the session. Please go ahead. That was such a wonderful, all-encompassing first question that I almost sat back down. Hi, gentlemen. Scott Fulton from The New Stack. This time last year, at the OpenStack Summit in Austin, AT&T was investigating the architecture you have talked about here: the possibility of using Kubernetes as a way of floating OpenStack as a container. I am interested in what decisions led you to choose Kubernetes plus Helm plus OpenStack, the architecture you have described here, over the alternatives that were out there, other orchestrators like Swarm or Mesosphere, and what led you to put Helm in the middle of that sandwich. And, doing a multi-part question myself, I have been inspired by somebody.
What does OpenStack need, if anything, whether it is a condiment or a vegetable or something else in that sandwich, to help balance your architecture out? So, container technology is evolving. When we put things in the lab and test them, we look at several factors, and we are open to solutions. We are not saying this is the solution; I want to make that very clear. This is a solution that can be accommodated in the time frame we have been discussing. There are other possible solutions, and that is why we look to the experience we gain from the OpenStack and open source world. That is why it is OpenStack, not a closed-door solution where somebody makes a decision alone. That is why we believe in open source, and in OpenStack, to understand what the community would like to do. Whenever we come up with a project, we share what we think, we take the feedback, and we adapt to the new technology. To me it is really about open source, about the readiness of the product, and about what the community would like to do. That is why we submitted it as an OpenStack project: as we submit, we share what we think, then the open source and OpenStack communities provide feedback, and we adjust. Hopefully that answers your question. Quickly, to add a couple of things: why specifically Kubernetes? There were options, and there are other options, as you mentioned; Docker Swarm and some of those other ecosystems have achieved good growth and good prospects. However, we see a very clear trend in the mindshare Kubernetes is gaining, especially because it is an open source project, and because it works very well by doing a few things. It does not try to do too many things; it tries to do a few things, but it does those few things very well.
Declarative computing, and the ability to handle the scheduling of containers in a very simple manner without assuming a Docker container (you can use rkt just as well): I think that value proposition Kubernetes brings along made it a very compelling option for us, with Helm as a way to describe, in charts, the relationships that can then be run as OpenStack services. OK, I think we are running out of time, so we can take one more question. Thanks. You talked a lot about Kubernetes as the orchestrator for the containers, but you did not talk about OpenECOMP, now part of ONAP, which, as I understand it, is the current orchestration element in AIC. So what is the relationship there? Does ONAP have a role to play in orchestrating containers? We have a whole talk about ONAP tomorrow; I welcome you to come and attend that talk. ONAP sits one layer above what we talked about in this presentation: it provides the overall orchestration controlling multiple sites, and the way of organizing data across multiple locations. In tomorrow's talk we will go into more detail about how ONAP orchestrates AIC, and how it fits into the container world. Thank you. All right, thank you very much for joining. Thank you very much. Appreciate it.