All right, everyone, we're going to get started. Today we have AT&T speaking. Before we begin, a quick reminder: there is an OpenContrail user group tonight at 6:30 PM at Fenway Park, followed by a dinner event, so please try to come out. You'll see postcards in the back of the room; just RSVP at the link on them. For now, let's give a hand to AT&T.

Good evening, everyone. Today we're going to talk about telco edge use cases and container networking. This is Kandan Kathirvel from AT&T, and my name is Qasim; I'm the AT&T account SE from Juniper Networks. OpenContrail is the topic today, and we will cover it together. I'm going to talk about edge use cases and container networking. Some of this material I already covered in a couple of presentations yesterday and today, but here I'm going to focus on the networking aspect: what do we expect a software-defined networking solution to satisfy for these use cases?

This is a trend chart showing something everybody is interested in right now: how much will VMs and how much will containers be used by virtual network functions? That's the million-dollar question everybody is asking, and we'd like to share our view of where the industry is going with respect to VMs and containers. It matters because the networking needs to work seamlessly whether the workload is a VM or a container, so this chart shows how the trend is evolving and where container and VM virtual networking need to work together.

In the chart, you can see that enterprise workloads, the top portion of the slide, can already start using containers today. Most enterprise workloads, such as application servers and web servers, can already move toward containers, and VM utilization for enterprise workloads keeps going down. Moving into 2018 and 2019, containers become a very big bubble, and by 2019 there will be very few VMs and many more containers.

Virtual network functions are the area where a lot of additional work is needed before containers can take over the functionality that VNFs serve today. To give you an example of a virtual network function: a firewall, a load balancer, or a telco application providing routing functionality. For these kinds of VNFs, only a very small number run as containers today; container adoption is just picking up, and most VNFs are virtual machines as of now. Looking at the 2018 and 2019 trend, though, this is definitely going to change: containers will keep coming up, and by 2019 containers really become big while VMs shrink to a small number. But the industry has a lot of work to do to make containers ready for virtual network functions.
Why is there such a huge difference between enterprise workloads and virtual network functions? Why is it taking this much time; why can't everything be a container? The reason is the nature of the workload running as a virtual network function: its security aspects, its performance aspects, and also everything around patching, upgrading, and the lifecycle of the workload itself. The whole ecosystem has to be considered. It's not just the container or the VM; the tooling around them also has to grow up. That's why we see a very slow trend toward containers for virtual network functions, versus enterprise workloads, which are already picking up a lot. OpenStack services, or any other control-plane services, can adopt containers easily, and there is real benefit in moving them into the container world. We already see that trend today: OpenStack services, and even the Contrail services, especially the Contrail controller, can be run as containers.

This slide shows the use cases. These are not telco-specific; they are generic use cases that apply to anyone using OpenStack, Kubernetes, and containers. Use case number one is supporting a Kubernetes cluster inside an OpenStack cloud. This is well supported today, and some public providers already offer it: take a Kubernetes cluster and run it as a service in OpenStack. This is a tenant use case; by tenant use case I mean an application use case rather than an infrastructure use case. The second one is the reverse: Kubernetes hosting OpenStack. This is the use case I was just talking about, where a lot of focus is currently going from the infrastructure side: taking the control-plane elements, things like the Contrail controller, the OpenStack controllers, or log-collection controllers, and running them as containers.

The third use case is where more work needs to be done: supporting both containers and VMs under the same umbrella, which is very critical. Today, Magnum is one good project providing that one umbrella. What does "one umbrella" really mean? Say I'm a user and I want to create a container, and I also want to create a VM. In most situations the application is already in the cloud, running as a VM, and now I want to create some containers, or transition part of my VM footprint into containers. I do not want to create another tenant or another project in OpenStack just to create the containers separately, because that creates a lot of operational issues: the logistics of maintaining different workloads. As an end user, it doesn't matter to me whether it is a VM or a container; all I need is the workload. So it has to sit under the umbrella of the same user, and all the authentication mechanisms, all the analytics and log collection, all of it has to be seamless whether it is a VM or a container. This is where the ecosystem really has to evolve, and on the next slide I'll talk about the networking aspect of it, which also has to be very seamless.
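As a minimal illustrative sketch of that "one umbrella" idea at the API level (not something shown in the session): with openstacksdk, one authenticated connection under one Keystone project can request both a VM through Nova and a Kubernetes cluster through Magnum. The cloud name and all IDs below are placeholders.

```python
# Hedged sketch: one tenant, one set of credentials, both workload types.
# The cloud name and all UUIDs are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # same Keystone project throughout

# A classic VM workload for this tenant.
server = conn.compute.create_server(
    name="legacy-vnf-vm",
    image_id="<image-uuid>",
    flavor_id="<flavor-uuid>",
    networks=[{"uuid": "<tenant-net-uuid>"}],
)

# A Kubernetes cluster via Magnum, under the *same* tenant and auth,
# so quota, authentication, and logging stay in one place.
cluster = conn.container_infrastructure_management.create_cluster(
    name="tenant-k8s",
    cluster_template_id="<k8s-template-uuid>",
    node_count=3,
)
print("VM:", server.id, "cluster:", cluster.id)
```

Whether Magnum is the right long-term umbrella is exactly the open question raised here; the sketch only shows that the tenancy and authentication model can be shared across both workload types.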
Today in the field, people are addressing container networking, and VM networking has already largely been addressed, but much less work is going on at the combination: connecting VMs and containers together. That really needs to happen, so we have a seamless way of providing networking between containers and VMs.

Let me explain the networking scenarios. The reason I say "OpenStack network use cases" is that we still want the OpenStack API, Neutron, to be used. There may be other sets of APIs not supported by Neutron that go directly to the SDN controller itself, but the majority of the APIs should be handled by Neutron, so the tenancy model I explained for OpenStack is preserved. At the top of the diagram you can see Neutron and the CNI plugin for Kubernetes.

Start with use case number one, the Kubernetes cluster hosted by OpenStack from the previous slide: what does it look like from a networking perspective? The OpenStack cloud can handle containers inside a VM, which I think is what most public cloud providers support. This is very comfortable for a lot of people because they don't have to worry about the security aspects or the lifecycle tooling for the particular application; they just put the container inside a VM. That's one sub-case of use case number one. The other sub-case is installing containers directly onto bare metal. These two areas, for tenant use under an OpenStack cloud, are what I'd call use case number one.

In use case number two, OpenStack is hosted by Kubernetes: the control plane itself runs on Kubernetes. Then we need seamless networking between OpenStack running as containers and the VMs running in KVM. Why do we still need VMs if everything could run in containers? Because the transition from VM to container is not going to happen in a single day. It will take time, and large companies and providers take small baby steps when moving from one to another; there is no way to switch everything over to containers at once, so the transition has to happen in small increments.

The important thing to notice in all these cases is that the use cases keep multiplying. It is never just the VM or just the container; we need a seamless way of connecting them. In the first use case, containers running in VMs and containers running on bare metal need to talk to each other through the OpenStack cloud. In the second use case, a set of VMs runs as the control plane alongside a set of OpenStack services or other control-plane services running as containers, and we need a way of connecting those. The third use case is a tenant use case, and it's pretty similar to number two: number two is the infrastructure use case, number three is the tenant-side version.
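To make "seamless VM/container networking" concrete at the API level, here is a minimal openstacksdk sketch, under the assumption that Neutron is backed by an SDN controller such as OpenContrail, so the virtual network created here is the same virtual network a Kubernetes pod can be attached to via the CNI plugin. All names and IDs are placeholders.

```python
import openstack

conn = openstack.connect(cloud="mycloud")

# One virtual network, created through the ordinary Neutron API. With an
# SDN backend such as OpenContrail, the same virtual network can be
# presented to both Nova VMs and (via the CNI plugin) Kubernetes pods.
net = conn.network.create_network(name="shared-vm-container-net")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="shared-subnet",
    ip_version=4,
    cidr="10.10.10.0/24",
)

# A VM attached to that network. A pod placed on the same virtual network
# on the container side would be directly reachable over the overlay.
vm = conn.compute.create_server(
    name="vm-on-shared-net",
    image_id="<image-uuid>",
    flavor_id="<flavor-uuid>",
    networks=[{"uuid": net.id}],
)
```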
So even in this case, what we really need is a way of connecting the networking. The one common theme underneath all these use cases is that we need a seamless way of connecting networking between containers and VMs.

This is a slide I talked about yesterday, so I'm partly repeating the content. The reason for emphasizing the edge over and over is that a lot of community work needs to be done in both OpenStack and OpenContrail. The edge is evolving very fast, and there are independent solutions, but we need to make sure there is a community solution for this. The IoT space is really picking up, the AR/VR space is picking up, the virtualized mobile network, vRAN in support of 5G, is coming up, and virtualized wireline access, for example universal CPE, is also evolving a lot. All of these need something closer to the edge.

The definition of the edge varies from person to person. Some need it as close as the home; some need it at the cell tower. It could be anywhere between the customer premises, or the cell tower, and a large data center where many hosts are deployed. So now we are not talking about a scale of 25 locations; we are really talking about 100, or even 10,000, locations. SDN and cloud solutions are already complex and hard to manage across 25 locations, and now we're talking about 10,000 plus. If you don't have a common SDN scheme for managing that scale, there is no way to deploy an SDN-based solution across that many locations. That is why we need a seamless way of connecting the network.

My next slide shows what happens when a user requests the edge from the cloud. Take an AR application: a person is wearing glasses, the glasses connect to a cell phone, and over 5G or 4G some video processing needs to be done very quickly, which means the processing really needs to happen very close to the edge. In this case the edge could be a cell tower or a central office: we have connectivity from the cell phone to the cell tower, and from the cell tower to the central office. And here is the critical situation the industry needs to solve from a use-case perspective: we will not be able to put all the workload at the edge, because the edge is really going to be small. If you're trying to put something in a customer's home, I can't put 1,000 servers in your home, and I can't put 1,000 servers in a cell tower either, because cell towers are small-form-factor sites. How much hardware you can pack into a small form factor is a hard deployment constraint. That is why we need to think about an automatic scheduling mechanism; a toy sketch of the idea follows.
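Purely as a hypothetical illustration of this placement problem (nothing here comes from the session): a greedy scheduler could spread a latency-sensitive request across site tiers, spilling to the next tier whenever a closer site runs out of capacity. All site names, latencies, and capacities below are made up.

```python
# Hypothetical placement sketch, not an AT&T or OpenContrail component.
# Sites are ordered from lowest to highest latency; workloads spill over
# to the next tier when a closer site runs out of capacity.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float   # round-trip latency to the end user
    free_slots: int     # remaining VM/container capacity

def place(sites, instance_count, max_latency_ms):
    """Greedily place instance_count workloads on the lowest-latency
    sites that still meet the application's latency budget."""
    placement = {}
    for site in sorted(sites, key=lambda s: s.latency_ms):
        if site.latency_ms > max_latency_ms or instance_count == 0:
            break
        n = min(site.free_slots, instance_count)
        if n:
            placement[site.name] = n
            site.free_slots -= n
            instance_count -= n
    if instance_count:
        raise RuntimeError(f"{instance_count} instances could not be placed")
    return placement

sites = [
    Site("cell-tower", 2, 8),
    Site("central-office", 8, 24),
    Site("regional-dc", 25, 1000),
]
# A 40-instance request with a 30 ms budget: 8 land at the tower, 24 at
# the central office, and the remaining 8 spill into the data center.
print(place(sites, 40, 30))
```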
One piece of software could actually do this job: the end-user application requests a workload and says, here are the characteristics of the workload I need; go install this VM or container, process this particular image or video, and send the result back to me. The scheduler can then decide: I'm going to put this at the cell tower, because that's where the application is asking for it. But say the application is asking for 40 VMs. I can't put all 40 VMs in that particular cell tower, and these applications are very latency-sensitive, so some go to the cell tower, some go to a telco office, and some go to the data center. This is scheduling across multiple types of cloud at multiple locations. So we have to have an open, standard API, irrespective of the provider, because from the end-user perspective the application needs to be placeable anywhere.

The networking portion at the bottom is also a very critical piece, and that's what we want to talk more about: software-defined networking. We are no longer talking about a single data center or a single instance of cloud; we are talking about 10,000-plus locations spread all over the place. How do we connect them from a networking perspective? There are pre-SDN solutions such as MPLS, and many other ways to connect through physical network connectivity or common routing technologies. But if we want to apply SDN, it has to be seamless in connecting VMs and containers across these multiple data centers. That is one of the use cases that needs to be solved. And with that, I'm going to give the floor to Qasim. Thank you very much, Kandan.

So we have the next section, where we will go through how we address all of those challenges. Overall, the main challenge is how container networking and VM networking can come together. Let me just switch the slides; give me one minute. Okay, here it is. In this section, we will cover how OpenContrail networking addresses those challenges.

First of all, on the networking side, we are adopting OpenStack-Helm. You might have heard a lot of talk about how Kubernetes is coming into OpenStack and containerizing the whole control plane; we are taking that approach. OpenStack-Helm provides the whole of OpenStack as infrastructure: a full containerization of the OpenStack control plane, where we use OpenStack Kolla and Stackanetes containers, all combined together with Helm charts. This is a project that AT&T started, and it is now the OpenStack-Helm project. On the infrastructure side, we have MariaDB, RabbitMQ, and other supporting containers. And then Contrail networking comes into the picture, with the Contrail controller, Contrail analytics, the analytics DB, and the Contrail agent.
All of those come together to provide networking not only for containers but also for your existing VNFs and VMs. How those pieces fit together is what we'll talk about for the next 10 to 15 minutes, and we have put together a step-by-step process for anyone who would like to create this environment.

This is one of the main things we want to highlight. Everyone pretty much knows how Contrail works with OpenStack, but this slide captures how OpenContrail is fully integrated into Kubernetes. When the Kubernetes cluster is up, we communicate through the CNI plugin; OpenContrail fully supports CNI. Once the API request to create the pods is sent and the pods are instantiated on the respective compute nodes, node one and node two, the controller sends down the information about the tap interfaces: which interfaces are to be created, with which IP addresses. The CNI plugin, which is the binary that kicks in on the node, stitches the vRouter interface to the container pod interface. That is how the networking is created. Once those pods are there, you get to leverage all the OpenContrail features available today.

So let's walk through the step-by-step process, so we all understand how to achieve this; it is also part of the demo available at the booth. Overall, we are showing five compute nodes here, which are five bare-metal servers. Creating a Kubernetes cluster and installing Docker, Ubuntu, and Helm is a very straightforward process today, so we assume all of that environment is already set up. At this stage you have a kube-system namespace, and pods are created on the nodes as well as on the Kubernetes master.

The next step is to leverage the OpenContrail Helm charts. Those Helm charts go to the Docker registry, pull the containers for OpenContrail, and instantiate the respective pods, including the main Contrail controller pods, into kube-system; those are highlighted in green here. At this stage the full OpenContrail is up, but our next step is to use the same networking for our OpenStack containers as well.

Before that, there is one more step, which is the gateway function in OpenContrail. That gateway function provides the overlay-to-underlay connection. At this stage we configure our gateway with the respective route targets. We have the pod virtual network and the service virtual network, and those networks are created now because, when we instantiate OpenStack next, we will use the pod and service networks to give seamless connectivity to both containers and virtual machines.

Then, as part of the OpenStack-Helm charts, we go to the OpenStack-Helm repo, get all the information and the containers, and instantiate them across our OpenStack nodes as well as the compute nodes where the containers run. At this stage we use the Armada tool to instantiate those pods. Once your environment is up, the next section is about instantiating VMs and getting the same experience you have today.
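As a small illustration of the pod side of this flow, here is a sketch using the official Kubernetes Python client (the image, pod name, and namespace are arbitrary): once the API server schedules the pod, the CNI binary is invoked on the node to wire a vRouter tap interface to the pod, and the IP address you read back comes from the CNI-managed pod network, which with OpenContrail is the pod virtual network described above.

```python
# Sketch: create a pod and read back the IP the CNI plugin assigned.
# With OpenContrail as the CNI, this address comes from the pod virtual
# network managed by the Contrail controller. Names/image are arbitrary.
import time
from kubernetes import client, config

config.load_kube_config()          # or load_incluster_config() inside a pod
v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="cni-demo"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(name="web", image="nginx")]
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)

# Poll until the kubelet has run the CNI plugin and the pod has an address.
while True:
    status = v1.read_namespaced_pod(name="cni-demo", namespace="default").status
    if status.pod_ip:
        print("pod IP from the CNI-managed network:", status.pod_ip)
        break
    time.sleep(1)
```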
So the VNFs running in OpenContrail today can keep doing what they do: they can still run as VMs with the networking on that side, and if someone would like containers, they can also run pods there, and you get seamless connectivity between the two.

To summarize how this is achieved, this is what it looks like at a very high level. On one side we have a Kubernetes cluster, and on the other the OpenStack cluster. When the Kubernetes APIs are used, the requests come through the CNI plugin, but they land on the same OpenContrail controller. The same methodology is used from the OpenStack side: when the Neutron plugin comes in, it connects to the OpenContrail plugin, and that plugin can instantiate workloads for any type of edge; the edge can be on the telco side, or it can be an edge with IoT devices. From the controller's point of view, when a compute node is added, it can be part of a Kubernetes cluster, part of OpenStack, or part of both. Once the compute node is there and connectivity is available, the Contrail controller uses its standard control plane, the XMPP protocol, to program the vRouter forwarding plane, and for the forwarding data, MPLS-over-UDP overlay tunnels are established. So what we are showing here is that, using one single SDN controller, you can instantiate both types of workload, pods as well as VMs, at your edge. And beyond the telco edge, any remote or distributed sites can be extended from that point as well.

Building a little more on the vRouter and its edge capabilities, I'd like to highlight a couple of things. As you are aware, the vRouter has a full forwarding plane, with full layer-2 and layer-3 functionality supported. Another function available in the vRouter today is the gateway function, for when the vRouter is at the edge; that is what we are highlighting with the green line. If you have multiple interfaces in your x86 box, you can leverage that feature to assign and control IP address assignment from that side. What we are showing here is one compute node running a vRouter: it can have two interfaces under the control of the vRouter kernel module, and two interfaces that can be programmed and get IP addresses and all the functionality through the vRouter gateway function. The vRouter talks to the central data center site via XMPP, and MPLS-over-UDP tunnels can be established as well.

All of this infrastructure is already available through REST APIs, and Heat templates are available, so any customer can use their existing orchestration and OSS/BSS systems to instantiate workloads at the distributed sites as well as the central site. Edge site versus central site becomes just a matter of configuration and provisioning: pushing those APIs, programming it, and utilizing it. On top of that, I'd like to highlight the ONAP point of view: ONAP can similarly use those APIs and push this configuration; it can use Heat templates and standard OpenStack and Kubernetes APIs to establish end-to-end connectivity.

In the next one, we extend the same use case, but we show it from the point of view of 5G with the vRouter at the edge.
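As a hedged sketch of driving this with the standard APIs just mentioned: an orchestrator could push the same Heat stack to an edge site and a central site using openstacksdk. The cloud entries, template file, and the `site_name` parameter are hypothetical; a real OSS/BSS or ONAP flow would template these per site.

```python
# Sketch: push the same Heat template to an edge site and a central site.
# Cloud entries ("edge-site-0042", "central-dc") and edge.yaml are
# hypothetical; the point is that distributed sites are driven with the
# same standard OpenStack APIs as the central one.
import openstack

for cloud_name in ("edge-site-0042", "central-dc"):
    conn = openstack.connect(cloud=cloud_name)
    conn.create_stack(
        "vnf-stack",
        template_file="edge.yaml",   # an ordinary HOT template
        wait=True,                   # block until CREATE_COMPLETE
        # Extra keyword arguments are passed through as Heat parameters;
        # the template is assumed to declare a "site_name" parameter.
        site_name=cloud_name,
    )
```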
For 5G, if you have your virtual BBU, the virtual radio-side units, and a small-form-factor virtual EPC, you can put them in a central or distributed site and also push some functionality out to the edge. In a similar way, you can leverage the gateway functionality for your L2 and L3 connections to the towers, or for any other connectivity. At this stage there are certainly concerns around latency and jitter, but those can be addressed as part of the architecture and design, because when we are talking about virtual RAN, C-RAN, or 5G, the metrics and KPIs for the transport are pretty much already defined. Based on those KPIs, you can decide which sites should host aggregators and which should be distributed edge sites with a standalone compute node, or multiple compute nodes.

The last use case I'd like to quickly highlight is the IoT gateway, where the same concept applies. As you know, most IoT services already exist and are available in the public cloud: just go to Amazon or Google, launch your IoT platform, and start provisioning devices. But the message here, and the main purpose, is for customers who already have their own private cloud and want to leverage their own virtual private cloud: they can introduce OpenContrail at the edge, with full support for Kubernetes and VMs, and get multi-tenant support from the IoT-platform point of view. We are highlighting five different types of gateway. Those gateways can be software gateways, or they can come in a hardware form factor. The hardware gateways need IP connectivity, which can be provided through the vRouter gateway function, and then the tenant connectivity and the end-to-end connectivity to the IoT platform can be provided through the overlay. That is the whole message we wanted to deliver at this stage. Thank you, and we can take a few questions. If you want to ask a question, I'd appreciate it if you could go to the microphone.

Question: With OpenStack on Kubernetes, you're putting the Neutron drivers in a container. Is the OpenStack Neutron driver attaching to the NIC through the containerized NIC, or is it attaching to the infrastructure's NIC directly?

Answer: It is attaching to the infrastructure NIC; it goes through the infrastructure directly. Thank you.

Question: Hi, thanks. Using 5G as the example, what are you thinking in terms of high availability of processes? Is it just fast failover within one small server pool?

Answer: There are multiple aspects to 5G high availability and resilience. As you know, in mobility, the virtual MME pool is a concept used on the mobility side to address that. From a VNF point of view, someone can also leverage the OpenContrail scale-out model: if you have multiple compute nodes at the edge or at some of the sites and you want to instantiate multiple VNFs, you can use the scale-out model, so if one goes down, the others are still available, and then through orchestration and monitoring we can instantiate more workload based on the requirement. So those are two different aspects, and some applications support that natively.
Especially from the 5G point of view, the virtual MME is the most critical piece, and a virtual MME pool will definitely be leveraged on that side as well. All right, thank you all for joining the session. Thank you very much.