Good evening everyone. I'm Kandan Kathirvel from AT&T, and this is Haseeb Akhtar from Ericsson. Today we're going to talk about the implications of 5G and edge computing in OpenStack. There has been a lot of discussion about edge computing over the last few days, and there is also a lot of siloed work happening across multiple companies and multiple open source projects. So we'd like to bring the perspective of what the telco world is thinking about when it comes to edge computing: what are the use cases? I just came from the working session where there was a discussion about what the use cases are going to look like, so we'd like to present what we think those use cases are and what we think about the solution, and then we'll take questions at the end. Edge computing is the next era in computing. We talked about the centralized cloud, then we talked about clouds for CDN, which bring a content delivery network closer to the edge. This talk is about the use cases in that area: what needs to be done for edge computing from an OpenStack perspective, and how we satisfy the use cases defined in 5G and other areas. "The best way to predict the future is to create it." When I did a Google search I found this quote, and it impressed me, because when we talk about something, it's better to create it so that people can envision what needs to be done from a technology perspective. So this slide presents the use cases related to telco, and I'll start with the IoT part. There's a lot of innovation happening in IoT, and a lot of companies have already started supporting it.
So where does IoT stand? There are connected cars, and sensors are now being installed everywhere: in homes, in stadiums, even in agricultural fields. They collect a lot of information, and that information needs to be processed quickly and handed back to the application, so the application can make decisions based on that data. That is primarily what IoT is looking for. The other use case is AR/VR. There's a lot of innovation happening there; everybody knows about the glasses people use for augmented reality, and virtual reality is really picking up too. AR/VR in particular requires very fast video processing; otherwise people see jitter when the video is sent to the device, and you see a lot of jitter when we try to process it in a centralized cloud. A typical public cloud is deployed in maybe 25 locations, some up to 30, but not in the range of 2,000 or 3,000 locations. These AR/VR applications, especially image processing, for example face recognition in an AR application, need quick processing at the edge, and a centralized cloud will not satisfy that need. The other use case we see in telco is the virtualized mobile network.
Today there is a set of boxes installed at the cell tower doing whatever processing is needed to support the 4G LTE network; with 5G, those are becoming virtualized applications. So this is an innovation happening in the telco industry: taking a hardware platform to a virtualized platform. Over the last couple of years we've heard about NFV applications, which take networking applications like firewalls, load balancers, and routers and convert them to the virtualized world. Now the innovation is happening in 5G and the RAN area, virtualizing those functions. This also happens close to the edge; in this case the edge is the cell tower itself. The processing has to happen close to the edge so it is fast enough for the application to work. The other aspect is equipment installed at customer locations, for example Wi-Fi provided in a stadium or in a customer's home. These wireline access devices are also getting virtualized and containerized, and they require quick processing that cannot come from a distant data center. So the concept is really bringing the cloud very close to the customer, very close to the end user. That is what edge computing is about. The term itself is convoluted because there is no single definition everybody uses: some people call it distributed cloud, some call it fog computing. The bottom line is taking computing power closer to the customer; that's what edge computing means in this context. So how many of these edge sites do we really have to install? That really depends on the application.
If the application can withstand the network latency of processing in a centralized cloud, then it can be hosted in the centralized cloud, as we stated. But if the application really needs quick processing, both in network latency and in the application itself, then it has to be installed very close to the edge. A lot of studies have been done; I've seen professors at universities studying how close the edge has to be, but work still needs to be done, because there's no common definition yet. Some people say it has to be installed at the cell tower; some say in the home. So where it gets installed has to remain flexible. That's why I'm showing 2,000-plus locations here, but this isn't a limit; depending on the use case it could be 10,000-plus locations. This slide is about what we have with a typical OpenStack cloud deployment and what we really need for the edge. A typical OpenStack cloud supports x86 compute and on the order of 50-plus sites, with a local control plane, meaning OpenStack installed locally in the data center, plus the virtualization layer needed to virtualize the compute hosts. It's a very generic operating system, very generic KVM, everything put in place is very generic, and that will not cut it when we actually go to the edge. The edge has to be right and light; that is the key for the edge. So what does that really mean? "Right" means it is not just going to be x86 processors.
It will be any type of processor: it could be FPGA, DSP, any type of processor. Why? Because when we install something in a customer's home or at a cell tower, it will not be x86 in all cases. We talked about 2,000-plus locations, but from a generic use case perspective it could actually need up to 10,000-plus locations. Zero-touch provisioning is also one of the key aspects of the edge. Why do we need it? In the data center there is usually an operations team; mostly everything is automated, but the operations team can still do some manual work or troubleshoot something. At the edge, if it's installed in a stadium or at 10,000-plus cell towers, it's not easy to have a person go out every day to troubleshoot or to install the application. We know from an OpenStack perspective that there is a lot of complexity involved in installing OpenStack, and here we really need to think about zero-touch provisioning, because, especially in telco, we cannot roll a truck to the cell tower every time to troubleshoot or install something. That's why zero-touch provisioning is key. What are the options? It needs to work both ways. One option is regional support, meaning OpenStack is regional (Haseeb is going to talk more about that) and controls compute hosts distributed across multiple locations. The other option is a local control plane, meaning a very thin OpenStack installed locally, controlling the compute hosts in that location.
Why do we need these two options? The reason is that there are locations that can only accommodate a couple of servers, and there is no way to put a heavy control plane there. For example, at a cell tower, if you put in two compute nodes, there's no way to add another two or three servers for a control plane; it has to be very thin, and in that case regional control will be helpful. But if there's a location where you really want high availability and can accommodate two more servers, for example a central office, then you can have a local control plane there. The key is less compute overhead. We also need a thin operating system supporting the virtualization layer for both containers and VMs; that is also key for the edge. This slide is a summary of the use cases we're seeing; I just explained this in words, and this is a picture of the same thing. The first bucket is the virtualized mobile network: virtual RAN elements and virtual 5G components. The second bucket is wireline access: virtualized wireline access and virtualized network apps. Then we talked about AR/VR and drones, where high-bandwidth media content is key, because they do image processing, face recognition, and so on; they really need that high-bandwidth media content. There's also content delivery at the edge. The way CDN is set up right now, it's maybe 60 locations across the globe, but that's not close enough, especially as 5G and new applications like AR/VR come in; you need content delivery processing and caching very close to the edge. The other part is IoT: IoT and fog gateways need the edge, and security management at the edge.
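Coming back to the two control-plane options, the sizing rule described above can be sketched as a small decision function. This is an illustrative assumption, not something from the talk; the threshold and names are placeholders.

```python
# Illustrative sketch (not from the talk): choosing between a regional
# control plane and a thin local one, based on how many servers a site
# can host. The threshold and mode names are placeholder assumptions.

def control_plane_mode(servers_at_site: int, needs_local_ha: bool = False) -> str:
    """Pick a control-plane placement for an edge site.

    A cell tower with room for only a couple of compute nodes cannot
    spare servers for a control plane, so it is managed regionally.
    A central office with spare capacity can run a thin local control
    plane for higher availability.
    """
    CONTROL_PLANE_SERVERS = 3  # assumed footprint of a thin HA control plane
    if servers_at_site > CONTROL_PLANE_SERVERS and needs_local_ha:
        return "local-thin-control-plane"
    return "regional-control-plane"

print(control_plane_mode(2))                       # a cell tower site
print(control_plane_mode(8, needs_local_ha=True))  # a central office
```

The point of the sketch is simply that the choice is driven by the site's footprint, not by the application: the same slimmed-down OpenStack should support either mode.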
And that is another interesting use case, because now the application processing is happening at the edge, so there will be a lot of emerging security technology; you need to make sure your application is not getting compromised. That is also one of the key use cases. This slide makes the point that, as I stated, it is not one solution that fits all, but it could be one OpenStack that fits all. Where you deploy it really varies depending on the application. In the current use cases we see from the telco world, the edge could also be deployed in a data center: if that data center is close to the user, then the edge can live there. We should not think data centers are completely eliminated from the edge; that is the wrong assumption. The real point of the edge is to be closer to the customer, and if the data center is close to the customer, then the edge can be implemented in the data center. The second place is central offices. For people who know the telco world, the cell towers are all connected to central offices, and that is where the 5G RAN virtual network functions are being managed and installed. The reason I say "current" use cases is that this is something we need now, not in five or ten years; it's happening now. Customer premises are also a very important use case: having a provider edge at the customer's location, something like a universal CPE, also requires the edge at the customer premises. The near-future use case (I say near future because it is also happening, maybe not this year, but very soon) is having the edge implemented at the cell towers themselves. Okay.
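The deployment tiers just described (customer premises, cell tower, central office, data center) can be sketched as a simple latency-driven placement rule. The per-tier round-trip numbers below are rough placeholder assumptions for illustration, not figures from the talk.

```python
# Illustrative sketch: pick the farthest (typically cheapest) deployment
# tier that still satisfies an application's latency budget. The per-tier
# round-trip latencies are placeholder assumptions, not measured values.

TIERS = [  # ordered nearest to farthest from the user
    ("customer-premises", 1.0),   # assumed round-trip ms
    ("cell-tower",        3.0),
    ("central-office",    8.0),
    ("data-center",      40.0),
]

def place_workload(max_rtt_ms: float) -> str:
    """Return the farthest tier whose round trip fits the budget."""
    chosen = None
    for name, rtt in TIERS:
        if rtt <= max_rtt_ms:
            chosen = name  # farther tiers are cheaper, so keep the last fit
    if chosen is None:
        raise ValueError("no tier meets this latency budget")
    return chosen

print(place_workload(5.0))    # AR/VR-class budget -> cell-tower
print(place_workload(100.0))  # latency-tolerant -> data-center
```

In practice an orchestrator would weigh cost and capacity alongside latency, but the shape of the decision is the same: the application's need, not a fixed topology, determines where the edge is.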
Thanks, Kandan. Let's look at some of the business drivers for the edge cloud. Basically, with the proliferation of 5G technology and the IoT devices Kandan mentioned, the user experience is demanding higher-bandwidth content at the edge, and with some of the AR/VR applications, the demand for very low-latency edge processing is also coming up. At the same time, when you have the control plane as well as applications running together at the edge, there is a need for security in terms of isolation: if content and control share the same virtualized resources, the security aspect has to be looked after in the proper manner. Those are the drivers from the user perspective. Operators, in turn, will look to deliver the edge cloud based on key incentives from the revenue-generation perspective. There will be new services opportunities coming with IoT; it's already generating new business, and if more capabilities become available at the edge, that should drive further services and businesses. The same goes for AR and VR: once the latency requirement is delivered at the edge, AR and VR technologies are supposed to grow in a much bigger way than what we've seen with Pokémon Go and the like. On the other side of the equation, we have to reduce cost, or at least invest in ways that are more cost-effective. Virtualization of the 5G RAN and core, for example, is an incentive for cost reduction that can be pushed to the edge. A lot of these components today sit either at the cell site, at the RAN aggregation location, or in the core.
Many of those can be virtualized and put onto a cost-optimized hardware platform, and that creates the case for cost reduction. And of course there's the thin cloud at the edge which, as Kandan said, has to be right and light, as he coined the term. It's not necessarily a heavy data-center type of cloud; the form factor has to be smaller than it is today, which again makes the case for cost reduction. Coupled together, cost reduction and higher revenue will drive the edge use cases. This slide is an example of how a distributed workload could help in telco-type applications. The green line shows the normal scenario: you run an application today through the 4G network, or even the 5G network with virtualized RAN components sitting in a distributed data center toward the edge, connect back to the EPC (Evolved Packet Core) in the centralized data center, and then out to the internet, say for a video application. A significant amount of latency is created if you go through the edge and then traverse all the way to the centralized data center, and that does not fulfill the low-latency requirement we have for many of the applications driving the edge use cases, namely IoT and so forth. In the red-line data path, we have local applications running on top of the virtualized EPC and the virtualized RAN at the edge. Assuming we have the small form factor and a lighter version of the operating system and so forth, we would basically reduce the delay into the three-to-four-millisecond range, which would be acceptable for those applications to be meaningful.
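To see why a centralized path struggles to hit a three-to-four-millisecond budget, here is a back-of-the-envelope propagation estimate. The distances, hop counts, and the roughly 200 km-per-millisecond fiber propagation speed are illustrative assumptions, not numbers from the talk.

```python
# Back-of-the-envelope sketch: round-trip delay over fiber. Light in
# fiber covers roughly 200 km per millisecond; the distances and per-hop
# processing costs below are illustrative assumptions.

FIBER_KM_PER_MS = 200.0

def rtt_ms(distance_km: float, per_hop_ms: float = 0.5, hops: int = 4) -> float:
    """Round-trip time: propagation both ways plus per-hop processing."""
    propagation = 2 * distance_km / FIBER_KM_PER_MS
    return propagation + hops * per_hop_ms

# Green line: user -> edge -> centralized EPC assumed ~800 km away
central = rtt_ms(800)       # 8 ms propagation + 2 ms processing = 10 ms
# Red line: local breakout, application served ~20 km from the user
edge = rtt_ms(20, hops=2)   # 0.2 ms propagation + 1 ms processing = 1.2 ms

print(f"centralized: {central:.1f} ms, edge breakout: {edge:.1f} ms")
```

Even before queuing and radio-side delay, propagation alone over a few hundred kilometers eats the whole budget, which is why the red-line local breakout is the only path that leaves headroom for the application.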
Now looking at the technical aspects: how do you really architect the edge components? Central control versus decentralized control? How much do you put at the edge, how much into regions, and how much into the center? We have to go through this right-and-light balancing act, and as Kandan mentioned, the industry has not really figured out the right way of doing it. What we have to do is keep an open mind and keep a flexible option to deploy either at the edge or at a regional site. Following that, here is a somewhat high-level architectural view of what we think could be a proposed solution. One option is to slim OpenStack down. Today's OpenStack is somewhat heavy; it has a lot of components. We would slim it down into a version that does only what is necessary for the edge, in terms of Nova, Neutron, and even the Keystone aspects: a strictly necessary, slimmed-down version of OpenStack deployed at the edge and controlled by, or connected with, a centralized OpenStack. The next option is to control the edge using a regional OpenStack. That would be similar to the OpenStack we have today, but again a limited, scoped-down version applicable only to supporting the edge use cases we've been discussing, namely IoT, AR/VR, virtualized RAN, and wireline access. These are a couple of variations of the architecture that we think the industry, and all of us as a community, should look at to support the edge cloud use cases.

So it really comes down to this: we've got OpenStack in every location, and how do we orchestrate across 10,000-plus locations? When you orchestrate across 10,000-plus locations, you need some federation of authentication, and you have to ask how you distribute images to 10,000 locations and how you upgrade them. The use case divides into two parts: the infrastructure layer, which needs to support VMs and containers, and the upper layer, which needs to support design, orchestration, control, policy, and analytics. Today in the AT&T Integrated Cloud we use ECOMP, and we open-sourced ECOMP; that is ONAP in this picture, on the left-hand side. It's open source, with a lot of contribution coming from the community, and when ONAP and OpenStack work together it is really powerful. ONAP sits one layer above the locations, with OpenStack on the bottom layer, which could be distributed to 10,000-plus locations. We still need to do the work we talked about in OpenStack, make it right and light, and then ONAP on top can provide the orchestration across multiple locations. What can be done with this filters down into five use cases. First, ONAP can talk to OpenStack through a Heat template. Why a Heat template instead of calling a lot of APIs? You send one Heat template that carries the information about the compute, the network, and the storage; you package everything into one template, send it to OpenStack, and it creates the VNF or the VM you need. Second, it can support multiple OpenStack regions. The third is supporting a nested VNF setup, and this is also very critical.
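As an illustration of the "one template" idea, here is a minimal Heat (HOT) template that packages compute, network, and storage in a single document. All names, flavors, and sizes are placeholder assumptions, not values from the talk.

```yaml
# Minimal illustrative HOT template: one document describing compute,
# network, and storage together (all names and sizes are placeholders).
heat_template_version: 2016-10-14

resources:
  vnf_port:                      # network: a port on an existing network
    type: OS::Neutron::Port
    properties:
      network: edge-mgmt-net

  vnf_volume:                    # storage: a 10 GB block volume
    type: OS::Cinder::Volume
    properties:
      size: 10

  vnf_server:                    # compute: the VNF virtual machine
    type: OS::Nova::Server
    properties:
      name: edge-vnf-01
      image: vnf-image
      flavor: m1.small
      networks:
        - port: { get_resource: vnf_port }

  vnf_volume_attach:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: vnf_server }
      volume_id: { get_resource: vnf_volume }
```

The orchestrator hands a template like this to each site's Heat API instead of driving Nova, Neutron, and Cinder call by call, which is what makes a single workflow repeatable across thousands of sites.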
What do we mean by a nested VNF setup? In the previous picture, Haseeb talked about some applications staying at the edge and some staying in the data center. As the edge evolves and gets deployed in multiple locations, the way it's going to work is this: a tenant needs a workload, say he's a cell phone user or an AR/VR user, and when that workload gets spun up, some of it lands at the edge, some at a central office, and some in the data center. Why do we need to spread it out? Because there is no way to accommodate an enormous number of workloads at the edge: if we deploy at a cell tower or in a customer's home, there is no way to put a thousand compute nodes there; only a certain level of packaging can be done with the compute. So there has to be a distribution across multiple locations, and then it comes to the challenge of scheduling: where do you place the workload? On the edge cloud, in the central office, or in the data center? That real-time decision is something ONAP can actually do, but there are also open standard APIs that need to be developed here. What do I mean by open standard APIs? When a user comes in, he should not have to worry about whether he is on telco number one, telco number two, or public provider number two. There needs to be an open API, and that is where ONAP and OpenStack can help: one set of APIs that can be called regardless of provider, as long as the user wants to create a resource and there is a financial arrangement between the user and the provider; the APIs are exactly the same whether you go to this provider or another. That is something the community has to develop, because it would give the flexibility of placing the workload anywhere, across any provider.

The fourth aspect is supporting complex networking. How complex is the networking going to be for the edge? This is the multi-million-dollar question right now: do I support SR-IOV, do I support overlays, do I support all sorts of SDN configuration? To me the answer is really yes. The reason is that at the end of the day, the applications, networking applications or any other type, need performance, security, and all the characteristics we support in the data center; they are still needed in the edge cloud. So it does need that complex networking. How do we solve it? We still need it thin and right, while supporting all these use cases, and this is where the balancing act Haseeb was talking about comes in: we need to see which use cases are really needed, how thin it can be while supporting those use cases at the edge, and also stretch them across other data centers so the applications have a way of communicating. A good example is AR/VR: say a cop wearing AR glasses looks at a person, and the application immediately does face recognition and gets the person's identity, but it has to compare against millions of records that cannot all be brought to the edge, so some selection algorithm also needs to be worked out. It's really mutual work: what the application has to do at the edge as well as what the infrastructure has to do. Here we've focused mostly on the infrastructure part, what ONAP has to do and what OpenStack has to do, but there is also application-level work needed to support the edge. The fifth one is policy-driven, metadata-driven placement; this is what I mentioned with the open API. The expectation of this API is that placement is policy driven: which provider do I place onto? There needs to be flexibility for the tenant to say, "I want to be on this, and I'm ready to pay this much," so he gets placed into that particular cloud. So that open API could provide policy-driven, metadata-driven placement of the workload.

This slide summarizes, for the OpenStack community, where we are today with OpenStack and where we need to take it for the edge computing world. As I stated, currently we support x86 servers, but here we have to support x86 servers and peripherals, where the peripherals have different types of processors, not x86 in all cases; virtual RAN is one good example. Then control-plane sizing: we talked about how big the sizing has to be. Thin means we have to get rid of some of the things we usually support in the data center, the heavy layers of PaaS, heavy layers of anything-as-a-service; it has to be thinned down into an opinionated set of specific use cases supported at the edge, because that's what keeps the control-plane sizing small. We also talked about whether to place OpenStack regionally or in the data center itself; it really depends on the application's needs, but there has to be flexibility when we develop OpenStack to support the edge. The next thing is the complex install and upgrade process in current OpenStack. This is one of the key issues: everybody is using OpenStack, and you may be aware that when we install OpenStack and do upgrades, there are pain points. A lot of people, including us, have solved some of those with automation, but pain points remain, and at the edge, OpenStack really has to provide zero-touch provisioning; there is no way this edge cloud and OpenStack will be deployed to 10,000 locations without proper automation and zero-touch provisioning. The other thing is containers: we have both the VM and container worlds playing together now, and in my session this morning I talked about how we support containers using OpenStack. In this world there's going to be a play of both. People ask every time: at the edge, why do we need a VM? There is a specific need for it, because security plays a key role and containers are still catching up on security; some applications, for example those related to government, cannot run in containers today under specific government guidelines, and adoption is still underway, so industry-wide work still needs to be done. That's why both containers and VMs need to be supported at the edge. Software availability is another key. A usual OpenStack deployment, even with all sorts of redundancy in it, gives about three nines of availability in a single-install OpenStack. In this case, when we deploy
at the edge, as I stated, there is no way to send a person to fix something, so we really need higher availability, and for higher availability there need to be fewer boxes and a more compact application scenario to support it. Then we talked about deployments at scale: we are not talking about 100 data centers anymore, or 200 or 300; we are really talking about 10,000-plus locations, and that also needs to be considered. Okay, we can take a few questions, given the timing; people who want to ask questions, please go to the microphone.

Can you hear me? Okay, great, thank you. What is the relationship of platform as a service with this edge cloud?

Platform as a service: not that many use cases will require a full PaaS to be supported at the edge. But as a user, when I come in, I need a specific VM or a specific container with, let's say, a database in order to do my video processing. As a customer I don't want to have to install everything after getting the VM; then I would lose the efficiency of having the edge close to me. So when we create the VM, it has to have the packages already installed, and that's where PaaS comes in. It's really a mixed use case, but we also have to keep in mind that it has to be thin, so it can be quickly created and quickly processed at the edge.

Hello, my question is about CORD. The CORD project takes a different approach to the same problem, and it contradicts the paradigm you just explained for edge computing. Do you see CORD and this edge computing work being integrated together, or are they different strategies?

You're talking about CORD, the CORD project? Yes. As I stated, everybody is trying to attack the edge; the thing is, it's all a siloed situation right now, and each open source entity is trying to focus on a specific area, so some connection is needed. As far as I understand, CORD also addresses some of these aspects: they focus on central office applications running as virtual machines, or 5G applications running as virtual machines, but for the infrastructure they still rely on containers and OpenStack to support it. What we talked about, slimming down OpenStack and running it with regional control or locally as a small footprint, has not been addressed by CORD as far as I know.

Can I add to that? CORD has the wireline part today, for example the virtual OLT; there's a demo that has been going on for some time that the industry has seen. But I think the area where the industry needs to come together is what Kandan is mentioning about zero-touch provisioning. CORD is using XOS today, and one of the slides Kandan shared has ONAP, so I think the industry is converging toward automated provisioning that would help the edge use cases.

Thank you. On that, there's M-CORD for mobile as well, which addresses similar things. My question is: do you foresee multi-tenancy in your blueprint for this topology, such as MVNOs being hosted as tenants, or is this purely just AT&T?

No, this is not just about AT&T, and this is not just an AT&T use case. I'm presenting these as telco use cases, and they could apply elsewhere too. A gentleman in the previous working session talked about deploying in every store, for example; there are multiple stores in different locations where you can install the same thing. So the use cases are not limited to telco itself. If OpenStack resolves these use cases, we'd like to see them resolved for everyone, so it's definitely not limited to AT&T or the telco world; we see this as a generic set of use cases. The reason we are sharing is to show that these are the use cases from the telco world, but definitely other use cases are involved as well.

My point was about using a hypervisor versus just bare-metal workloads, for single-tenant or multi-tenant purposes. I was in that session; there are different use cases, but in this case, for what you're proposing, with at least reasonable power at the edge to deliver these services, it's still quite expensive equipment to invest in at multiple thousands of locations. So if this is going to support multiple tenants, will it be hypervisors, or just bare metal running only AT&T workloads? That was the purpose of my question.

Sure, and thanks for the clarification. It has to be multi-tenant. Even in the data center today, when applications get deployed, they are deployed under different tenancies. For example, with a firewall, the CSO department does the firewall work while another department does something else, and they are given separate tenancies. So from a user perspective, and also from a telco application perspective, you do need multi-tenancy; there is no way to dedicate a specific server to a specific application anymore. That is the notion of the cloud itself, so multi-tenancy needs to be supported by default.

You talked a lot about the quantity of data centers, and many more data centers at the edge, but how do you see the quantity of servers? Because I suspect at the edge they will be very small and the data centers very big, so the overall spread of the
the server will be more in the data center more in the edge global point of view thank you I think it is really dependent upon the the provider and the space power cooling and other aspect of it from a cell tower or from you know deploying something into a customer location they are usually be thin you know one or two compute or three compute that is the maximum scale but when we talked about you know like the next hop which is like a central office or a wire center they could usually accommodate you know like hundreds of servers in them and if you really need to deploy a thousand servers for example for some processing you know there is no way to deploy in a cell tower unless until there is a multiple cell tower or a multiple location so edge by default it is a less number of compute but that less number what is the exact number it is really dependent upon the provider how much they can accommodate and package into that specific location I would like to just add that I think one of the thing that I mentioned was the the revenue and the cost model is what really going to drive that how many servers are you going to put in what location and how many location this is really dependent on or it will be dependent on what use cases are having more uptake in the market and what services that operators are rolling out right so those are the things that are still unknown but but I guess what we know today that there will have to be a smaller form factor in many of the cases especially on the V-RAN and the core virtualized core for the 5G cases will have to have a much reduced footprint right right thank you everyone we really appreciate for you guys for joining us thank you
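The server-spread question from that final exchange can be made concrete with a back-of-envelope sketch. All numbers below are illustrative assumptions, not figures from the talk, beyond its rough ranges: "10,000-plus locations" with one to three compute nodes each, and central offices holding "hundreds of servers".

```python
# Back-of-envelope comparison of aggregate server counts at the edge
# versus in larger central sites. Every constant here is an assumed,
# illustrative value chosen to match the rough ranges in the talk.

EDGE_SITES = 10_000               # "10,000-plus locations"
SERVERS_PER_EDGE_SITE = 2         # "one, two, or three compute nodes"

CENTRAL_OFFICES = 300             # assumed count of wire centers / COs
SERVERS_PER_CENTRAL_OFFICE = 200  # assumed; "hundreds of servers"

edge_total = EDGE_SITES * SERVERS_PER_EDGE_SITE
central_total = CENTRAL_OFFICES * SERVERS_PER_CENTRAL_OFFICE

print(f"edge servers:    {edge_total}")
print(f"central servers: {central_total}")
```

Under these assumed numbers the central sites still hold more servers in aggregate, even though edge sites vastly outnumber them, which matches the answer given: many thin edge sites, with bulk capacity remaining at central offices, and the real split driven by each provider's cost model.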