My name is Roy, and we'll be talking about edge computing and how edge computing is used with an OpenStack cloud to host multiple applications. Here I have just given the example of IoT, but you can host many kinds of applications: IoT applications, 5G-enabled applications, AR and VR applications, any kind of application. I'll be collaborating with Nilesh; as a very short bio, we are OpenStack engineers.

So this is what we have on our plate today. For those who are aware of cloud computing but not of edge computing, we start with a quick intro to edge computing and the distributed compute network with OpenStack. Then the spine-leaf topology, which is a network topology, and the director-based deployment method, that is, how we will deploy OpenStack. Then how to scale your edge deployment, which is basically your OpenStack cloud, then potential failures and limitations, and at the end we have a very interesting demo of edge computing, so stay tuned.

You have all heard about cloud computing, right? So what is a cloud? A cloud is nothing but high-configuration servers sitting at some remote place, which you access through some kind of application, maybe a browser, paying as per your usage and scaling as you need. That is cloud computing. But what exactly is edge computing? Any guesses? Okay. So edge computing is useful whenever we have production servers deployed at different remote locations and it is difficult to manage them from a centralized location. Edge computing is nothing but a distributed computing environment, and here we are managing that environment using an OpenStack cloud. We can then use this environment to host, as I said, different kinds of applications: IoT, VR, AR and so on.
So, DCN with OpenStack; DCN is nothing but a distributed compute network. What you see here is a block diagram of the OpenStack setup. In the upper part, the first block is the undercloud, then we have the container registry, controller nodes, compute nodes and volumes. That first block is nothing but your OpenStack environment. The part below is our edge nodes, that is, the edge computing side.

First I'll focus on the upper part, for those who are aware of the OpenStack architecture: OpenStack has its undercloud node, overcloud nodes and storage nodes; on the overcloud nodes you host your VMs, and on those VMs you host your applications. So here we have the undercloud, a container registry for storing images, and controller nodes, compute nodes and volumes. The compute nodes will use local ephemeral storage or whatever kind of storage you have available.

Then comes our edge site, that is, the DCN site. In between we have L3 routing. The L3 routing is nothing but your MPLS network, where your layer-3 devices, the switches and routers, sit, and those switches and routers connect to our compute nodes, which reside at remote locations. So in this diagram I have my OpenStack environment at the top, and it is connected to my edge nodes: compute 1, compute 2 and compute 3.

Talking about the topology: there are many different network topologies, like star, hub, ring and mesh. So what kind of topology is useful for hosting our edge computing? If you see, I have spine switches at the top; we have three spine switches. I was talking about L3 routing on the previous slide, and that L3 routing happens there, at the spine switches. Under the spine switches come the leaf switches in the racks.
Consider each of these three columns as a single server rack: this is one rack, this is another, and here is the third. In each rack I have multiple nodes: undercloud, controller 1, controller 2, and the rest of the nodes. Each rack has a similar set of nodes. This is how the spine-leaf topology works: the spine switch connects to the leaf switch located in each rack, and that leaf switch connects to all the compute, controller and undercloud nodes in the rack.

If you are aware of how OpenStack deployment works, there are multiple methods for installing or setting up OpenStack. Here we are considering the director-based deployment method. The director-based deployment method is nothing but a set of YAML files. The YAML files describe what you need to deploy: what kind of networks you want to have, what kind of services you need in your OpenStack environment. Using these YAML files we run the director-based deployment.

In the first slides I said OpenStack has two parts, the undercloud and the overcloud. Considering this edge scenario, what do we need to change on our undercloud node? This is a snippet of a JSON file. As I said, there are multiple nodes in the OpenStack environment, and I need to define parameters for those nodes so that they are visible to my OpenStack cloud. This is the JSON file where I mention those parameters; here I have put in the info for just one of the nodes: the MAC address, the name of the node, the number of CPUs, the memory, the physical machine type, and the power-management username, password and address. The last important thing is the deploy interface, which is set to direct. This is the first change I need to make on my undercloud to make this environment work, and the second thing is the Swift URL.
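The node-registration JSON described above is commonly known as instackenv.json in director-based deployments. A minimal sketch for one node might look like the following; all names, addresses, credentials and sizes are placeholder assumptions, not values from the talk, and the exact supported fields (in particular whether the deploy interface can be set per node here) should be checked against your release:

```json
{
  "nodes": [
    {
      "name": "edge-compute-0",
      "ports": [{"address": "52:54:00:aa:bb:cc"}],
      "cpu": "4",
      "memory": "8192",
      "disk": "40",
      "arch": "x86_64",
      "pm_type": "ipmi",
      "pm_user": "admin",
      "pm_password": "secret",
      "pm_addr": "192.168.24.10",
      "deploy_interface": "direct"
    }
  ]
}
```

A file like this is typically imported on the undercloud with `openstack overcloud node import instackenv.json` before introspection.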
Swift is nothing but the object storage in OpenStack. This object storage is used to store images and other data, and it runs on the undercloud. If I want to transfer data from my edge nodes back to the central site, that is done using the Swift URL configuration.

So that was about the undercloud. What about the overcloud nodes? On the overcloud we can have multiple edge nodes, and this is just a snippet for one of them. There is a default role file for each kind of node; if you look at the location, it is /usr/share/openstack-tripleo-heat-templates/roles/Compute.yaml. What does this YAML file do? It holds the whole definition of the edge node role: everything that node needs to have is described in this file. Likewise you can have multiple such files and just pass them as parameters while deploying your OpenStack cloud.

I will just read out what we have here. The role is ComputeAZ1; AZ is nothing but an availability zone, so zone one. Then the name of the compute node, and a description if you want one, say "basic compute node role". Then the count; right now I have only a single node, so the count is one. Then the networks it will use: the internal API network, which is used for communication between your compute nodes and the controller nodes, then the tenant network, then the storage network. Then you have your hostname format, then the role parameter defaults, the tuned profile (virtual-host) and the disable-upgrade-deployment flag. Those are optional parameters.

As I said, this specific edge computing setup is being used by telecommunication companies for hosting their applications. And if you take any telecommunication company as an example, they are not working in a single country.
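The custom role file read out above can be sketched roughly as follows. This follows the tripleo-heat-templates roles format; the attribute names and values are my reconstruction of what the speaker describes, so verify them against the Compute.yaml shipped with your release:

```yaml
# roles_data_az1.yaml - sketch of a single edge compute role
- name: ComputeAZ1
  description: >
    Basic compute node role for edge availability zone 1
  CountDefault: 1              # only a single edge node for now
  networks:
    - InternalApi              # compute <-> controller communication
    - Tenant                   # workload (VM) traffic
    - Storage                  # storage traffic
  HostnameFormatDefault: '%stackname%-computeaz1-%index%'
  RoleParametersDefault:
    TunedProfileName: virtual-host
  disable_upgrade_deployment: True
```

A file like this is passed to the deploy command with `-r roles_data_az1.yaml`, alongside the network and environment files.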
They are working in multiple regions, all over the globe. So how do the remote sites communicate with the central one? If you see here, I will be deploying my cloud at the central site. This is the OpenStack cloud, with the director node and controller node, and we can have controller, compute as well as storage nodes. This is my central site, where my OpenStack environment is hosted, and these three are my edge nodes. On the edge nodes in the centre of the diagram you can run any kind of application on any kind of operating system. The two sides are connected via my MPLS network, that is, the backhaul network, where the physical routers, switches and all those devices live. And this is the application layer, where we can host any kind of application on your edge nodes.

Coming to scaling: let's say my business grows and I need some more resources, some more servers at my edge site. How does scaling of this edge environment work? Here we scale the compute nodes one by one, or in groups or batches. The scalability of the compute nodes is limited by what the undercloud can manage; it all depends on resources. It depends on how much load your undercloud can handle while controlling all your edge devices, so your undercloud should be capable of managing all of them. As far as I know, up to 150 physical edge devices have been tested successfully in a production environment. And whenever you do scaling, it is an iterative job, so it's better to scale in batches or one by one, so that if some failure happens it does not disturb your entire environment. That's how scaling works with the compute nodes.

Now, planning for potential failures: let's say I'm deploying this edge computing environment. How should I plan to deal with failures in my environment?
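On the scaling point above: in a director-based deployment, scaling a role out usually comes down to registering the new node and raising the role's count in an environment file before re-running the deploy command. A hedged sketch, assuming a role named ComputeAZ1 (the `<RoleName>Count` parameter naming is the usual tripleo-heat-templates convention):

```yaml
# scale_az1.yaml - passed with -e on the next `openstack overcloud deploy` run
parameter_defaults:
  # bump from 1 to 2 to add one more edge compute node in this zone;
  # scaling in small batches keeps a failed node from disturbing the rest
  ComputeAZ1Count: 2
```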
Regarding controller node failures: this is handled by HA, high-availability management. I'll have three controller nodes, so if any one of them fails, the other two can manage the load, and there is no need to worry. For the compute nodes, though, there is a need to worry, because my applications are hosted on the compute nodes; if any one of the compute nodes goes down, I need to care about it. As for the storage nodes, you can use local storage or some external storage as well. These are the important things we need to consider when planning for potential failures.

And limitations. Edge computing being a fairly new technology, and being used with the OpenStack cloud for the first time, there are some limitations, or drawbacks we can say. The very first one: we can deploy only compute nodes at the edge site. If you remember the second slide, I showed three edge nodes, all compute nodes; I cannot have a controller node, director node or storage node at my edge site, only compute nodes. Then network latency: whenever you host your application on the edge compute nodes, the network latency should not be more than 100 milliseconds. Then network dropouts, and site-specific networking: for the edge nodes the network setup should be consistent, not different per site.

So now we'll see the demo. Good afternoon, guys. Thanks, Rohit. My friend Rohit just talked about the theoretical part; I will focus on the practical part, how we can deploy it, and at the end how you can deploy your IoT application, or any edge application, onto this environment. This demo is about OpenStack edge computing using the DCN technology, that is, distributed compute networks or nodes. When I say distributed compute nodes, it means that my compute nodes are located at the remote site.
And at the remote site there is no controller and no undercloud node. You can see that at the central site I have the undercloud, controller and compute; however, at the remote site there is only one compute node. So my cost goes down: I don't have to set up another OpenStack environment at the edge site. Only the processing of the application is carried out by the remote compute node, and the spine manages the L3 routing between the two different networks. That is the basic, minimum configuration for an OpenStack edge setup using distributed compute networks.

The next part is setting up the overcloud nodes, how I can deploy them. Once my undercloud is ready, on the undercloud I have to mention two important things. The first is the subnets. As I am using only two subnets, the central site and the remote site, I have just defined leaf0 and leaf1. If I had multiple subnets or multiple remote sites, then I would have to add all those subnets there. For leaf0 I am using the 24.0 CIDR; however, on the leaf1 side I am using the 34.0 CIDR. Two different networks, so I definitely need L3 routing between them; that's why we are using the spine for L3 routing between the two networks. Another important point is that whenever we set up DCN or edge computing, we should have a DHCP relay between these two networks, so that when the remote compute nodes want to set up networking between the central and remote sites, the central site can pass the network details to the remote side.

By default the nodes work on the control-plane network; you can see that here the physical network of each device is set by default to the control-plane network. As my remote server is on a different subnet, I need to change that: the physical network is changed to leaf1.
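The routed spine-leaf subnets mentioned above are declared in undercloud.conf. A minimal sketch with two leaves follows; the option names are the standard routed-networks undercloud.conf options, while the concrete addresses are illustrative assumptions matching the 24.0 and 34.0 networks from the talk:

```ini
[DEFAULT]
# declare both routed subnets; the undercloud itself sits on leaf0
subnets = leaf0,leaf1
local_subnet = leaf0

[leaf0]
cidr = 192.168.24.0/24
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.24
inspection_iprange = 192.168.24.100,192.168.24.120
gateway = 192.168.24.1
masquerade = false

[leaf1]
cidr = 192.168.34.0/24
dhcp_start = 192.168.34.5
dhcp_end = 192.168.34.24
inspection_iprange = 192.168.34.100,192.168.34.120
gateway = 192.168.34.1
masquerade = false
```

Because leaf1 is a different L3 network, the intermediate router needs a DHCP relay pointing back at the undercloud so remote nodes can PXE-boot and receive their network details.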
The reason is the different CIDR subnets, the 24.0 network and the 34.0 network. Once my undercloud is set up, the next part is to deploy the overcloud nodes, and these are the template files which I am using to deploy them; the important ones are the network files, the role files and so on. Let's deploy it.

This particular topology is set up on the QEMU emulator; there is no KVM/libvirt layer, so I am not able to manage the power cycle of the nodes automatically. For that, I have written a small Python script. This script automatically manages the power of my compute nodes, or whichever nodes are involved in the deployment: it automatically turns the machines on and off and pushes the configuration. While running, the script also asks me for the absolute path of the directory where I am storing my templates.

So once I ran the script, it asked me for the absolute path of my deploy script. The script is running; it's trying to import the nodes. You can see at the bottom, on this side, that the nodes are imported. It's starting the VMs; you can see that the VMs have not started yet, but they will start. The nodes are still being introspected. Now you can see that the power status has changed from off to on and the machines are on; the machine's colour has changed, and that is the console. The introspection is going on. What it is basically doing is collecting the data we mentioned in the templates and storing it on the undercloud; once it is ready, it will push that configuration to the central nodes as well as the remote nodes. Whatever nodes are involved in the deployment all get deployed automatically by the script. It is still introspecting, and now you can see that the power status has changed back to off.
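The actual power-management script isn't shown in the talk. As an illustration only, here is a hypothetical helper in the same spirit: it maps an Ironic provisioning state to the power action such a script would take on an emulated node. The state names follow Ironic's state machine, and calling `virsh` is my assumed mechanism for QEMU-backed test nodes; none of this is the speaker's actual code.

```python
import subprocess

# Provisioning states (per Ironic's state machine) during which the node
# must be powered on so introspection/deployment can proceed.
NEEDS_POWER_ON = {"verifying", "inspecting", "deploying", "wait call-back"}


def power_action(provision_state: str) -> str:
    """Decide whether an emulated node should be started or stopped."""
    if provision_state in NEEDS_POWER_ON:
        return "start"
    return "destroy"  # virsh's term for a hard power-off


def apply_power_action(domain: str, provision_state: str, dry_run: bool = True):
    """Apply the decision via virsh (assumed backend for QEMU-based nodes).

    With dry_run=True (the default) the command is only returned, not run,
    which is handy for testing the decision logic without libvirt present.
    """
    action = power_action(provision_state)
    cmd = ["virsh", action, domain]
    if not dry_run:
        subprocess.run(cmd, check=True)
    return cmd
```

A real script would poll `openstack baremetal node list` in a loop and call `apply_power_action` whenever a node's provisioning state changes, which matches the on/off cycling visible in the demo.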
That means the operating system has been installed on the nodes, and after some time they automatically turn off. It's still introspecting. Introspection completes and the console goes away. The introspection is completed, and now it is waiting for the next call. There are several calls involved during the deployment: once a node's provision state is "available", the next state is "wait call-back", and once the script finds that the provisioning state has changed, it automatically starts the node again and pushes the next configuration.

Now you can see at the bottom that the stack creation is in progress. Whenever we run the OpenStack deployment, in the backend it creates this stack: it combines all the configuration and pushes that configuration to all the nodes. In the sidebar you can see the progress, what it is actually doing. Before step one, it collects all the information about the networks. The first part is to collect the network details; once it has all of them, it pushes those network details onto the overcloud nodes: the controllers, computes and storage nodes. At this moment my nodes are still in the off state, not powered on, because the deployment is waiting for the callback. They will start once the wait-callback signal arrives. You can see the power state is on and the provisioning state is "wait call-back"; it will move to the active state once all the deployments have been pushed to the overcloud nodes. Now it's deploying the rest of the configuration collected before step one. After deploying, the nodes turn off again; they are going off and the provisioning state changes to "active". At this moment the OpenStack deployment has only collected the information we passed in the template files; it has not been completed yet. There are five steps in total to complete the configuration.
At the very first step, it collects all the network details, software details and deployment details: which networks I have to deploy onto which overcloud nodes. For example, onto the controller node I may want only selected networks, while onto my compute nodes I want two or three networks. Let's say on the controller side I want external connectivity, but on the compute side I don't; then I create my templates accordingly.

Now you can see that we are at step three. As I said, there are five steps in total. At step three it basically does the Pacemaker configuration; Pacemaker is the component used with OpenStack that automatically manages your resources. We are at step three... and now we are at step five. At step five it basically verifies your deployment, whether the previous steps executed successfully or not. And at last you can see that the overcloud was deployed in 82 minutes.

Once OpenStack is deployed, what we do is create the VMs, so that the end customer or user can deploy their application. So in the next part I am deploying three VMs: one VM on the edge site (when I say edge site, I mean my remote site) and two VMs on the central site, and we'll see how we can access the application. You can see here I have compute-leaf1, that is my remote compute node, where the name of the VM is rd_edge and its status is active; that VM is running on my remote-location compute node. And I have another VM, rd_central, which is on compute-leaf0, on my central site; its name is rd_central and its status is active. So right now I have two VMs, one on the central site and one on the remote site, named rd_edge on the edge site and rd_central on the central site.
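Under the hood, landing a VM on the edge compute node is just an availability-zone hint in the server-create call. A hedged sketch building the Nova API request body; the names rd_edge and az1 follow the demo, while the image and flavor IDs are placeholders:

```python
import json


def server_create_body(name: str, image_ref: str, flavor_ref: str, az: str) -> str:
    """Build the JSON body for a Nova POST /v2.1/servers call that pins
    the new VM to a given availability zone (i.e. a given site)."""
    body = {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
            # The scheduler only considers compute nodes in this zone,
            # so "az1" lands the VM on the remote edge compute node.
            "availability_zone": az,
        }
    }
    return json.dumps(body)


# e.g. the demo's edge VM (image/flavor IDs are placeholder assumptions)
edge_vm = server_create_body("rd_edge", "image-uuid", "flavor-uuid", "az1")
```

The equivalent CLI form would be `openstack server create --availability-zone az1 ...`, which is what most operators would use in practice.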
The other VM is a web server, for simplification, so that I can easily navigate between the different sites and their applications. Three VMs are currently running. As I said, I have this web server for easy navigation to each application, so that I don't have to go to each site and look up the URL for each application: I have consolidated all the URLs on one single page, so from that very first page I can navigate easily. I can go to my dashboard and check my VM details and my hypervisor details; I can access anything from the dashboard. There are actually three ways to access the OpenStack environment: via the Horizon dashboard, via the CLI, and via the API as well.

So these two VMs are currently running, rd_edge and rd_central, plus the web server, which is hosted on the central site as well as on my remote site. Okay, so we have talked a lot about the deployment, and we were going to talk about how we deploy the IoT application. As my friend said, it's not just about IoT: it can be a telco application, a broadband application, a non-telco application as well, medical applications, educational applications, any application. Here, as the IoT application, we are using RADIUSdesk, which allows users to easily access the internet from their mobile devices. We have deployed two instances of the application, one on the central site and one on the remote site. You can see here in the address detail that the host ending in 155 is the IP of my central server, and as the video moves on it will be replaced by the one ending in 158, which is my remote compute node where the application is running.
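Of the three access methods mentioned above (Horizon, CLI, API), the API route starts with a Keystone token request. A hedged sketch of the Identity v3 password-auth payload; the user, project and domain names are placeholders, and in practice you would POST this to `/v3/auth/tokens` on your Keystone endpoint:

```python
import json


def keystone_auth_body(username: str, password: str, project: str,
                       domain: str = "Default") -> str:
    """Build the Identity v3 token request body (POST /v3/auth/tokens)."""
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            # Scope the token to a project so subsequent Nova/Neutron
            # calls (server list, network list, ...) are authorized.
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }
    return json.dumps(body)
```

The returned token (in the `X-Subject-Token` response header) is then passed as `X-Auth-Token` on calls such as the server-create request shown earlier.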
Here I tested my username, whether it authenticates or not. Yes, it is authenticated, so I can access the application, and users from that region can access it too. You can see that the address has changed; sorry, not this one, this one: it has changed to the host ending in 158. So 155 is the central site and 158 is the remote site. And the username is of course different for the remote site, because that application instance is local to the remote users.

We can test the application from the CLI as well; the central site is 155 and the remote is 158. I'm sending an API request to check whether the application is really working. So I send the API command and yes, I'm getting the Access-Accept acknowledgement, which means the application is working fine on the central site. Let's check the remote site. And yes, we are getting the success message there too, which means the application is also working on the remote site. So that's all about deploying your IoT application onto the OpenStack edge site and the remote site. Thank you, guys. If you have any questions, please feel free to ask.

So the question is: what happens if my L3 connectivity to my edge nodes is lost? The answer is: if you are deploying your OpenStack environment out to an edge site, there should of course be some failover link, so that traffic can easily move from one link to another. For example, in a data centre we usually have two or more links so that there is redundancy: if one goes down, another link takes over, and it does not hamper the applications hosted on the compute node. So there should be routing between the two different networks, which is easily configured. Yes.
[Question from the audience, partially inaudible.] Sorry? So, to repeat, the question is about edge deployments of OpenStack in practice, and how this competes with other platforms such as Kubernetes and OpenShift. Well, I'm not sure about Kubernetes and OpenShift, but yes, for telco applications this is definitely going to help, and customers are already deploying their applications to the edge this way. This setup has been tested mainly with telco customers, with vRAN devices of the kind telco customers use. For Kubernetes and related platforms the testing is still in progress; we are still developing and testing those things. The reason, basically, if we are talking about Kubernetes and OpenShift, is that the RAN application is not yet containerized. Once it is containerized, yes, there is a chance we can deploy that CNF, the containerized network function, at the edge site. But right now we don't see a containerized application there, so no comment on it. From the OpenStack side, though, we have VNFs, virtual network functions, which can easily be deployed at the edge site.

The next question is: can we use DVR? Yes, definitely we can use DVR, that is, distributed virtual routing. But it also depends on the use case. If I have a telco application, I will not use DVR, because I am directly providing the provider networks to my compute node, so that there is direct access to the application. In the case of DVR, telco customers may not use that, but yes, you can definitely use it; it's up to the use case. Thank you. Thank you so much.