So, this is another edge panel, but we're going to try to keep the scope very narrow, right? It's just about what's going to run at the far edge. Are we going to run VMs, bare metal, containers, containers inside VMs run by KVM, or is it going to be deployed to bare metal, that kind of thing? To kick it off, I've got a couple of questions here. Basically, the panel would like to go through: what are the options for what you run at the edge?

Yeah. So, first of all, let me introduce myself. My name is Kasim Arham, I'm part of the Juniper solutions team, and for the last four and a half years I've mostly been working with OpenStack, with a focus on Tungsten Fabric, which is the new name of Contrail. Going back to the question: overall, today, the industry has already designed and validated the core, and most of the big sites are already in production. But now a lot of focus is coming to 5G edge sites, which are mostly cell sites. If we look into the cell site infrastructure, there are multiple options available. From the infrastructure point of view, which is also one of the main themes here at OpenInfra, we actually need compute, network, and storage. You can categorize that as a small subset of a big data center, and you can design it that way. But there are a lot of limitations in terms of how the workload will be handled there and what orchestration options you will have. If you take those things into consideration, we are actually limited in the number of design options, and then we have to come up with a way to address compute, networking, and storage so that, overall, you can provide a seamless experience there as well. So I will try to take it a little bit from the networking angle.
So if you look into OpenStack, which is running Neutron at the central site, and if we are extending that to the edge, definitely Neutron can be there, the other plugins can be there. And from the workload point of view, today an edge site can run a VM, a bare metal server, or a container. So from the networking angle, you will be looking for an option which can provide networking seamlessly across all types of workload. So I will take it from the...

I question whether you can use Neutron for an edge implementation, because I don't think it really has the tools to support a VM deployment there. Of course there are constraints at the edge. Unfortunately, with a lot of telecoms, when you hear them talk about edge and 5G and you dig down into it, they are talking about what you would consider a data center. Typically they have five or 10 or 15 or even 20 racks of stuff in it. For a telecom, that's a tiny implementation. But the average user is like, well, that's just a data center — that's bigger than my data center, which is usually four racks in a closet. So I think the constraint of getting it down to a box, or down to something on a light pole, takes a whole lot more squeezing down of the essentials. And I don't think the answer is containers or VMs or whatever. I think it's a combination, depending on where it is in the edge infrastructure.

Yeah, I'll kind of follow with what you said. It really depends on the application, ultimately. For example, if you are trying to virtualize a baseband unit, a BBU, you would certainly not use the same technology as if you're trying to push an analytics application down to the edge. So you will need a combination of both. It always depends on what you are trying to deploy. So I kind of agree with what you're saying.
You have VMs, containers, bare metal, and you might also consider unikernels if you want. Yeah, and microservices. So the trick is getting them all to work together, right? Yeah.

So I can give a brief summary of the answer: it depends on what your workloads look like and how you would like to deal with them. And I would recommend doing a proof of concept before you implement your use case, because it is use-case dependent. This is an engineering trade-off. If you expect more security, you might pick VMs; if you expect more efficiency, you might pick containers. And you can also use both, if they can integrate well with each other. You can also use bare metal if your applications are strictly bound to compliance requirements. And there are other solutions like Kata Containers, which can give you speed of execution and security at the same time, while also minimizing resource utilization. So there are several choices, and it all depends on what you need to implement for your use case.

And to further add to that, look at what we have done in the VNF and NFV world. Initially, if you recall, whenever a new VNF was delivered, the whole line card was converted into one big image. So when we're talking about edge, the application also has to be optimized, and it has to adopt a cloud-native architecture. And based on that, if vendors come up with container-based architectures, VMs, bare metal, and layer 2 connectivity will also be essential, because some of the components, especially in 5G, cannot be virtualized. So that whole combination will stay.

So aside from workload, what are some of the pros and cons of each? For instance, some of the limitations with containers might be hardware acceleration, that kind of thing. Could you touch on some of the pros and cons?
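The runtime trade-off the panel sketches here — VMs for isolation, containers for efficiency, Kata Containers for both, bare metal for strict compliance — can be summarized as a toy decision rule. This is only an illustrative sketch; the requirement flags are hypothetical, not from any project mentioned on the panel.

```python
def pick_runtime(needs_strong_isolation: bool,
                 needs_fast_startup: bool,
                 strict_compliance: bool) -> str:
    """Toy decision rule mirroring the panel's engineering trade-offs."""
    if strict_compliance:
        return "bare metal"        # compliance-bound workloads pinned to hardware
    if needs_strong_isolation and needs_fast_startup:
        return "kata containers"   # VM-grade isolation with container-like speed
    if needs_strong_isolation:
        return "vm"                # a full guest kernel per workload
    return "container"             # shared kernel: fastest, most efficient

assert pick_runtime(True, True, False) == "kata containers"
```

In practice the decision has many more axes (I/O performance, orchestration maturity, vendor support), as the rest of the discussion makes clear.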
Yeah. Let's take containers, for example. Containers are fast to run. If you have ever started a container, you know that it starts up — well, it downloads the image first if you don't have it — and then it runs in a matter of seconds. On the opposite side, if you are using VMs, you need to boot the operating system, and there's a delay there. Cons of containers: well, you probably know there are security issues that you need to address. You need to take really good care of what you get from the Internet, what you are downloading as an image, what you are deploying at your edge, and how you set your permissions. You need to be aware of that constantly.

So what I find is that a lot of applications have not been containerized. If you are using an application that you have had for a while, or a commercial application — the example I always use is firewalls and security applications — they are not containerized. They are not going to be containerized in a million years unless the vendors can be assured that the container will be absolutely secure, because they are selling security. So when you think, oh yeah, containers are good because they are small, efficient, and fast, you do have to add in the effort of containerizing, and for many applications that's not even possible. Another thing I find with containers is that the inability to service-chain them together is a significant drawback, particularly in the networking world where you want to string applications together. Containers can actually be significantly less efficient in terms of speed, because you get a lot of back-and-forth and north-south traffic bouncing around that doesn't need to.
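One concrete instance of the "take real good care of what you download" advice is digest pinning: refuse to deploy any image or artifact whose hash doesn't match a value recorded at build time. A minimal standard-library sketch — the artifact bytes here are placeholders, not a real image format:

```python
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Refuse to deploy an artifact whose digest doesn't match the pinned value."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

image = b"pretend this is a container image layer"
pinned = hashlib.sha256(image).hexdigest()   # digest recorded at build/publish time

assert verify_artifact(image, pinned)                  # untouched image deploys
assert not verify_artifact(image + b"tamper", pinned)  # any modification is rejected
```

Container registries apply the same idea with content-addressable digests (e.g., pulling by `image@sha256:...` rather than by mutable tag).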
And if we look at the base equipment available — an x86 server with storage, networking, and compute — and we run a container there, there will definitely be limitations on the performance and I/O side. Then you have the option of DPDK, and you have the option of SR-IOV. And coming to service chaining, there are definitely multiple options now available, such as Multus and the Genie CNI, and Tungsten Fabric is also coming up with multiple-interface support that can bring a firewall into this whole equation and protect the service chain for the workload. And with container workloads instantiated at the edge, you definitely require some type of security as well, so that also has to be taken into consideration.

So, those are all great, and Multus is one that I know is addressing the need for containers to understand networks better. However, do understand that those are in development.

Yes, that is correct. But ultimately, Kubernetes has a single interface, and the whole orchestration was built for multi-tier applications — tier 1, tier 2, tier 3. If we want to adopt that same model on the telco side, especially the VNF side, the vendors either have to adopt that web-scale architecture — they have to build the MME, S-gateway, P-gateway type functions as web-scale architectures, which is not happening — or we wait and see how the vendors come up with this cloud-native architecture. Then we can take that architecture into consideration and address those things from the networking and storage side.

So you are saying VNFs are not a good fit for containers right now — is that the consensus?
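For context on the Multus option mentioned above: Multus attaches extra interfaces to a pod via a `NetworkAttachmentDefinition` custom resource, which a pod then requests by annotation. A sketch of the shape of those objects, expressed here as Python dicts (field names follow the `k8s.cni.cncf.io` CRD; the SR-IOV network name is a made-up example):

```python
import json

# CNI config for a secondary SR-IOV interface, embedded as a JSON string
# in the CRD's spec.config field.
cni_config = {
    "cniVersion": "0.3.1",
    "type": "sriov",     # delegate plugin; 'macvlan', 'bridge', etc. also work
    "name": "sriov-net",
}

net_attach_def = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "sriov-net"},
    "spec": {"config": json.dumps(cni_config)},
}

# A pod requests the extra interface through this annotation; Multus keeps
# the default cluster network as eth0 and adds sriov-net as net1.
pod_annotation = {"k8s.v1.cni.cncf.io/networks": "sriov-net"}
```

As the moderator notes, these mechanisms were still in active development at the time of this panel.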
So, the VNF will be transformed into a CNF — I will call it a container network function. I have not seen it yet, to be honest, but I would love to see a CNF as a three-tier application — front end, back end, and database — and they have to adapt their protocols. If you look at the telco side, everyone remembers SS7 and all the complicated protocols they use. Ultimately they are coming to GTP and SCTP, simpler protocols. I will go one step further: when they adopt this CNF, I am looking for them to have multiple tiers — tier one, tier two, tier three — and to follow this web-scale architecture to scale their application. I cannot answer how the vendors will address it, but these are my thoughts at this stage, working on the current problem.

So the bottom line is it is definitely a work in progress, and I think the goal is to get to containers, because there is an understanding that those are more efficient. However, there is also work around microservices and unikernels — particularly, I'm thinking of Lambda and some of those other things in the IoT space, where the constraints of the hardware are extremely significant, so you really have to think in little fragments of code. Personally, I think that's the future of a lot of where edge is going to go.

And the last thing I would like to add, based on some of our interactions with the web-scale side: they follow a single responsibility principle. That means a single application has to reside in a single container, and then you adopt the cloud-native architecture around that. So ultimately, those things have to be taken into consideration when we are building and launching applications at the edge.
So that will apply not only to the standard mobility applications and CNFs; it will also apply to MEC-type applications, which follow a client-server model and the MEC architecture, and to CUPS as well.

Yeah, well, again, it really depends on the applications. Like you said, there are functions that you won't be able to containerize — say, for example, firewalls. That's one typical application where you won't be able to do it. Or a virtual BBU, or a vEPC, for example. You need to have that kind of stability, and you need to provide the SLAs that your customers need, because you're selling to them. So you need to provide that kind of predictability in your infrastructure. It will always depend on the application. If you're talking about virtual reality and augmented reality, that's one thing; if you're talking about proper VNFs, that's another story.

So, since we've talked so much about the drawbacks of containers, there is another option: Kata Containers. What do you think about Kata Containers? As I said just now, Kata Containers can provide speed of execution and security at the same time, and they also use customized guest kernel images to minimize resource utilization. So, what do you think about Kata Containers?

I like them. But, again, a lot of the applications I'm using are commercial applications. So I have to go to the vendor and say, guys, this is a product you've been selling for 15 years — guess what, you get to re-architect it from scratch. And the answer is generally hell no. So a lot of what you're talking about is cloud native, and I'm all for that. I just think it's going to take a long time.

Yeah. Okay. So, we didn't really talk too much about security. What about security between VMs and containers? Yeah. So, if we take the different types of workload that are part of our discussion today — containers, VMs, as well as bare metal — definitely all of that has to be secured.
And security has to be implemented from the networking side as well as the application side — all the DDoS types of attacks. So we have to select a technology which can seamlessly provide connectivity as well as security across all of that. And on the security topic, I will go one step further. Even with new 5G sites, we are not just seeing those sites connecting to the core sites. There are actually use cases where those sites will have a colo or Equinix or some distributed connection going to a public cloud. So if you look at the edge, the edge is actually moving toward a multi-cloud architecture as well. And when you talk about multi-cloud, security is again another aspect you have to take into consideration. You have to secure the site where your actual application workload will reside, along with your MEC applications. If those MEC application servers are running in AWS, Azure, or GCP, you need a secure IPsec-based or TLS-based connection. So technology selection will be very important for those use cases. And definitely those 5G sites will not be traditional sites going over the transport and ending up at a central core site; they will have distributed connectivity, and that distributed connectivity has to be secured. In that respect, there are multiple technologies available which can provide those security layers at the networking level and also secure the connection to the public cloud side.

So, another aspect of security which is very important at the edge, and which doesn't apply in the data center, is physical security, because typically these edge devices are not located in a data center. Some of the 5G sites are sort of mini data centers. But what I'm seeing, particularly in the IoT space with smart cities and some of the other stuff, is that they're in harsh environments. They're stuck on light poles.
They're in oil and gas facilities in the middle of the desert. They're out at wind farms in the middle of the ocean. The wind farm in the middle of the ocean probably has some physical security, but it also has a harsh environment. So you have to think of those aspects too. And security, in my mind, is almost impossible if the general public has access to the device. If the device is sitting on a light pole, the general public actually does have access. So you really have to think about security in the sense that if a bad guy gets access to that device, it shuts itself down and can't be used to get back into the network or somehow harm the rest of the network. These are considerations that we've never had to deal with behind the closed doors of a safe data center.

And that's where you bring in some of the best practices of data center security. For example, separating the data plane, the control plane, and the management plane is really important — and at the physical level, not just virtually. Let's say you have an interface that you can enable as a backdoor or something; that's not enough. They need to be physically separated, because your user will be able to access some of those endpoints. So it really is important, and it's up to the hardware vendors to figure out ways to report intrusion detection, fault management, things like that.

That's right, those have to happen at the hardware level. It can't be at the software level. I mean, if that box can be taken off a light pole, taken home, plugged in, and somehow be accessed, that's not a good thing.

So for me, edge security can be enhanced from two angles. The first, which has already been pointed out, relates to the communication protocols. The other, which Beth has just pointed out, relates to infrastructure security.
For infrastructure security, we can benefit from some of the hardware security technologies like TPM, SGX, and so on, and we can also leverage OS-level security enhancements like the Integrity Measurement Architecture in Linux systems. And from the communication protocols perspective, I just want to respond to this question of how to provide secure connectivity at the edge using containers. The simplest answer would be that you can run your containers in VMs, or use VM-based containers, but this might not be the expected answer. There are other options: you can do security enhancements at the application layer, using HTTPS and TLS in the containers, for example. You can also add security gateways to implement some lower-layer security enhancements for your edge cloud. Those would also be technical options. Okay, that's all I wanted to express.

So, excuse my naivety, but I was always under the impression that by nature containers are less secure than a full-blown VM. Is that not the case? And if it is the case — I know Kata Containers is supposed to address some of those issues, and Clear Containers — do you have any comments on those upcoming technologies?

Yeah, absolutely. But with the passage of time, new initiatives are coming up with the CNCF to protect the container side. The container's biggest drawback is access to the kernel and the user space. Some work is already going on; definitely those things will evolve and then be applied. But ultimately the trend so far is that whenever someone adopts a container architecture and microservices, they prefer to run those things in a VM. Running the container in a VM is the approach today from a security point of view, but I actually think that will change with the work going on in the community and some of the stuff coming going forward.
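The application-layer option mentioned here — TLS terminated inside the container itself — can be sketched with Python's standard library: a hardened server-side context that refuses legacy protocol versions and demands client certificates. Certificate and key files are deployment-specific and are not loaded in this fragment.

```python
import ssl

# Server-side TLS context a containerized service might build at startup.
# In a real deployment you would also call ctx.load_cert_chain(cert, key)
# and ctx.load_verify_locations(ca) with mounted secrets.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.0/1.1 and SSL
ctx.verify_mode = ssl.CERT_REQUIRED            # mutual TLS: clients must present certs
```

The lower-layer alternative the speaker mentions — a security gateway doing IPsec or TLS on behalf of the workloads — moves this same configuration out of the container and into the network edge.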
Well, right, because running a container in a VM is just adding another layer of infrastructure that's theoretically unnecessary. It's always amazing to me how much we recreate the wheel over and over again. You can have VMs in containers and containers in VMs, and VMs in VMs, and containers in containers, and it becomes a recursive process. We've created recursive infrastructure.

So we've talked about containers and VMs and some of the pros and cons. We haven't really talked about bare metal, which, depending on your workload, I think definitely has a place at the edge. I'd like to get some feedback from you on that.

So if we look at the traditional cell site, you can see that even in 5G they have already realized that some of the components cannot be containerized; those have to remain physical components. I actually envision that part as bare metal as well, because it's just equipment that requires layer 2 or at most layer 3 connectivity. And along with that, if you look at even the traditional core sites, the main data center sites, there is already connectivity to legacy hardware as well as to the VNF side. So bare metal will stay, but an important aspect from the infrastructure side is who will provide seamless connectivity to that bare metal. When bare metal is there, it has to be bootstrapped, it has to be re-imaged. So we first need an Ironic-type capability there to bring it up. Once it is up, we need networking capabilities like DHCP and DNS to provide an IP and a layer 2 connection. And if that bare metal is connected to a fabric with top-of-rack switches in a spine-and-leaf architecture, then someone has to configure that fabric as well. So you can see there are multiple approaches to that.
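The bare-metal bring-up flow described above — enroll the node, bootstrap it, make it available, deploy an image — can be pictured as a state machine. This toy is loosely modeled on OpenStack Ironic's provisioning states; the real machine has many more states (verifying, cleaning, wait-callback, and so on), and the action names here are simplified.

```python
# Toy bare-metal lifecycle, loosely modeled on Ironic's provision states.
TRANSITIONS = {
    ("enroll", "manage"): "manageable",
    ("manageable", "provide"): "available",
    ("available", "deploy"): "active",      # PXE boot and image write happen here
    ("active", "undeploy"): "available",
}

def advance(state: str, action: str) -> str:
    """Apply one lifecycle action, rejecting illegal transitions."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"illegal action {action!r} in state {state!r}")

s = "enroll"
for action in ("manage", "provide", "deploy"):
    s = advance(s, action)
assert s == "active"
```

The fabric configuration the speaker mentions (top-of-rack ports, VLANs on the spine-and-leaf) is a separate concern that has to be driven in step with these transitions.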
And on the fabric side, I also think that the orchestration used for the edge should be fully capable of providing that connectivity to the bare metal. If we use OpenStack as the orchestration, we can use Ironic for the re-imaging and all of that, and once the infrastructure is up, we can use other networking features to bring the bare metal in as part of the infrastructure, with direct connections to VMs and containers.

So you bring up a good point that we kind of skirted around, which is that the edge is of course dependent on orchestration and automation to be successful, because every time you send somebody out to reboot a box in the field, that costs money, time, et cetera. So I'd like to talk about containers versus VMs versus bare metal in terms of ease of orchestration, because there are definitely differences in the tools that are available. And over time I think this will smooth out and the tools will get better. But right now I'd say that the orchestration tools are probably best for VMs, not so good for containers, and really awful for bare metal.

Actually, we have an open-source orchestration project named the Open Network Automation Platform, ONAP, which has a component named Multi-Cloud. This can have multiple types of infrastructure plugged into the orchestration system. It supports different versions of OpenStack, it also supports StarlingX, and I think it will definitely support some container management projects as well.

I agree. ONAP is critical, and right now ONAP is very telco-focused, which is a little unfortunate, because right now most of what's going on in edge development is really telco-focused, but I don't think that's going to stay that way. I personally am already working on some applications that are not — yes, I work for a telco, but the applications are not strictly telco applications.
There's always that special application where you want, for example, some specific instruction set from the CPU, because you've bought a specialized CPU for transcoding. So you need to figure out where it is and provision that specific resource for your workload. So bare metal kind of depends on the application and what you're trying to do. If you want to program an FPGA that has an ARM soft core in it, you can do so, but you need to be able to detect that resource. That's where we are developing proper descriptive models to describe what the hardware actually is, so you can manage and orchestrate your applications on top of it.

We're getting close to time. I have one more question, because ONAP came up. Does ONAP also cover bare metal — bare metal provisioning? Is that part of ONAP?

As per my information, not today, but the work is definitely in progress, because ONAP is the new name of ECOMP, which was the main AT&T project, and there is definitely new evolution with the Akraino project and some of the new stuff which is hosted on bare metal. So it is not available today, as far as I know, but I think there is some work going on.

So Akraino and Airship both support bare metal, I believe? They support the management of it, but then bringing up end-to-end connectivity on top of that — Airship is the infrastructure part, and I think the question from the ONAP point of view is provisioning of the application and running it end to end. Am I correct? I know that ONAP is responsible for orchestration; I was wondering if part of ONAP is to actually provision bare metal, PXE boot. So that part is going to Airship, as Beth said. So they divide it further? ONAP will consume some kind of bare-metal-as-a-service; ONAP is the controlling orchestration. So, we are getting close to time. If nobody has any other ONAP comments, I will open the floor to questions.
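The hardware-description problem raised at the start of this exchange — matching a workload that needs a specific instruction set or an FPGA soft core against what each node actually has — reduces to comparing capability sets. A minimal sketch; the node names and capability labels are hypothetical:

```python
# Hypothetical edge-node inventory: each node advertises a set of capabilities
# discovered from its hardware (CPU flags, NIC features, accelerators).
nodes = {
    "edge-1": {"avx512", "sriov"},
    "edge-2": {"avx512", "fpga-arm-softcore"},
}

def schedulable(required: set, inventory: dict) -> list:
    """Return the nodes whose advertised capabilities cover the requirements."""
    return sorted(name for name, caps in inventory.items() if required <= caps)

# A workload needing the FPGA with the ARM soft core lands only where one exists.
assert schedulable({"fpga-arm-softcore"}, nodes) == ["edge-2"]
```

The "descriptive models" the panelist mentions are, in effect, a standardized schema for those capability sets so orchestrators can do this matching automatically.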
We have a couple of minutes. Does anybody have any questions they'd like to ask? The mic is right there.

You mentioned the comparative weakness of container orchestration. Have you taken a look at Kubernetes? What are your thoughts on that? I am actually thinking more along the lines of container orchestration over WAN connections. Container orchestration within the data center — yeah, no problem. When you have a top-of-rack switch running at 100 gig, you don't care. But over a WAN connection, or a satellite connection, or an intermittent connection, then you have a problem.

Hi, I have one question. You talked about containers and VMs as possible deployment models for edge applications. What are your thoughts on unikernels? Is that also a possible deployment model, or is that completely out of date? Was it Minicons? I couldn't hear what you said. My question is, what are your thoughts on unikernels as a possible application model for the edge? Well, for the IoT part, or a uCPE, unikernels would be a good fit. I am not aware of any orchestration application that is pushing unikernels at the moment. But maybe I am wrong — do you know of any? No, I actually am not familiar with unikernels. Yeah, I know a little bit about unikernels, but mostly they have been on the IoT side. And again, as I said earlier, I think IoT is in its infancy from the edge perspective, so I suspect we'll be seeing more of that. Yeah, I mean, unikernels as a technology have been around for close to 20 years now. But like you rightly mentioned, this is not something that is in scope for most of the orchestration frameworks like Kubernetes.

Any more questions, folks? If not, you can go have beers. Thank you very much. We're between you and the beer. Thank you.