Hi everyone. Okay, let's jump in. The title of this talk is PCEI, which I'm sure none of you recognize, so I'll explain it. This blueprint is part of the Linux Foundation Edge Akraino project, and it's led by Oleg Berzin, the Project Technical Lead. He's a Fellow of Technology and Architecture at Equinix, so this is really his brainchild. He couldn't be here, so I'm going to do my best to represent the blueprint. As Tina said, this was submitted to the ETSI MEC Hackathon and won it about two and a half weeks ago. So what you're going to see is a really practical implementation of edge computing. So what is PCEI? PCEI, like I said, is part of Linux Foundation Edge Akraino. It stands for Public Cloud Edge Interface, and the idea of PCEI is to specify a set of open APIs for cloud edge instances. The cloud edge is defined as an edge that is closer to the cloud than to the user. You've seen all kinds of definitions of the edge, right? This one sits closer to the cloud, not device-centric, but in a data center or a colo. The cloud edge exposes APIs toward the public cloud, that is, toward public cloud service provider instances at the edge. The whole idea is that the edge and the cloud are complementary: it's not that the cloud can replace the edge or the edge can replace the cloud. For that reason, the thinking is that cloud edge deployments offer many opportunities for collaboration by exposing network capabilities to provide value-added services. On the right-hand side you see the two pictures, cloud edge and cloud workloads: your application can straddle all the way from the edge to the public cloud, and the same on the infrastructure side, your infrastructure can straddle from the cloud edge to cloud infrastructure. We are going to see examples of both.
This blueprint has been progressing for about two years now, maybe even longer. This is a logical representation of the blueprint; I'll get into more details and show the physical structure later. We have two cloud edge sites and one public cloud site. What you see below is the physical infrastructure. It consists of Equinix Metal, which is bare metal as a service; Equinix Network Edge, through which you can get things like firewalls, routers, and load balancers on demand; and Equinix Fabric, which is on-demand cloud connectivity. Azure ExpressRoute is the other side of the cloud connectivity, on the Azure side. So that's the infrastructure that was used. On top of that, two clusters were created, one in Silicon Valley and one in Dallas, and a service was also used on the public cloud, on Azure. Cluster one ran 5G: the 5G RAN, the 5G core, location services, and an IoT device. Some things are simulated: the core is real, but everything else, including the radio, is simulated. Site number two is running the multi-access edge computing application, the MEC application, which is IoT Edge. And the public cloud is running IoT Hub. All of this is fully automated; the orchestration is fully automated. Literally with a click of a button, the whole thing spins up. Normally this might take a regular user a couple of months to set up, so with automation we are cutting those couple of months down to a couple of hours. The same thing in words: we are using the PCEI blueprint and the MEC Location API service to demonstrate orchestration of a federated MEC infrastructure, federating across two MEC sites and a public cloud site.
We are going to show bare metal interconnection; virtual routing for both the MEC site and the public cloud site; IaaS and SaaS, fully automated, across two providers; 5G control and user plane functions; and then the deployment of the MEC application, which in this case is IoT. So it's an IoT application making use of 5G access, distributed across geographical locations and across a hybrid MEC, edge cloud, and public cloud SaaS infrastructure. With that, I'm going to dive into some details. Maybe I'll just come over here, easier. What you see on the left-hand side is site one, in Silicon Valley. There you see three Metal servers. One Metal server is running the UE and gNodeB simulator. For those of you who don't know 5G, the UE is just the user equipment, which could be a phone, so this is a simulator for a phone, and the gNodeB is the radio access network. So it's a simulated user equipment and gNodeB. The IoT data generator, the simulated IoT device, is also running on that server. Then we have another Metal server, and all of these are running Kubernetes, by the way. The second server with Kubernetes is running the 5G UPF. I'll show a slide on this, but the 5G core can be broken up into a user plane and a control plane. The user plane is the element through which data flows, and the control plane, as the name suggests, controls it. So this server runs the UPF, the user plane function, and it runs the Location API server, because one of the key things you get with 5G is the location of your device, in this case an IoT device, which can be very important. All the Metal servers are connected over an IP network. The third Metal server with Kubernetes is running the 5G control plane, and we'll see more details. So that's site one.
Site one and site two are connected over Equinix Fabric, and we are calling that the MEC Federation data plane. That's not really been specified in the standard, so Oleg is trying to push it back into the standards. The second site, in Dallas, is running one Metal server with Kubernetes, and that is running Azure IoT Edge, which in this case is the edge computing application, the multi-access edge computing (MEC) application. That in turn is connected over Equinix Fabric to a public cloud site, Azure in this case, and Azure is running IoT Hub. So as you see, it's a fairly complex environment, actually quite representative of a real-life environment, and at the top you have the orchestrator, call it the MEC Federation broker or orchestrator, and it's orchestrating the left-hand side, the fabric in the middle, and the right-hand side, everything fully automated like I mentioned. So that's the blueprint. What does the use case do? It does three things. The first thing it does is set up the sites, the infrastructure. One thing I didn't mention is that the 5G provider and the edge computing application provider could be two separate companies, two separate entities. So you have the 5G operator, and in the orchestration stage we are setting up bare metal servers and setting up Kubernetes, fully automated. Then we set up the MEC Federation interconnect provider, which is Equinix Fabric, and that creates the private MEC Federation data plane connection. And then we set up the right-hand side, the MEC provider site: we orchestrate the bare metal, Kubernetes on top, the virtual router that connects both to the 5G provider and to the public cloud, and then ExpressRoute to Azure. So that's step one.
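Step one above can be sketched as a short sequence of provisioning calls. Everything here (function names, site parameters, return shapes) is invented for illustration, just to show the ordering the talk describes, not the blueprint's actual API:

```python
# Hypothetical sketch of the stage-1 infrastructure orchestration flow.
# All names and parameters are illustrative only, not the blueprint's API.

def provision_site(name, metro, servers):
    """Bare metal servers plus a Kubernetes cluster per site."""
    return {"name": name, "metro": metro, "servers": servers, "kubernetes": True}

def connect_federation_data_plane(site_a, site_b):
    """Private MEC Federation data plane over the fabric."""
    return {"type": "fabric-vc", "endpoints": [site_a["name"], site_b["name"]]}

def connect_public_cloud(site, cloud="azure-expressroute"):
    """Virtual router plus an ExpressRoute link toward the public cloud."""
    return {"type": cloud, "endpoint": site["name"]}

# Orchestrate in the order the talk describes.
site1 = provision_site("5g-provider", "silicon-valley", servers=3)
site2 = provision_site("mec-provider", "dallas", servers=1)
data_plane = connect_federation_data_plane(site1, site2)
cloud_link = connect_public_cloud(site2)
print(data_plane["endpoints"], cloud_link["type"])
```

In the real blueprint these steps are driven by Terraform, Ansible, and Helm rather than direct calls; the sketch only captures the sequencing.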
Step two, we orchestrate the network functions and the application. On the left-hand side we orchestrate the 5G functions, control plane and user plane, the MEC Location API server, and the IoT data generation. On the right-hand side it's the IoT Edge gateway and Azure with IoT Hub. Then, when all of that is set up, we actually run the service. At that stage we register the UE, the user equipment, with the 5G network, send encoded IoT sensor data, enrich it with location services data, and finally send it to the IoT Hub on Azure, where you can actually see and visualize it. This slide shows 5G. For those who are familiar with 5G, this will look very familiar; for those who aren't, I'll spend a minute explaining it. What you see on the left-hand side is the user equipment and the gNodeB. Like I said, the user equipment can be a phone, an IoT device, a HoloLens, anything connecting to 5G. The gNodeB is the radio access network, the radio part and the lower-level protocols that go along with it. That then connects to the UPF. The UPF is part of an entity called the 5G core, the core network that connects you either to MEC applications or to the data network, the internet, as the case may be. So the user plane is there, and that's how the data flows, the green arrow and then the purple arrow. On top you see the 5G control plane. The 5G control plane is quite rich; there are lots and lots of functions. I'm not going to go through them all, but you at least have to have what's called the AMF and the SMF to be minimally operable, and many of the other functions are required for full functionality.
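The location lookup used in the enrichment step can be sketched roughly as follows. The endpoint path and response shape here are assumptions loosely modeled on the ETSI MEC013 (Location API) style; the blueprint's actual server contract may differ:

```python
import json
from urllib.parse import urlencode

# Hypothetical MEC Location API query; the path and field names are
# assumptions in the spirit of ETSI MEC013, not the blueprint's contract.
def build_location_query(api_root, ue_address):
    # e.g. GET {apiRoot}/location/v2/queries/users?address=<ue-ip>
    return f"{api_root}/location/v2/queries/users?{urlencode({'address': ue_address})}"

def parse_location_response(body):
    """Extract latitude/longitude from a (canned) JSON response body."""
    info = json.loads(body)["userInfo"]["locationInfo"]
    return info["latitude"], info["longitude"]

url = build_location_query("http://upf-site.example:31082", "10.0.0.9")
canned = '{"userInfo": {"address": "10.0.0.9", "locationInfo": {"latitude": 37.37, "longitude": -121.92}}}'
lat, lon = parse_location_response(canned)
print(url)
print(lat, lon)
```

The API root, port, and UE address are placeholders; in the demo the Location API server runs alongside the UPF on the Silicon Valley site.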
All of that was orchestrated, and you see some details of the network as well, and the traffic was then sent over Equinix Fabric to the Dallas site, to the Dallas server where the MEC application resides. You can also see the Kubernetes stack that was deployed on each server: Kubernetes with Multus and Flannel, and all of the applications packaged up as Helm charts. You do see a couple of virtual machines here, and as you know, Kubernetes and virtual machines are not incompatible; you can run virtual machines on Kubernetes using technologies such as KubeVirt. So we don't use different technologies; it's all through Kubernetes. That shows the 5G deployment. This next slide shows the flow of data from the user equipment, the IoT device in this case, all the way to Azure IoT Hub. The sensor data is sent through the UPF, through the MEC Federation data plane, which is the connectivity between the two sites, to the MEC application, which in this case is IoT Edge. When that data arrives, the IoT Edge application says, oh, I need the location data. So it sends a request to the Location API server, gets the location information, enriches the data with it, and sends all of that, temperature, humidity, pressure, plus latitude and longitude, to the IoT Hub on Azure, where it can be displayed. Now, don't be scared, I'm not going to go through this whole slide. It shows how the orchestration is done. We are using Linux Foundation Networking open source projects for the orchestration piece. One element we are using is EMCO, the Edge Multi-Cluster Orchestrator. It's a way to orchestrate network services using cloud-native network functions, or MEC applications using cloud-native applications, across multiple Kubernetes clusters. So that's EMCO.
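The enrichment step in the data flow just described, where IoT Edge merges the sensor readings with the location fetched from the Location API before forwarding to IoT Hub, can be sketched as a small pure function. The function and field names are invented for illustration:

```python
# Sketch of the IoT Edge enrichment step: merge sensor readings with the
# location obtained from the MEC Location API before posting to IoT Hub.
# Function and field names are invented for illustration.
def enrich_with_location(sensor_msg, location):
    enriched = dict(sensor_msg)          # keep the original reading intact
    enriched["latitude"] = location["latitude"]
    enriched["longitude"] = location["longitude"]
    return enriched

reading = {"temperature": 21.5, "humidity": 40.0, "pressure": 1013.25}
loc = {"latitude": 37.37, "longitude": -121.92}
print(enrich_with_location(reading, loc))
```

The real gateway does this inside the IoT Edge module and then posts the enriched message over the ExpressRoute link; the sketch only shows the merge.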
We use another piece from Linux Foundation Networking called CDS, Controller Design Studio. It's part of a project called ONAP, and it's used for infrastructure orchestration and configuration management. So those are the two pieces we use. We use Camunda on top for service orchestration, and that talks to either APIs on top or to a GUI; both are available. The combination is able to do everything I mentioned. If you attended my earlier talk this morning about Nephio, we are actually going to start integrating Nephio into this, because Nephio provides a very nice declarative intent, which frankly is missing from this blueprint right now. Anyway, the orchestration is agnostic of the domain, so you can have different domains. Some domains may need Helm charts, like the IoT Edge, the location services component, or the 5G core. Some may need Ansible: to deploy Kubernetes, we use Ansible. And to bring up Metal and to bring up Fabric, we use Terraform. So we use different southbound technologies depending on what's needed. The cluster config, Helm charts, Ansible playbooks, and Terraform are all in a Git repo, so it's all a DevOps CI/CD flow. I think that covers the orchestration. The idea is orchestration using infrastructure as code: it's uniform, it's model-free, and the orchestrator does not have to know the intricacies of the underlying domain. The state is kept externally in a Git repo, and through Git it's all DevOps driven. The MEC Location API implementation is a Kubernetes application packaged as a Helm chart; the Helm chart is onboarded using the orchestrator and then orchestrated onto Kubernetes. That's what this shows. And finally, here's a summary of the end-to-end flow. We saw this picture already, but the yellowish arrow goes all the way from the user equipment, the IoT device, to IoT Edge. The green arrow is where IoT Edge queries the location server and enriches the data with the latitude and longitude.
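The domain-agnostic southbound dispatch described above can be sketched like this. The domain-to-tool mapping mirrors what the talk lists (Helm for the CNFs and applications, Ansible for Kubernetes, Terraform for Metal and Fabric); the chart names, playbook paths, and command lines are illustrative, not the orchestrator's real invocations:

```python
# Sketch of domain-agnostic southbound dispatch: the orchestrator picks a
# tool per domain and treats everything as infra-as-code from a Git repo.
# The mapping mirrors the talk; all paths and chart names are illustrative.
SOUTHBOUND = {
    "5g-core":      ("helm", ["upgrade", "--install", "free5gc", "charts/free5gc"]),
    "location-api": ("helm", ["upgrade", "--install", "loc-api", "charts/loc-api"]),
    "iot-edge":     ("helm", ["upgrade", "--install", "iot-edge", "charts/iot-edge"]),
    "kubernetes":   ("ansible-playbook", ["playbooks/k8s.yml"]),
    "bare-metal":   ("terraform", ["apply", "-auto-approve", "envs/metal"]),
    "fabric":       ("terraform", ["apply", "-auto-approve", "envs/fabric"]),
}

def southbound_command(domain):
    """Return the (tool + args) command line for a given domain."""
    tool, args = SOUTHBOUND[domain]
    return [tool, *args]

print(southbound_command("kubernetes"))
print(southbound_command("bare-metal"))
```

The point of the design is that the orchestrator only needs this mapping plus Git state; it never has to model the internals of each domain.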
And then the black arrow, black or purple, I'm not quite able to say, shows how that information is sent to IoT Hub on Azure. The blue solid line shows the BGP peering between the two sites. Just to complete the picture, you can see the color-coded key at the bottom. Metal, Network Edge, and Fabric are all Equinix. The orchestrator is from Aarna Networks, the company where I work. The green is from ETSI MEC. The gray, Free5GC, is an open source 5G core. And the yellow is Microsoft Azure services. All of this has been contributed to open source, so if this is of interest to you, my presentation is uploaded already, and you can go to these places and see what's going on. And now I'm going to fire up a demo. The demo has Oleg's voice, so I'm going to play it and let him narrate. It's not letting me play. Do I have to do something on the audio? Okay, yeah, the USB audio device. What you see on the screen is the use case for the two providers. One is the MEC provider, the edge cloud provider, and the other one is the 5G access provider. They federate their resources by providing services to each other: the MEC service operator provides access to a MEC application and MEC resources, and the 5G operator provides 5G access and the Location API service from MEC. Both are interconnected using the MEC Federation data plane across the global fabric. Our use case proceeds in three stages: the infrastructure orchestration stage; the 5G network function, MEC service, and application deployment stage; and the end-to-end operation of an IoT application. First, we orchestrate the topology using Aarna Networks' ONAP-based system, and we examine the created topology. As the next step, we will examine the input to that topology.
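That intent input might look something like the following sketch. The field names and structure here are invented for illustration; the demo's actual file is not shown in the talk:

```python
import json

# Invented illustration of an intent-style topology input; the schema is
# an assumption, not the blueprint's actual input format.
intent = {
    "sites": [
        {"name": "site1", "metro": "sv", "role": "5g-provider", "servers": 3},
        {"name": "site2", "metro": "da", "role": "mec-provider", "servers": 1},
    ],
    "interconnects": [
        {"type": "fabric", "a": "site1", "z": "site2"},        # federation data plane
        {"type": "expressroute", "a": "site2", "z": "azure"},  # public cloud link
    ],
}

doc = json.dumps(intent, indent=2)   # what would be stored in the Git repo
loaded = json.loads(doc)
print(len(loaded["sites"]), len(loaded["interconnects"]))
```

Keeping the whole topology in one declarative document like this is what lets the orchestrator replay or tear down the environment from Git state.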
So the intent-based input describes the various components of the solution we're going to create, and those components are described in this JSON file. Then we launch the topology creation, and we see that the topology has been launched and orchestrated. To see the orchestration, we'll first look at the Azure side and at the creation of the ExpressRoute connection. You see the ExpressRoute connection being created, and you also see that it has been provisioned on both the provider side, Equinix, and the Azure side. We see that private BGP peering has been configured for the connection. Next, we look at the Equinix side to see the status of this connection to Azure. It is active, and it is enabled on the virtual device, the virtual router, so we'll look at that. The virtual router is the VNF that Equinix provides; it's a Cisco CSR 1000v virtual router. It has been assigned an IP address, and we can see the connection to Azure ExpressRoute is active and provisioned. We see the parameters and the description of this connection here in the Equinix portal. The next step is to look at the status of the BGP peering. After the creation of the VNF and of the private peering on the Azure side, we examine the routing table in the Azure cloud and see all the prefixes advertised from the VNF in Equinix to Azure, and the prefixes Azure advertises back. These are the components of the cloud interconnect of the MEC service provider. Next, we look at what the orchestration created for the MEC Federation data plane. This is the connection that links Silicon Valley and Dallas, where our providers are, together. The next step is to look at the bare metal server. As part of this orchestration we provisioned a bare metal server in the Equinix data center in Dallas, so you'll see that. The server has the properties shown here, memory and CPU.
It's connected to both connections, one to Azure and one back to the 5G service provider in Silicon Valley. Here we can log into the server and look at its configuration. As part of the orchestration we also deployed Kubernetes on this server, and you can see the system pods running on the server for Kubernetes. The next step in our orchestration is the deployment of the 5G functions and the MEC service applications, so we'll see this next. This step consists of creating the 5G functional deployment: the control plane with the AMF, SMF, and all the other functions, including the web UI we can log into to see the status of the connections. We also see the UPF cluster, on which we deploy a standalone UPF, distributed here in a separate location, and also the simulated UE. To examine that, we log in to the control plane cluster and see all the pods for all the control plane functions up and running. We next connect to the UPF cluster, and there we look at the UPF pod that has been deployed successfully. You see it here in the free5gc namespace: the UPF pod, and also the MEC Location API pod that we deployed as part of the orchestration. Next, we look at our simulated UE just to make sure that it has connected. The simulated UE is realized as a Kubernetes pod. We look at the status of the UE connection in the Free5GC web GUI, and we see that the UE is connected and correctly associated with an IP address, in this case 1010.9. We can verify this on the UE itself if we connect into the container and look at the address assigned to the mobile tunnel: we see the same address on the mobile tunnel interface within the container. We can also verify basic traffic from the UE to the MEC server. We do that by specifying the source IP address of the mobile interface of the UE and the MEC server IP address.
And we see that the latency between Silicon Valley, where the UE is, and Dallas is approximately 33 milliseconds over Equinix Fabric. One other interesting test: we can ping the internet directly from the UE's mobile interface, and because the UPF is distributed, our local breakout delivers very low latency, under two milliseconds. Next we go to the MEC server to make sure that we have deployed the IoT application. This is the Azure IoT Edge gateway, a cloud-native IoT application that runs in a pod, as you saw. Our next step is to show the end-to-end operation of the IoT application itself, with the ability to enrich its data with location information. First, the UE with the IoT client sends the measurement data, just the sensor data. The IoT Edge gateway goes back to the Location API server, obtains the location information for that UE, and adds that location information into the IoT data, and then the newly constructed message with the sensor data and the location data is posted to the cloud. Here you see the end-to-end traffic path: from the UE to the IoT Edge gateway, from the IoT Edge gateway to the Location API for location information, and then a post to the cloud. At this point we go back to the UE itself, again looking at its IP address, and from within the UE container we launch our reference IoT client. This is a Python script, and we just launch it. To launch the traffic we need to specify the MEC server IP address where the IoT Edge gateway is running and the port on which it's listening; that port is shown in the display here. Once we specify the port, the IoT client starts sending data periodically, and we see the first attempt was successful. It sends an encoded PDU, which encodes the temperature, humidity, and pressure information into the message.
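The client's encoding step can be sketched like this. Packing three floats and base64-encoding them is my assumption about what "encoded PDU" means here, not necessarily the demo script's actual wire format:

```python
import base64
import struct

# Sketch of encoding temperature/humidity/pressure into a compact PDU.
# The struct + base64 format is an assumption, not the demo's real encoding.
def encode_pdu(temperature, humidity, pressure):
    raw = struct.pack("!fff", temperature, humidity, pressure)  # 3x float32, network order
    return base64.b64encode(raw).decode("ascii")

def decode_pdu(pdu):
    t, h, p = struct.unpack("!fff", base64.b64decode(pdu))
    return {"temperature": t, "humidity": h, "pressure": p}

pdu = encode_pdu(21.5, 40.0, 1013.25)
print(pdu)
print(decode_pdu(pdu))
```

A real client would POST the PDU to the gateway's listening port; the values chosen here are exactly representable in float32 so the round trip is lossless.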
Then we can look at the MEC server, across the MEC Federation connection, and at the log of the listening IoT Edge pod. You see that it receives the same data. It then goes to the location server, obtains the location information for the UE, adds that location information into the message with the sensor data, and posts the enriched message, with temperature, humidity, pressure, latitude, longitude, and altitude information, to the cloud. On the cloud we can see that it is received by the cloud IoT SaaS backend, which in Azure is called the IoT Hub, and we see the messages being received in the metrics for that device. You see that the message counter is increasing, as we expect, all the way through this infrastructure that has been deployed and activated. At this point we also show the end-to-end path we realized: we constructed the MEC Federation environment with two providers connected via the global fabric. Okay, so that concludes the presentation. Even if you put aside 5G and some of those complexities, I hope the big takeaway is this: there's a lot of talk about using hyperscaler clouds and private clouds and networks in between, and here's an actual practical implementation that shows all of that. With that, I'm happy to answer any questions. Questions? Yeah. Can I ask a question? Yes. Yeah, so I discussed with the Volcano Engine people; Volcano is one of the CNCF projects. They are using the TTOC application. If you go back to one of the slides, you have Azure there and the IoT application. They want to replace the IoT application with the TTOC application, but they found they can't directly apply the Volcano Engine edge cloud to replace the Azure edge cloud. So are there any tips, tutorials, or advice for the team on how they would slot into this piece of the blueprint?
So they want to replace Azure with... There are two things: they want to replace the IoT application here with the TTOC application, and second, they want to replace Azure with the Volcano Engine edge cloud. Yeah, it should be possible. What has to be changed? On the left-hand side it's a Helm chart change, and on the right-hand side it's currently Terraform, and depending on what Volcano Engine requires, maybe it's something different. If they attend the next PCEI call we can go through it technically; I can give Oleg a heads-up and we can resolve those issues. It should be possible. Yeah, this relates to my other question. If you were here when I talked about the LFH catalog: we try to do one-click Kubernetes deployment, but we don't know whether the Kubernetes deployment files here are too complicated for that. How many Kubernetes extensions does the installation need? Can we actually do one click? We can do one click. In the orchestration there are two steps, a design step and a deployment step. The design step has many, many sub-steps, because you have to onboard the CNF or the CNA and you have to put in intent on how you want it orchestrated, but deployment is one click: you just click on it and it deploys. Yeah, so I would suggest you double-check the LFH catalog to see whether the Helm chart there can be identical to this Helm chart, or whether anything joint could be done. Okay, that's a great idea. Yeah, then this blueprint will be the pioneer to use that. It'll be a lot more valuable. Okay, thanks.