Hello, my name is Oleg Berzin and I'm a Fellow, Technology and Architecture, in the Office of the CTO at Equinix. I will be presenting a project on infrastructure design and multi-domain orchestration, in which we use the Edge Multi-Cloud Orchestrator (EMCO) and the ONAP Controller Design Studio (CDS). This project has been developed under LF Edge Akraino, as part of the Public Cloud Edge Interface (PCEI) blueprint, in collaboration with Aarna Networks. I'd also like to acknowledge the key contributors from Aarna Networks, Christian, an architect, and Kavitha Papana, a member of technical staff, who collaborated on the development and implementation of this project. The agenda for today's presentation: we will go over the motivation behind this project, what is going on in the Public Cloud Edge Interface blueprint and which use cases we are pursuing; we will cover the software integration architecture around the ONAP Controller Design Studio and the implementation of a Terraform executor that allows us to automatically run Terraform plans as part of the orchestration workflow; and we will conclude with a demo of infrastructure design and multi-domain orchestration.

Now for the motivation behind this project. We observe some key industry trends that are evident in the market. The first is that today the public cloud often provides the edge computing capabilities, with examples such as AWS Outposts, Google Anthos, and IBM Cloud Satellite, which requires integration between the public cloud infrastructure itself and the edge resources. In some cases this integration is provided entirely by the public cloud, in what we call a coupled model, as with AWS Outposts. In other cases the integration is partial: the hardware is orchestrated separately from the virtualization layer, and the application is written on top. We need to figure out how to deal with this trend. This also highlights that most practical deployments of edge infrastructure are hybrid, meaning the hardware for the edge infrastructure could be provided independently while the virtualization layer (for example, Anthos) could be provided from the cloud. So we are dealing with multiple domains at the edge and in the core, and those domains have to interwork and coexist, which leads to the need for multi-domain interworking. Each domain, for example the edge cloud, the interconnection fabric, and the core cloud, has different APIs and different provisioning methods, including CLI, and that makes deployments time-consuming and complex. We need an orchestrator that understands all of this and leverages a common denominator for both infrastructure and application deployment, and here we see a great role for Terraform and Kubernetes with Helm. In addition, since this is all multi-domain and we are talking about the need to interconnect the edge to the cloud and edge to edge, reliable and performant interconnection is required for this solution. Finally, because this infrastructure is targeted at new applications, developer-centric capabilities are required: the ability to integrate infrastructure design, infrastructure provisioning, and application deployment with CI/CD environments is one of the key requirements in our project.
To transition to how this relates to the LF Edge Akraino blueprint: the overall context for this work is what we call the Public Cloud Edge Interface blueprint under LF Edge Akraino. The purpose of this blueprint is to develop a set of open APIs, orchestration functionalities, and edge capabilities for enabling multi-domain interworking across the domains defined in this picture. We want to see interworking across the operator network edge domain, which provides access to 4G and 5G networks or to fixed networks, the public cloud edge domain, and the public cloud core domain. According to what we are seeing in the industry, certain public cloud capabilities, although topologically split between the public cloud core and the public cloud edge, are still linked or coupled together from the hardware to the virtualization to the application and services layers, so we need to be able to orchestrate that. In addition, we recognize that not all edge applications are provided by the public cloud, so we want a solution that is also uniform across all these domains for third-party edge applications. We also recognize that these domains do not just hang in the air: they are located in data center facilities distributed across geographies and interconnected by network providers or interconnection providers. We developed the terminology you see on the screen: public cloud core (PCC); public cloud edge, which specifically has a relationship to the public cloud core at the virtualization or application and services layers, or even throughout the hardware layers; third-party edge, which is anything provided by a third party independent of the public cloud; and the operator network edge, which is the set of resources provided by the mobile network operators, and could be a user plane function, a radio access network, or a combination of things.

In Release 5 of Akraino PCEI, we developed the architecture, or reference implementation, that you see on the screen, where we define these entities. On the right-hand side, in blue, are the developers and architects who provide infrastructure as code, for example templates such as Terraform plans that can be executed against public clouds, against the edge infrastructure, against bare-metal clouds or edge clouds, and against the interconnect providers. The developers or architects can also include other things, such as Helm charts for the application deployments and information about where the edge clusters, for example Kubernetes or OpenStack clusters, exist. The pink box at the top is the orchestration layer. We sometimes refer to it as the PCEI Enabler and sometimes as a multi-domain orchestrator; those terms are interchangeable. The PCEI Enabler provides APIs and the API handling capabilities to accept API calls from higher-layer constructs, and you'll see that in the demo, where we have a software layer we call the infrastructure design studio that lets us design the topology and use those API calls to tell the orchestrator what to do.
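As an editorial aside on the "common denominator" point: because each of the infrastructure domains in this blueprint exposes a Terraform provider, a single plan set can declare all of them side by side. A minimal sketch follows; the provider sources are the ones published in the Terraform Registry, and the declaration is illustrative rather than taken from the blueprint's repository.

```hcl
# Illustrative: the three infrastructure domains discussed here each expose a Terraform
# provider, so one plan set can drive the edge cloud, the interconnect and the public cloud.
terraform {
  required_providers {
    metal = {
      source = "equinix/metal"      # Equinix Metal bare metal (edge cloud)
    }
    equinix = {
      source = "equinix/equinix"    # Equinix Fabric interconnection
    }
    azurerm = {
      source = "hashicorp/azurerm"  # Azure public cloud core
    }
  }
}
```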
The orchestrator then, through its Git integration, pulls the appropriate Terraform plans and other information from Git and starts executing the workflow based on the prescribed topology and design. In the orchestrator we have capabilities such as invoking Terraform to provision infrastructure in the public cloud core; in the demo you'll see how we do it with Azure. We can also access edge cloud resources in the edge data centers; in this case we use the Equinix bare metal cloud, called Equinix Metal, to bring up servers and install Kubernetes. There is interconnection between the edge clouds and the public clouds that enables connectivity between the edge cloud and the public core cloud, both for control and management of the application and for transferring some of the data. On the left-hand side you see the operators; operators provide the networking capabilities. In our demo you'll see the ability to connect an IoT device over a 4G/5G network, pass its data to the edge application running on the edge infrastructure deployed through this method, process that data, and publish the results over the interconnection to the public cloud. On the right-hand side of the public cloud edge interface you see the capabilities I just described: we provide northbound APIs for Git integration, dynamic cluster registration, application deployment, and automation of the deployment of application instances. We also provide APIs and capabilities to run Terraform as the main engine, giving us a uniform way to orchestrate the provisioning of the infrastructure across all of these domains. At the edge we demonstrate interoperation with Kubernetes, but we have also tried OpenStack, and we have plans to deploy 5G functions in the same manner as other applications, based on Helm charts provided by the developers.

This diagram shows the integration architecture, but it is also how we do what we do in the demo. Across the bottom, from left to right, we have a UE attached to the 4G/5G network that is interconnected to the edge cloud, and the edge cloud is provided by Equinix Metal. In the edge cloud we have the edge infrastructure, bare metal resources that we bring up and orchestrate. Part of that orchestration involves deploying the Kubernetes cluster itself; later we deploy the composite application on top of that cluster, along with BGP routing in order to connect back to the cloud. Moving further to the right, we have the interconnection fabric. For this demonstration the public cloud edge, or edge cloud, is in the Dallas region, and the public cloud core is the Azure cloud, privately interconnected to it through Equinix Fabric across the US. We terminate that in a private ExpressRoute circuit within Azure; within the Azure cloud we build private peering so that the private circuit provides connectivity all the way from the cloud to the edge cloud and to the application pod resources. Within the Azure cloud we also instantiate the IoT Hub, the IoT infrastructure, and, for testing, a test VM connected to a VNet. All of this is built using the orchestration at the top.
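To make the Terraform-driven provisioning concrete, here is a minimal sketch of what a plan for bringing up the Equinix Metal edge server could look like. This is an illustration, not the actual plan from the project's repository: the project ID, metro, server plan, and hostname are placeholder assumptions.

```hcl
# Minimal illustrative Terraform plan for an Equinix Metal edge server.
# project_id, metro, plan and hostname are placeholders, not the demo's actual parameters.
terraform {
  required_providers {
    metal = {
      source = "equinix/metal"
    }
  }
}

provider "metal" {
  auth_token = var.metal_auth_token
}

variable "metal_auth_token" {
  type      = string
  sensitive = true
}

variable "project_id" {
  type = string
}

resource "metal_device" "edge_node" {
  hostname         = "pcei-edge-01"
  plan             = "c3.small.x86"   # server type selected in the design studio
  metro            = "da"             # Dallas edge location
  operating_system = "ubuntu_20_04"   # OS selected in the design studio
  billing_cycle    = "hourly"
  project_id       = var.project_id
}

output "edge_node_public_ip" {
  value = metal_device.edge_node.access_public_ipv4
}
```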
But before any of this can be built, if I can draw your attention to the top of the picture, we provide what we call the infrastructure design studio. The user logs in and defines the topology based on the requirements: for example, selects the edge cloud, selects the core cloud, understands the latency requirements between the two, and provides authentication parameters and other user input in order to bootstrap the orchestration. In GitHub, or GitLab, we have a repo that holds the Terraform plans, or templates, describing what we want to happen in the Azure cloud, in the interconnection, and in the Equinix bare metal service, Equinix Metal. All of this is parameterized and defined in the Terraform templates or plans. We also have Helm charts that define the actual applications you see in the edge Kubernetes cluster, Azure IoT Edge: all the pods, all the authentication parameters, the custom resource definitions, plus the BGP routing provided in this case by kube-router as an example. And we store cluster configs: once we deploy the bare metal and install Kubernetes on top, we grab the Kubernetes config file and put it into the Git repository so that the orchestrator can onboard the cluster into its cluster registry.

Once the infrastructure design is completed, based on the desired topology and the relationships between core cloud, interconnect, and edge cloud, we execute two actions. We provision the infrastructure first: under the hood we invoke the Terraform plans that build the edge cloud by bringing up the Equinix bare metal server, installing Kubernetes on it, and preparing it for connectivity. We also provision infrastructure in the Azure cloud: we build the ExpressRoute circuit, the gateway, the VNet; all of that is built by Terraform. Then we link the edge cloud to the core cloud over Equinix Fabric, also using Terraform, building the virtual connectivity across. Obviously, the underlying physical infrastructure is already in place, but the big point here is that all of those infrastructure components expose Terraform providers, so that orchestrators like the one shown here, the multi-domain orchestrator, can take advantage of them.

Then EMCO, the Edge Multi-Cloud Orchestrator part of the orchestration layer, deploys the applications. A northbound API call triggers the actual application deployment, which is Azure IoT Edge, based on the Helm charts stored in the Git repo, and places that application on the edge server. The application deploys based on the information provided in the northbound API calls, authenticates back to the Azure cloud, and the pods start running. Then, end to end, we will see the ability to send IoT data in a low-power IoT format: compressed, or encoded, readings for temperature, air pressure, and humidity. We send this to the edge application, and the edge application processes the data and publishes it to the cloud, all over this infrastructure.
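As an illustration of the Azure-side provisioning just described (ExpressRoute circuit, private peering, and so on), here is a minimal sketch of what such a Terraform plan could look like. The names, resource group, peering location, bandwidth, ASN, and peering prefixes are assumptions for illustration, not the values used in the demo.

```hcl
# Illustrative sketch of the Azure side: ExpressRoute circuit plus private peering.
# Names, locations, bandwidth, ASN and prefixes are placeholders, not the demo's values.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "pcei" {
  name     = "pcei-demo-rg"
  location = "westus"
}

resource "azurerm_express_route_circuit" "pcei" {
  name                  = "pcei-er-circuit"
  resource_group_name   = azurerm_resource_group.pcei.name
  location              = azurerm_resource_group.pcei.location
  service_provider_name = "Equinix"
  peering_location      = "Silicon Valley"
  bandwidth_in_mbps     = 50              # connectivity speed chosen in the design studio

  sku {
    tier   = "Standard"
    family = "MeteredData"
  }
}

resource "azurerm_express_route_circuit_peering" "private" {
  peering_type                  = "AzurePrivatePeering"
  express_route_circuit_name    = azurerm_express_route_circuit.pcei.name
  resource_group_name           = azurerm_resource_group.pcei.name
  peer_asn                      = 65000                 # example edge-side private ASN
  primary_peer_address_prefix   = "192.168.100.0/30"    # example /30s for the BGP peering
  secondary_peer_address_prefix = "192.168.100.4/30"
  vlan_id                       = 543
}
```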
The orchestration workflow is defined in a workflow blueprint implemented in CDS, the Controller Design Studio, an ONAP component, as part of this orchestration project. We have actions responsible for the infrastructure provisioning, for connectivity, for deployment of the edge infrastructure, for registering the Kubernetes cluster with the orchestrator and onboarding the cluster itself, for defining the service and the composite application based on the Helm charts, and then for actually deploying that application onto the edge cluster. For the demo, we perform the actions I've just outlined. In our project we deploy EMCO 2.0, which has become a new Linux Foundation Networking project, we add the CDS Controller Design Studio, and we install controller blueprint archives in order to implement things like Terraform execution, Helm chart processing, cluster registration, application deployment, and so forth. This information is stored in Git, so we'll show you where the Terraform plans are stored and what they look like for Azure, for Equinix Fabric, and for Equinix Metal, what the application Helm charts do, and how the cluster info is stored. We design this infrastructure in the infrastructure design studio: we topologically build the edge cloud, the public cloud, and the interconnect, and we trigger provisioning of this infrastructure with the API calls that happen in the background, based on the interaction between CDS and Terraform, to bring up bare metal servers, install Kubernetes, provision infrastructure in Azure such as the ExpressRoute circuit, VNet, VM, and IoT Hub, and then connect the edge and the core with Equinix Fabric. After the infrastructure provisioning step is complete, we deploy, end to end, the public cloud edge application, which is Azure IoT Edge; we do that by dynamically registering the edge Kubernetes cluster with EMCO, onboarding the Helm charts to EMCO, and then deploying the application. The last step in the demo is end-to-end verification.

At this point, I would like to transition to the demo. What you see in front of you is the Git repository for this project. In this repository we store the various templates, including the actual code for the CDS blueprints, along with a collection of templates. As I said, we have the cluster config file itself, which we push into the repository after deploying the Kubernetes cluster on the edge compute. As you will see later in the steps, we have the Kubernetes Helm charts for the application deployments, and the Terraform plans are also stored here. For example, if we look at the Terraform plan template that we use for Azure, we can look at its structure and at the actual syntax of the resources and connections; all of this is real information, with the IP addresses you see later on being deployed into the infrastructure. So, at this point, we will log in to our infrastructure design studio.
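Before moving on to the design studio walkthrough, one more illustration: alongside the Azure template, the repo also holds a plan for the Equinix Fabric interconnection. The sketch below uses the equinix provider's equinix_ecx_l2_connection resource as I recall it; the profile UUID, port UUID, VLANs, notification address, and peering tag are placeholders, the authorization key would be the ExpressRoute circuit's service key from the Azure plan, and the exact resource and argument names should be checked against the Equinix provider documentation.

```hcl
# Illustrative sketch of the Equinix Fabric side: an L2 connection toward Azure ExpressRoute.
# UUIDs, VLANs and the notification address are placeholders; the authorization key is the
# ExpressRoute circuit's service key produced by the Azure plan.
provider "equinix" {
  client_id     = var.equinix_client_id
  client_secret = var.equinix_client_secret
}

variable "equinix_client_id" { type = string }
variable "equinix_client_secret" {
  type      = string
  sensitive = true
}
variable "azure_service_profile_uuid" { type = string }  # Azure ExpressRoute profile on Equinix Fabric
variable "edge_port_uuid" { type = string }              # Fabric port facing the edge cloud
variable "express_route_service_key" { type = string }   # from the Azure ExpressRoute circuit

resource "equinix_ecx_l2_connection" "azure" {
  name              = "pcei-edge-to-azure"
  profile_uuid      = var.azure_service_profile_uuid
  speed             = 50
  speed_unit        = "MB"
  notifications     = ["ops@example.com"]
  port_uuid         = var.edge_port_uuid
  vlan_stag         = 543                  # primary VLAN from the design studio
  named_tag         = "PRIVATE"            # Azure private peering
  seller_region     = "us-west"
  seller_metro_code = "SV"                 # Azure peering location metro
  authorization_key = var.express_route_service_key

  secondary_connection {
    name      = "pcei-edge-to-azure-sec"
    port_uuid = var.edge_port_uuid
    vlan_stag = 542                        # secondary VLAN
  }
}
```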
The design studio allows us to create topologies. First we create a new topology and name it "public cloud edge interface", just to be in line with our project. We add a description and then pick the edge location, which happens to be the Equinix data center in Dallas, and the PCC location, which is Azure US West. Here we specify the various parameters we need: for example, the operating system for the edge node, the VLANs for the interconnection, the type of server we need, and the Azure authentication and configuration, such as the connectivity of 50 megabits per second, the ExpressRoute circuit name, the resource group, and the BGP parameters that will be used later on to link the Azure core and the edge. At this point our topology is ready. We can click on, for example, the interconnection and add the specific parameters required for the interconnect fabric: the port names, the primary and secondary VLAN IDs for the interconnection, as well as where we send the notifications.

So at this point our interconnection topology is primed and we deploy the infrastructure. What's happening under the hood is that this software front end is now sending API calls to CDS, and CDS is starting the workflows, retrieving access credentials and other parameters from Git, and executing the flows. Let's take a look under the hood at what's going on: as I said, Terraform is executing. What we're seeing here is the view from the Equinix Metal portal, where this edge server in Dallas has been created; you can see it going through its provisioning boot sequence, and now the server is up and running. We can SSH to it, so we can reach it on the public IP address, and a private IP address was also assigned; we see the server characteristics. We also look at the network that was created, including the VLAN for local networking within the Equinix Metal fabric, and the connection to the global interconnection fabric through the port and the connection we specified, fabric VLAN 543.

At this point, let's SSH to the server. We get the SSH access information just to confirm that the server is up and that Kubernetes has been deployed, and we SSH to the server. We see that it has interfaces with the correct IP addressing and the routing information that was specified for connectivity to Azure. We also check that the actual Kubernetes cluster has come up: we see just the system pods, nothing else is running on the server right now. So at this point our cluster on the edge compute server is up, and we can show the config file for that cluster, which the orchestration has already imported into the cluster registry in the EMCO orchestrator. We'll also look at what's going on in Azure: the orchestrator is continuing its work of creating the infrastructure. What's happening right now is that the ExpressRoute circuit within Azure has to be created, and we see it coming up right now.
The circuit is not yet fully provisioned: the Terraform plan started executing against Azure and has to create the circuit and the private peering within it. We'll also look at the interconnection fabric to see whether those connections were actually created. You see the two connections, on VLANs 542 and 543; 543 is our primary VLAN, and the orchestrator is right now triggering the connectivity within Equinix. Once this is done, we should see that the ExpressRoute circuit has been provisioned, and the orchestration flow transitions to creating the private peering BGP. Back on the Equinix Fabric side, this was to confirm that the circuits are fully operational: right now they are there, but pending complete BGP configuration. We have one side provisioned, in Azure, but the other side, on the edge server, has not been provisioned yet. At this point we have the circuit itself enabled and ready to proceed with the BGP configuration, but for that we need the application, including kube-router, to be deployed.

Here in the EMCO orchestrator we see that a composite application has been defined, which consists of three parts. Looking back at the cluster, we see that no application pods are running yet. So we're going to deploy this application now, using the user interface just to show how it is done: we create a service instance, give it a version number, and then associate this service instance with the edge cluster we deployed in the first step. The application consists of three parts: the Azure custom resource definition for IoT Edge, the IoT Edge pods themselves, and kube-router to complete the BGP connectivity to the Azure cloud. We just triggered the application deployment; we see that the kube-router pod is up and the IoT Edge application pods are coming up as well. What is supposed to happen at this point is that the application pods will use the BGP connectivity between the edge cloud and the core cloud to register with the public cloud IoT back end.

And this is what is going on: we deployed all of the pods, we deployed kube-router, we initiated the BGP connection, and now we should see BGP coming up. For that we'll go back to Azure and look at the route table under the private peering on the ExpressRoute circuit, where we should see that the session has been established and see some of the prefixes, from the private VNet and also from the ExpressRoute circuit. Looking at the route table, we should see all the pod routes advertised from the edge server through this ExpressRoute connection, all the way into the VNet route table: you can see the pod prefixes, as well as the VNet prefix that should be known back at the edge. On the edge server we look at the status of kube-router and see that the peering with Azure has been established and the routing table has been propagated. At this point we can look at the status of the BGP peer, or BGP neighbor, from the edge server. This is just for illustration purposes; we ran BGP using kube-router.
There are other methods of doing this. We see that the neighbor has been established, and we can ping between the edge server in Dallas and the Azure cloud in Silicon Valley; the latency is about 32 to 37 milliseconds, in line with our topology definition in the application. At this point we can also look at Azure: there is a test VM that we spun up just to be able to check the connectivity. We confirm that we can reach between the test VM and the edge pod: the edge pod is listening on port 50005, we open a connection, and we make sure that we have direct connectivity. At this point we should be able to send IoT messages from the device itself. For that we have a simulated IoT client connected to the 5G network; it's running on a device attached to the 5G network, or 4G network in our case. We see that we can send messages to port 50005 from the IoT device on the access side of the network, to the IoT Edge application running on the edge. What we're seeing is that the client is sending compressed IoT messages: they are encoded, and if you look at the sequence numbers and the PDU being sent, this is encoded temperature, pressure, and humidity information. In the logs of the application pod running on the edge server, we see that the data has been decoded and also published to the IoT back end. To verify that, we go to the IoT back end, look at the IoT Edge definition, and see that the shadow of this edge device running on the edge server in Dallas is reflected here: we see its status and its pods. Then we look at the metrics of received IoT messages that have been decoded by the edge and published to the cloud. To do this, we look at the metrics screen and at the combined number of messages received. And this is where we are at this point: we see the messages going from the IoT device, through the IoT Edge application, all the way to the public cloud, over the interconnections that we created.