Hello, everyone. Welcome. Thank you for joining the session today. I'll be providing an introduction to the Integrated Cloud Native Blueprint, also known as ICN. A brief introduction to me: my name is Todd Malsbary. I'm an engineer at Intel. I began my career about 20 years ago working in embedded systems. Since then, I've slowly been working my way up the stack, and I joined this project a little over a year ago.

All right. What is ICN exactly? Let's break it down a little bit. ICN is a blueprint focused on meeting the needs of cloud-native edge use cases. What is a blueprint? A blueprint describes the complete stack needed to realize those use cases. This includes all of the hardware, all of the software, and all of the tools for deploying it. Also very important here is that the blueprint is tested on real hardware. ICN testing, for example, starts with bare metal servers, installs and configures all of the software, and validates that the use cases are working properly. The ICN blueprint, along with many others, lives inside the Akraino LF Edge project. Akraino provides a home for ICN and other blueprints.

Next, let's take a look at some of the goals of the project. We'll take a closer look at the whys of these goals in a minute, but for now, this will serve as a brief overview of some of the functionality ICN includes.

Zero-touch provisioning. We want to automate as much as possible. Once configured for the target environment, it should be as simple as a single button click or command to deploy the stack.

Coexistence of VNFs, CNFs, VMs, and containers. We want all of these workloads to work together under one orchestrator, Kubernetes.

Advanced networking support. With Kubernetes, we want to be able to place our workloads on multiple networks, automate the creation of the routes and VLANs needed, and have a programmable CNI to enable service function chaining.

Multi-tenancy. We want to support different quality of service per tenant, multiple VLANs, and per-slice network functions, or per-slice configuration of shared network functions. We also want to support varying levels of trust for tenant workloads, from soft to strict multi-tenancy.

Multi-site scheduling. We need mechanisms to automatically register the edges and distribute our services across them.

Service mesh acceleration. We want to maximize service mesh performance by taking advantage of, for example, hardware accelerators and eBPF.

Security orchestration. The combination of multiple tenants and multiple sites requires orchestration of the credentials needed.

And finally, as mentioned earlier, all of the above will be tested and validated on real hardware.

Why these goals? Recall that the intention of the ICN blueprint is to address the needs of edge use cases. Out of the box, Kubernetes clusters are not built for running telco network functions. Let's take a closer look at a traditional enterprise deployment and see what's needed for a telco-grade platform.

This slide shows a typical environment, with the user working with a few bare-metal private cloud clusters on the left and a few public cloud clusters on the right. Some points worth mentioning here: the number of clusters is typically small. Installation and upgrades are mostly done independently in each location. Similarly, deployment of applications is also mostly done independently in each location. The clusters are used for normal applications, while the network and security functions are deployed outside of the clusters as physical or virtual appliances.
What do we need to turn this into a telco-grade platform? We already have a rough idea from the goals of the ICN blueprint. Let's look at the needs in detail.

Support for a large number of clusters. The most obvious addition in edge use cases is, of course, the edges. From a handful of clusters earlier, we now have potentially hundreds or more. It's no longer effective to individually manage the installation and upgrade of the clusters and applications in each location.

ICN's answer to this problem is to simplify cluster management by introducing infrastructure orchestration. With this, the provisioning of clusters becomes similar to the deploying of applications. To do this, we use Cluster API, Metal3, and Ironic. A quick side note: the use of Cluster API is on the roadmap and is not complete yet; ICN is currently using its own controller for Kubernetes provisioning.

This shows another important feature of ICN: reuse rather than re-implement. Whenever possible, ICN reuses existing projects to meet its goals. The value of ICN here is in the integration and validation of the entire stack necessary to enable the use cases.

Let's take a closer look at ICN's infrastructure orchestration, beginning with a high-level view of the components. There are three main pieces. First, the compute clusters themselves. Each location contains a variable number of compute clusters which will be running the application workloads. Second, the infra-local controller. One at each location, this controller does the actual work of provisioning and upgrading the compute clusters. It runs inside a bootstrap or management cluster at each location, and bringing it up is part of bootstrapping a location. Last, the infra-global controller. Centrally located, this controller communicates with the local controllers to orchestrate the provisioning. It provides centralized configuration and provisioning of the compute clusters at each location, i.e. a single pane of glass for managing edge location infrastructure. Note that the infra-local controller can run without the infra-global controller in place; the infra-global controller is on the ICN roadmap.

Now, let's look at the infra-local controller in detail. The left-hand side of the diagram shows the components of the infra-local controller in the dashed box. The right-hand side shows provisioned clusters. The steps for provisioning are, first, install an OS onto the node. This is the job of Metal3's Bare Metal Operator. It reconciles the state (the OS to be installed and any cloud-init data) of the BareMetalHost custom resource describing a node, using Ironic, the IPMI network, and the provisioning network. Second, install Kubernetes onto the set of nodes composing the cluster. This is the job of the Binary Provisioning Agent, or BPA, operator. It reconciles the state of the provisioning custom resource describing a cluster. Using the Kubernetes Deployment (KUD) from ONAP's multicloud Kubernetes plugin, it installs Kubernetes and additional ICN components into the cluster. The user specifies only the details needed in the BareMetalHost and provisioning resources and ends up with a fully functional Kubernetes cluster.

One last note: these steps, as described, are the current ICN implementation. The use of Cluster API will augment and/or replace these components.
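To make the first step a bit more concrete, here is a minimal sketch of what a Metal3 BareMetalHost resource looks like. This is purely illustrative: the host name, BMC address, credentials secret, and image URL are placeholders I'm making up, not values from ICN's own configuration, and ICN may set additional fields. The snippet just builds the manifest in Python and prints it as YAML.

```python
import yaml  # pip install pyyaml

# Hypothetical values; a real deployment would use the site's own BMC
# addresses, credentials secret, and OS image location.
bare_metal_host = {
    "apiVersion": "metal3.io/v1alpha1",
    "kind": "BareMetalHost",
    "metadata": {"name": "edge-node-1"},
    "spec": {
        "online": True,
        "bootMACAddress": "00:1a:2b:3c:4d:5e",
        "bmc": {
            # IPMI endpoint of the node, reached over the IPMI network.
            "address": "ipmi://192.168.111.10",
            "credentialsName": "edge-node-1-bmc-secret",
        },
        "image": {
            # OS image served over the provisioning network.
            "url": "http://images.example.com/ubuntu-20.04.qcow2",
            "checksum": "http://images.example.com/ubuntu-20.04.qcow2.md5sum",
        },
    },
}

print(yaml.safe_dump(bare_metal_host, sort_keys=False))
```

The Bare Metal Operator watches resources like this and drives Ironic to power the machine on, write the image, and report the host as provisioned; the BPA operator's provisioning resource then takes over to install Kubernetes on top.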
Geo-distributed application lifecycle management. Now that we have a solution for the orchestration of the infrastructure, what about the orchestration of the applications and services themselves? As with the infrastructure, it's no longer effective to individually manage the installations and upgrades in each location. There are simply too many.

ICN's solution is to include the Edge Multi-Cluster Orchestrator, or EMCO, project. EMCO operates at a higher level than Kubernetes. Instead of creating individual deployments at each location, the operator or subscriber creates a deployment intent that describes the deployment across multiple clusters. In the diagram here, we can see EMCO acting as the central orchestrator, using the deployment intents to automate the deployment across the multiple private, public, and edge clusters.

Next, I'd like to look at the EMCO logical cloud feature. EMCO provides other multi-cluster controllers in addition to this one, but this one is particularly relevant to the need at hand. From top to bottom, we see the users, the central EMCO orchestrator, and the clusters. The users on the top left would like to deploy some microservices to edge 1 and edge 2, and the users on the top right would like to deploy some other microservices to edge 2 and edge 3. To accomplish this, the edge clusters are registered with EMCO using labels XYZ and ABC when they are created. The users can now define a logical cloud composed of the labeled clusters. For example, logical cloud 1 is composed of the clusters labeled XYZ, i.e. edge 1 and edge 2. Logical cloud 2 is composed of the clusters labeled ABC, i.e. edge 2 and edge 3. Now the users specify in a deployment intent which logical cloud to place the microservices in. EMCO's Distributed Cloud Manager, DCM, takes care of the rest, creating the needed users, access controls, and namespaces in each cluster of the logical cloud, and finally installing the microservices into each cluster. I do want to emphasize that this only scratches the surface of what EMCO can do. I will include a link at the end of this talk to the EMCO project for anyone curious to learn more.

Cloud-native network functions. The transition to a fully cloud-native platform brings some unique requirements for cloud-native network functions. A traditional pod contains only a single network interface, attached to the pod network. A network function brings with it the need to support multiple virtual and provider (external) networks in a cluster, and the corresponding need for multiple high-performance interfaces inside the pod. Also, support for service function chaining becomes important when network functions are introduced. The ICN solution is to include the Nodus project to support the creation of multiple networks and service function chains. To support multiple NICs inside a pod, the Multus project is included. And finally, support for high-performance SR-IOV-capable NICs is provided via the SR-IOV CNI.
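To give a feel for how a pod ends up with more than one interface, here is a minimal, illustrative sketch using Multus. The network name and CNI configuration are placeholders I'm inventing for this example, not ICN's actual Nodus-managed networks; the snippet builds a NetworkAttachmentDefinition and a pod that requests it, and prints both as YAML.

```python
import yaml  # pip install pyyaml

# Hypothetical secondary network; in ICN the additional virtual and
# provider networks are created and managed by the SDN controller.
net_attach_def = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "provider-net-1"},
    "spec": {
        # Delegate CNI config for the extra interface (placeholder values).
        "config": '{"cniVersion": "0.3.1", "type": "macvlan",'
                  ' "master": "eth1", "ipam": {"type": "host-local",'
                  ' "subnet": "10.10.10.0/24"}}'
    },
}

# A pod that asks Multus for a second interface on that network, in
# addition to its default pod network interface.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "cnf-example",
        "annotations": {"k8s.v1.cni.cncf.io/networks": "provider-net-1"},
    },
    "spec": {
        "containers": [{"name": "app", "image": "busybox",
                        "command": ["sleep", "infinity"]}],
    },
}

print(yaml.safe_dump(net_attach_def, sort_keys=False))
print("---")
print(yaml.safe_dump(pod, sort_keys=False))
```

With an SR-IOV-capable NIC, the delegate configuration would instead point at the SR-IOV CNI and a device-plugin resource, which is how high-performance data-plane interfaces are wired into the pod.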
In this slide, I'm going to show an example of what cloud-native network functions look like when using Nodus. I'll show how the traffic flows through the service function chain to reach the application microservices. The diagram shows a corporate network on the left-hand side, a Kubernetes cluster running the microservices in the middle, and connectivity to the internet on the right-hand side. Inside the Kubernetes cluster, the microservices sit behind an ingress gateway. The microservices communicate with the gateway and each other over the default pod network. Below the default pod network are the network functions: a load balancer, a firewall, and the SDEWAN CNF. Note that the network functions are connected to the default pod network above and also to the provider and additional virtual networks below. Each network function contains multiple network interfaces.

Let's look at some of the packet flows next. A packet from the internet to a microservice is first routed into the SDEWAN CNF via provider network 2. It then flows to the firewall via virtual network 2, to the load balancer via virtual network 1, to the ingress gateway via the default pod network, and finally to the application microservice. A similar path exists from the corporate network to a microservice: the packet enters via provider network 1, then goes to the load balancer, to the ingress gateway, and finally to the application microservice. Lastly, a path between the internal corporate network and the internet flows through the network functions without touching the default pod network. This is all possible using declarative configuration with the Nodus controller.

CNF and VNF coexistence. Not all network functions will be immediately available with a cloud-native implementation. During this transition, we will need to support legacy VNFs together with the CNFs. This presents an orchestration problem: we do not want to run one orchestrator for VNFs and a different, additional orchestrator for CNFs. To solve this in ICN, the KubeVirt project is included. KubeVirt allows us to manage virtual machine deployments as Kubernetes resources, similar to Kubernetes deployments and pods. This solves the orchestration problem at the cluster level: we are now using one orchestrator, Kubernetes, to manage both containerized and virtual workloads. At the multi-cluster level, EMCO can be used. Additionally, the features needed for cloud-native network functions are also accessible to KubeVirt. This brings us complete support for the coexistence of VNFs and CNFs in ICN.
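As a rough illustration of what a VM as a Kubernetes resource looks like, here is a minimal KubeVirt VirtualMachine sketch. The VM name, disk image, and sizing are placeholders I'm making up rather than an actual ICN-provided VNF; the snippet builds the manifest in Python and prints it as YAML.

```python
import yaml  # pip install pyyaml

# Hypothetical VM definition managed by KubeVirt alongside ordinary pods.
virtual_machine = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-vnf"},
    "spec": {
        "running": True,
        "template": {
            "metadata": {"labels": {"kubevirt.io/vm": "legacy-vnf"}},
            "spec": {
                "domain": {
                    "devices": {
                        "disks": [
                            {"name": "rootdisk", "disk": {"bus": "virtio"}},
                        ],
                    },
                    "resources": {"requests": {"memory": "2Gi", "cpu": "2"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # Placeholder container-disk image carrying the VNF's VM image.
                        "containerDisk": {
                            "image": "registry.example.com/legacy-vnf-disk:latest"
                        },
                    },
                ],
            },
        },
    },
}

print(yaml.safe_dump(virtual_machine, sort_keys=False))
```

Because the VM is just another resource in the cluster, it can be labeled, placed by deployment intents, and attached to the same secondary networks as the CNFs.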
High-performance applications. In the cloud-native setting, careful management of shared resources may be required for deterministic performance, high throughput, and reduced tail latency, particularly for performance-sensitive network functions. Included in ICN are topology management for NUMA-aware placement, the CPU Manager for Kubernetes, or CMK, for CPU pinning and isolation, and Intel QuickAssist Technology, or QAT, for hardware acceleration of cryptographic algorithms. Additionally, the ICN roadmap includes support for Intel Resource Director Technology, or RDT, for control of cache allocation and memory bandwidth.

Connect edge locations securely. With the addition of multiple geo-distributed edge clusters comes the need to connect the edge locations securely, and with that come some challenges. Assigning a public IP to every edge device may be too expensive, low-bandwidth links need to be accounted for, and overlay networks need to be supported. ICN's solution is to provide a robust, software-defined WAN network function. Built upon OpenWrt, the SDEWAN CNF provides declarative configuration, support for SDEWAN hubs acting as traffic sanitization points and gateways to private networks, centralized control of WAN and security policies, and firewall, NAT, deep packet inspection, IPsec, and load balancing functionality. The advantage of this approach is that no changes are needed to existing application microservices or edge configurations.

Let's walk through how the edge locations are connected securely for inter-application traffic. The first step is edge initialization. The edges connect to the central secure WAN hub through an IPsec tunnel. An edge may have a public IP or be assigned an overlay IP address. In this diagram, edges 2, 4, 5, and 6 will register with edge 1. Next, the edges use the public or overlay IPs assigned to create IPsec tunnels between the clusters. In this diagram, edges 4, 5, and 6 will then create tunnels to edge 2 to register with EMCO. Then the same edges may create tunnels between themselves to securely send inter-application traffic. Note that in some cases, edges may need to traverse the hub to communicate with each other.

Strict multi-tenancy. An edge service provider may assign varying degrees of trust to a tenant's applications running in a cluster. The soft multi-tenancy provided by existing Kubernetes mechanisms does not protect against kernel exploits, and the hard multi-tenancy provided by giving each tenant an isolated cluster or node is expensive to implement. ICN provides a third option: strict multi-tenancy. In this scenario, the Kata Containers secure container runtime is used in place of Docker for untrusted applications. Kata uses the same container images as the non-Kata case, but uses hardware virtualization and a dedicated kernel to provide isolation. Note that Kata support is part of the Multi-Tenant Secure Cloud Native Platform blueprint, a member of the ICN blueprint family. A blueprint family is a set of related blueprints, in this case blueprints built upon ICN.
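To illustrate the mechanics, here is a minimal sketch of how an untrusted workload can be directed to a Kata runtime in a cluster where Kata Containers is already installed. The handler name and the pod are assumptions for illustration only; ICN's actual runtime configuration may differ. The snippet builds a RuntimeClass and a pod that selects it, and prints both as YAML.

```python
import yaml  # pip install pyyaml

# RuntimeClass mapping a name usable in pod specs to the node-level
# Kata runtime handler (the handler name is an assumption; it depends
# on how the container runtime is configured on the nodes).
runtime_class = {
    "apiVersion": "node.k8s.io/v1",
    "kind": "RuntimeClass",
    "metadata": {"name": "kata"},
    "handler": "kata",
}

# An untrusted tenant pod opting into the Kata runtime; it uses the
# same container image it would use with the default runtime.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "untrusted-tenant-app"},
    "spec": {
        "runtimeClassName": "kata",
        "containers": [{"name": "app", "image": "nginx"}],
    },
}

print(yaml.safe_dump(runtime_class, sort_keys=False))
print("---")
print(yaml.safe_dump(pod, sort_keys=False))
```

Everything else about the workload stays the same; only the runtime underneath changes, which is what keeps strict multi-tenancy transparent to the tenant's images and manifests.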
Putting it all together. Now, let's take a look at the complete stack provided by ICN. On the right, we have the infrastructure orchestration that provisions the clusters. On the left, we have the centralized controller for application orchestration and SDEWAN control. The rest of the diagram shows the individual clusters in public, private, and edge locations, containing all the components necessary to meet the needs of the telco-grade platform: KubeVirt for VNF and CNF coexistence, Nodus and the network microservices for cloud-native network functions, platform microservices for high-performance applications, the Kata Containers secure container runtime for untrusted applications, and finally the SDEWAN CNF and traffic hubs for securely connecting the edge locations.

Creating an edge platform from a multitude of open-source ingredients is not easy. It requires costly development resources to identify and assemble the components, check that the components interoperate correctly, and stay on top of upstream bug and security fixes. ICN is built with clearly defined components and versions, ensuring via the CI/CD test lab that the components work together and that the integrated solution works for many edge use cases. Edge service providers can take ICN, deploy it, and provide services to their customers to bring their customers' own CNFs, VNFs, and applications to life, with ease of use and flexibility similar to a cloud service provider. In short, ICN provides a great starting point for a cloud-native telco-grade platform.

Before I end the presentation, I'd like to give a quick overview of some of the upcoming roadmap items for ICN. For persistent storage, we'll be including the OpenEBS project with a choice of storage engines: cStor, Jiva, and Mayastor. For infrastructure orchestration, we'll be replacing ICN's Binary Provisioning Agent with Cluster API's bootstrap provider, and with that change, we'll be looking at enabling multi-site infrastructure orchestration via EMCO. We continue to integrate additional device plugins for high-performance applications and network functions. A few items on the Nodus roadmap are also worth mentioning: network policy support using OVN ACLs, using the OVN load balancer in place of kube-proxy for higher performance, and IPv6 support. Lastly, the SDEWAN roadmap includes centralized control of overlay networks, improved performance through the use of Intel QuickAssist Technology for AES hardware acceleration, and improved security through the use of Intel Software Guard Extensions.

Thank you, everyone, for taking the time to listen. I hope you found it valuable. I encourage anyone who's interested in exploring further to take a look at some of the links on this slide, and definitely feel free to reach out and contact me with any follow-up questions. Thanks again.