Hello, and welcome to today's talk. Today's topic is Designing an Edge Computing Infrastructure for Industry 4.0 Deployment. In a 5G-enabled smart factory, there are machines and there are sensors. They connect to an on-prem edge cloud where you can deploy innovative applications. These applications could do quality control, they could be safety-critical applications, they could be running machine learning algorithms, et cetera, to make your life easy.

My name is Anurag Rinjan. I'm a platform architect at Intel. My background is mobile wireless computing, and I've been working on edge computing. Prior to that, I worked in the energy sector.

Let's spend a moment understanding what Industry 4.0 is. Industry 4.0 is about smart manufacturing. It's the fourth wave of industrialization; the previous three are counted as steam power and mechanization, the assembly line and mass production, and computers and automation. Industry 4.0 combines smart devices and smart infrastructure. It can help solve the key challenges of the demographic shift in developed economies. It's getting a lot of attention because it promises to level the playing field between suppliers with strong supply chains and buyers with limited market power. We have seen this with cloud computing: a common platform leads to reductions in CAPEX and OPEX, while at the same time bringing new products to market faster and at an affordable price.

The key technology building blocks of an Industry 4.0 end-to-end solution are smart devices, cloud infrastructure, and 5G communication infrastructure. In this talk, we'll focus on the infrastructure side: what are the key building blocks, or key platform considerations, for an Industry 4.0 cloud? Because we are focusing on infrastructure components, the smart devices and machines are not shown here. However, it is fair to say that machine intelligence is distributed across device and infrastructure.
Depending on the latency requirements, safety risk, et cetera, these will be the key considerations in deciding where a workload runs. With time-sensitive networking and data-flow QoS, there are network settings and flows that divide the compute workloads across three domains: operational technology (OT), edge technology, and informational technology (IT).

With that clarification, let's see what is needed to build a cloud platform for Industry 4.0 applications. We start with networking. In a legacy network with miles and miles of cabling, it is a challenge to roll out a new technology platform and to be able to troubleshoot it; these are the day-minus-one and day-zero timelines. Mobility of small devices, as well as of larger devices such as robots, is another challenge. Therefore, connectivity to the cyber-physical systems and devices is shown here as a wireless network. It could be 5G, Wi-Fi 6, or LTE, over licensed or unlicensed spectrum. Do we need a physical RAN, or can we run a cloud-native, virtual RAN? There are open platforms available with well-defined APIs and, if needed, source-code access, such as Intel's FlexRAN-based virtual RAN.

Next, we need a data plane. There are some good options available now in open source, such as OVS-DPDK, which is Open vSwitch with the Data Plane Development Kit, as well as VPP, Vector Packet Processing. For cloud-native applications, there are also container network interfaces, the CNIs, each providing its own benefits.

Third, we need to provide a proven application framework that shields applications from complexity. One common framework that keeps coming up in my projects relates to visual inferencing and visual analytics, and for that we use Intel's Open Visual Inference and Neural network Optimization (OpenVINO) toolkit.

Now, to manage the applications and platforms, we need a controller. From the cloud-native world, we can easily pick Kubernetes to do that job. Next comes the orchestrator.
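The latency and safety considerations above can be sketched in code. This is a minimal, illustrative placement rule, not anything from the toolkits discussed in the talk; the domain names follow the OT/edge/IT split, and the millisecond thresholds are assumptions for illustration only.

```python
def place_workload(latency_budget_ms: float, safety_critical: bool) -> str:
    """Pick the domain a workload runs in, per the OT/edge/IT split.

    Thresholds are illustrative assumptions, not standardized values.
    """
    if safety_critical or latency_budget_ms < 10:
        return "OT"    # on or next to the machine: hard real-time control
    if latency_budget_ms < 100:
        return "edge"  # on-prem edge cloudlet: latency-sensitive analytics
    return "IT"        # central data center or public cloud: everything else


# Example decisions for three hypothetical workloads:
print(place_workload(5, safety_critical=True))     # -> OT
print(place_workload(50, safety_critical=False))   # -> edge
print(place_workload(500, safety_critical=False))  # -> IT
```

In practice the decision also weighs data volume, privacy, and cost, but a simple rule like this captures why the compute ends up distributed across all three domains.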
An orchestrator is used for onboarding new applications and libraries, which it can then launch on request. A VIM is a virtualization infrastructure manager; it manages the virtualized infrastructure, the compute resources. In a bare-metal, or more accurately host-based, cloud setup, this role may not be as critical, because there is an alternate solution depending on how big the system is: the Kubernetes infrastructure itself should be able to manage that, and that is what we have been using mostly in my projects.

To run the cloudlet and the compute workloads, we need a hardware platform, and this consists of processors, network cards, and hardware accelerators.

Last but not least, we may want to run latency-critical workloads in our cloudlet, while it makes business sense to run non-critical applications in the public cloud. This can be done with cloud connectors, or cloud adapters. These will be specific to the devices and to the cloud service provider; for example, the digital twin of a machine from one industrial equipment provider will be different from another's. Some cloud connectors are available in the OpenNESS edge cloud kit, which we will see next.

In our example, I'll be borrowing references from a software toolkit called Open Network Edge Services Software, or OpenNESS. It is an edge-computing software toolkit for building edge platforms and cloudlets, whether for telco, on-prem, or hybrid cloud. It can be used to onboard and manage edge applications and network functions. It is built on top of Kubernetes, so it provides cloud-like agility across any type of network. On the OpenNESS GitHub pages, you can find references for onboarding and launching multi-access network functions; these are CNFs. It provides the ability to orchestrate FPGA or media accelerator cards and other hardware accelerators. Some ready-made applications and references can also be found, like multi-cloud connectivity, media transcoding, AI video analytics, et cetera.
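The cloud-connector idea above can be sketched as a tiny dispatcher: latency-critical events stay on the cloudlet, everything else goes out through a provider-specific adapter. This is a minimal pure-Python sketch; the class and method names are hypothetical and do not reflect any OpenNESS connector API.

```python
from typing import Callable, List


class CloudConnector:
    """Hypothetical cloud adapter; real connectors are specific to the
    cloud service provider (and to the device's digital-twin model)."""

    def __init__(self, provider: str) -> None:
        self.provider = provider
        self.sent: List[dict] = []

    def publish(self, event: dict) -> None:
        # Stand-in for an HTTPS/MQTT upload to the provider's IoT service.
        self.sent.append(event)


def dispatch(event: dict, local_handler: Callable[[dict], str],
             connector: CloudConnector) -> str:
    """Run latency-critical events on-prem; offload the rest to the cloud."""
    if event.get("latency_critical"):
        return local_handler(event)          # stays on the cloudlet
    connector.publish(event)                 # crosses to the public cloud
    return "offloaded:" + connector.provider


connector = CloudConnector("example-cloud")
handle_locally = lambda e: "handled-locally:" + e["name"]
print(dispatch({"name": "estop", "latency_critical": True},
               handle_locally, connector))   # -> handled-locally:estop
print(dispatch({"name": "report", "latency_critical": False},
               handle_locally, connector))   # -> offloaded:example-cloud
```

The split mirrors the business logic from the talk: keep the safety- and latency-critical path short, and pay public-cloud costs only for workloads that can tolerate the round trip.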
It provides data-plane function options and CNIs; select the one that matches your hardware, software, and application needs. There is telemetry information, which can help measure and monetize edge platforms as well as help with resource scheduling. There are other examples that may be useful for your particular applications, so it's worth exploring further.

We show here a logical view of a cloudlet built using OpenNESS. It consists of a Kubernetes master node; when using OpenNESS, you can tap into enhancements that help with networking and edge-application deployment. Then there are one, or probably more than one, Kubernetes minion (worker) nodes that run the data plane, the hardware accelerators, and the application workloads. The data-plane services shift user traffic in and out: they connect with the sensors and physical systems, collect data, and send command responses back to the actuators.

Here is a very simple example. From the physical systems and sensors, we collect data. We can do monitoring, we can do quality control, and we can schedule maintenance, et cetera, based on the sensor inputs.

On the OpenNESS portal, there is an edge-inferencing OpenNESS experience kit. There are Ansible scripts that help build and install, on the target server, a sample application and an edge-insights software framework for enabling smart manufacturing with visual point-defect inspection. It uses the OpenVINO toolkit from Intel. The easiest way to create the setup is to run the Ansible scripts. There is an Ansible host from which you run the scripts; it is only needed during installation. Based on the configuration, it will install the Kubernetes master and the Kubernetes worker, and it will build and create containers for video ingestion, video analytics, et cetera, as shown here. The Ansible scripts run in the sequence shown in this block diagram.
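The ingestion-and-analytics flow described above can be sketched as two small stages. This is a schematic pure-Python sketch only, assuming pre-computed anomaly scores in place of real OpenVINO inference; the actual experience kit runs these stages as separate containers installed by the Ansible scripts.

```python
def ingest(frame_source):
    """Video-ingestion stage: yield frames one at a time from a source
    (in the real kit, decoded frames from a camera or video file)."""
    for frame in frame_source:
        yield frame


def analyze(frame: dict, threshold: float = 0.5) -> dict:
    """Video-analytics stage: flag a frame as defective when its anomaly
    score crosses a threshold. The score here is a fake stand-in for the
    output of an OpenVINO inference call."""
    return {"frame_id": frame["id"], "defect": frame["score"] > threshold}


# Two hypothetical frames: one likely-defective PCB image, one clean.
frames = [{"id": 1, "score": 0.91}, {"id": 2, "score": 0.07}]
results = [analyze(f) for f in ingest(frames)]
print(results)
# -> [{'frame_id': 1, 'defect': True}, {'frame_id': 2, 'defect': False}]
```

Keeping ingestion and analytics as separate stages is what lets the kit containerize them independently and scale or swap each one on its own.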
This is an example demo app that plays a video of a PCB being manufactured and shows defects being detected; that is also available in the sample application.

So, in summary, what we've seen: Industry 4.0 will consist of cyber-physical systems that rely on an on-prem compute infrastructure for latency-sensitive applications. We've also seen that smart manufacturing will embrace cloud-native technology within its operational ecosystem, for agility and for lowering costs. And we've seen that Intel provides hardware accelerators and software building blocks for such systems, to deliver platforms that enable the flexibility, agility, and performance optimization that are foundational for Industry 4.0.

You can find more at these sites. For the OpenNESS toolkit, you can go to openness.org. There's also a GitHub page, which is linked from that site. You can get training on 5G, and a solution index is available from the OpenNESS GitHub page. The EIS, the edge inferencing software as we call it, the one we were referring to earlier, is also available from the GitHub page. You can download the sample application and try it out.

With that, we come to an end. Thank you very much.