Good afternoon and welcome to today's tutorial from the Linux Foundation on 5G enterprise workloads and how they will migrate from the cloud to the edge, in particular to the network edge, where telecom operators can deploy those services. I'm Zygmunt Lozinski, part of IBM's global telecommunications business, and I'm based here in the UK. Today I'll be joined by three colleagues and partners who can share their experience. First we'll have Tas Supercall, an architect from the IBM Solutions Lab in Dallas and an expert on a particular enterprise application that we've developed to deploy at the edge to help with the challenges of COVID. Mathews Thomas is also from the Solutions Lab in Dallas, and he's responsible for the design and implementation of the infrastructure that we use to deploy services for operators in field trials. And Suresh Krishnan is the CTO of Kaloom, an IBM partner based in Montreal; Kaloom developed the breakout function that's needed, the UPF in the 5G network, which allows you to connect enterprise applications or network applications to the traffic coming from devices. So let's look at what we're going to talk about today. First of all, I'd like to share an overall vision for how the edge will evolve. I've been working on edge computing for about ten years, since IBM did a very early research project with one of our clients that ultimately led to the creation of ETSI MEC. A theme that you're going to see throughout this tutorial is how the enterprise edge and the network edge are intertwined: some of the functions that are needed to make enterprise edge services work are also those that make the network edge work. We'll look at the use cases, at what enterprises are doing and what telcos are doing with regard to the edge.
We're then going to look, with Tas's help, at a real application to give you an idea of the complexity of a real-world enterprise app, and at what it takes to integrate that application and to manage it in the context of the network. But of course there are network applications just as much as there are enterprise applications, so we'll then look at how we integrate those network functions, those VNFs or CNFs, with the edge platform, with the MEC. A critical element to making that work is being able to access the traffic coming from the devices, the user equipment, that goes into the network core. For that we need a function that's sometimes called breakout, but has been codified in 5G as the user plane function. Then we're going to show you an end-to-end demo of how all the pieces fit together, all the way from ordering a complex enterprise service to it actually being deployed. And finally we'll sum up where we've got to. But the key question everybody's asking is: why now? Why is everybody suddenly interested in the edge? One piece of work that I think is helpful here is Gartner's prediction last year that three-quarters of all the data generated within the enterprise will be both created and processed at the edge over the next five years, up from a tiny fraction now. There are several reasons behind that. One is that there's just an awful lot more data, and an awful lot more devices generating that data; you can begin to see that in some of the things we're talking about, whether they be aircraft, wind turbines or kiosks. But secondly, from both a performance and a security point of view, it's attractive not to process that data far away in the cloud but to do the processing much closer to the point of delivery, at the edge.
I'd like to start the next section by sharing our vision for what the edge platform has to look like. Perhaps the single most important characteristic is that it be open to a broad range of developers and companies to build applications and components that sit at the different edge locations, and that this in turn gives the service provider an opportunity to monetize those value-added services and applications in their networks. So let's look in a bit more detail at some of those requirements. Let's think about an autonomous, intelligent edge. What would that mean? Applications are already being developed with the public cloud as their starting point, but that cloud is moving to a more complex model, which is not just a single cloud somewhere. It's moving to hybrid cloud, which includes multiple clouds, and that will also include edge computing, where workloads can move seamlessly between cloud data centers and edge platforms. And the edge can be anywhere from the end device itself, perhaps in a warehouse or a factory, to the cloud, but also potentially in the telecoms network itself. That's the opportunity a lot of telecom operators are really interested in, because it means they can offer platform as a service and software as a service, delivered from the locations in which they've already invested in order to bring connectivity to billions of people worldwide. We can look at different types of workload. From the point of view of the service provider, 5G optimization is probably the first thing we'll see: deploying network-specific workloads at different locations in the network. We can think of value-added services that improve the efficiency of the network, like a content distribution network that caches video.
We can think about video transcoding that takes video being delivered to a mobile device with a small screen down from HD or 4K to a bit rate that doesn't take so much bandwidth, which means more people can share the experience. And of course there are radio access network workloads that are part of that too. Then there's the opportunity for service providers to monetize their 5G network, where they will be delivering services to the enterprise, which is after all the driving use case for 5G. In those cases we're looking at applications in the area of the Internet of Things, and at applications that process a lot of data: analytics and machine learning as well as video processing. On top of that, we also need the ecosystem of partners that actually help to deliver those services. And one thing that's going to be critical, because a lot of people will depend on those services, is the role of the operators in providing and ensuring the security of those edge platforms. But there's a challenge: the edge is large. We're not talking about hundreds of sites; ultimately we'll be talking about tens of thousands or hundreds of thousands of locations. That means automation, brutal automation, is essential, so that it's possible to run that network and all those edge applications in a lights-out manner. So technologies like containers, and automation technology like operators that help to drive those containers, become critical to making this vision work.
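To make the "build once, stamp out everywhere, lights out" idea concrete, here is a minimal sketch, not the actual platform tooling, of how a single container workload could be rendered as one Kubernetes Deployment manifest per edge site from a single template. The site names, the image reference and the `edge-site` node label are illustrative assumptions, not anything from the trials.

```python
def edge_deployment(site: str, image: str, replicas: int = 1) -> dict:
    """Return a Kubernetes Deployment manifest targeting a single edge site."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": f"video-analytics-{site}", "namespace": site},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": "video-analytics"}},
            "template": {
                "metadata": {"labels": {"app": "video-analytics"}},
                "spec": {
                    # Pin the pods to nodes labelled with this site, so the
                    # scheduler keeps the workload at the right location.
                    "nodeSelector": {"edge-site": site},
                    "containers": [{"name": "analytics", "image": image}],
                },
            },
        },
    }

# Hypothetical site inventory; a real deployment would have thousands.
sites = ["milan-metro-01", "milan-metro-02", "rome-metro-01"]
manifests = [edge_deployment(s, "registry.example/analytics:1.0") for s in sites]
```

In practice an operator or a GitOps controller would apply these manifests and reconcile them continuously; the point is that the per-site differences reduce to data, not hand-crafted configuration.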
Now, a second key point is that, from the point of view of the application developer and of the service provider, what you really want is confidence that you can run any workload, with any set of resource requirements, anywhere it is technically capable of running: on a server in the enterprise location, in the service provider's network itself, potentially in the service provider's core network. And it shouldn't matter whether that workload is containerized, virtualized or cloud native. From the point of view of the application developer and the enterprise, those are critical requirements; therefore, from the point of view of the service provider implementing an edge cloud, that's an important feature too. One of our beliefs is that we can look at the sort of stack that's required to do that. Today, applications are being built using containers as their base components, so the container platform becomes the foundation both of the cloud and of the edge; you can look at platforms like OpenShift as enabling that. Underneath that, of course, we need to run it in a distributed way, across many different instances, so we start to talk about hybrid cloud in the sense of being able to distribute our container platform across as many places as required. And on top of that, each enterprise may have multiple applications, tens, perhaps hundreds of them. So they in turn want the management and deployment of the edge applications to be zero touch, just as much as a telco network operator wants the deployment of the infrastructure to be zero touch. That's the layer where we see the automation of the application lifecycle being key.
And then on top of that we can talk about the types of enablers that application developers need, whether they be libraries for artificial intelligence, machine learning and analytics, support for Internet of Things applications, or blockchain to enable trust and security. The service provider then needs to find a way to attract and support the developers of those workloads. From the point of view of the software vendors who are creating the new applications, they want a common cloud platform with a common CI/CD process and with guarantees of security and automation, which is why we're focusing on the container platform; OpenShift itself is certified on all the major public clouds and within the NFV infrastructure. So that's a great start. Many enterprise applications are actually delivered by the global systems integrators, companies like Accenture and Capgemini; IBM is obviously one of those, but there are many others globally. Those companies do a lot of the heavy lifting in building enterprise applications, and they will do the same for the edge. In terms of enabling the automation, the certification program that exists around OpenShift helps to remove the friction in getting new workloads built. And again, automation is a critical feature: basically, anyone who's building one of these edge applications, which may have to go into hundreds or thousands of locations, wants to build it once, deploy it anywhere and run it without having to think about what's going on. We can look inside the stack in a bit more detail. As I mentioned, people want to be independent of the underlying infrastructure, and an important point about the OpenShift container platform is that it is indeed independent of physical infrastructure, virtual infrastructure, cloud or edge. That's important. And it works with multiple vendors of hardware, because that's important too.
On top of the underlying container platform, obviously, there are different application enablers, whether they be a service mesh like Istio, or visualization and monitoring tools like Grafana, Kibana and Prometheus. Those are all elements that application vendors want to know they have access to. Going beyond that, they also want to know that they've got access to artificial intelligence and machine learning environments, and there's still a lot of work being done in lots of different places about what the right choice is. Is it TensorFlow? Is it CognitiveScale? What matters is that there's a choice. And then, finally, we can look at the underlying workloads that will run at the edge, and we believe that the same platform should support both the network edge and the enterprise edge, because that will reduce the cost for operators and service providers. In terms of enabling the enterprise edge, that means you've got to enable IoT applications, because that's what's going to drive the business. But you've also got to enable particular types of edge application, and we see the first wave of applications at the edge doing things like processing video and augmented reality, potentially gaming, and potentially enabling automotive or drones because of the relatively low latency. But equally, that platform has to support network functions, whether they be the existing elements of the 4G and 5G network or, in particular as we look at the edge, the Open RAN elements that make up the future radio network. All of those elements are necessary, and on top of it, as I say, you need a management platform that makes this stuff zero touch. Every industry is driven by its own particular ecosystem: there are companies that specialize in developing technology, supporting that technology, building products.
And one of the critical things about OpenShift is that it enables those ecosystems. It builds on Red Hat Enterprise Linux, and there's a widespread community around OpenShift Commons to help with innovation. It doesn't matter whether it's machine learning or big data management, whether it be financial services, or the application platforms and databases that everybody uses: there's a common ecosystem that will enable the edge to take advantage of cloud technology just as much as the hyperscalers have done with the cloud. We then come to the question: where are these edges? Everybody talks about the edge, but what do you mean by it? It's worth listing out all the different things that are edge locations. In many cases we will have edge devices. Some of those will just be smart devices like your existing phone, fitness devices, or telemetry devices in cars or mobile equipment. All need to be connected, all need to be able to run applications and send data back to the cloud somewhere. Then we see something that's sometimes called the on-premise edge, sometimes called the far edge, which is the platforms that exist in particular environments. Within warehouses you have robots, and the management system that drives those robots. Within Industry 4.0 and the domain of the factory you have the so-called operational technology that drives the manufacturing lines. Those environments will equally have compute nodes, and anything that runs there needs to be managed. Then we come to the network. Of course there's the radio access network itself, and in the short term that far edge will only run radio workloads; it's not going to go beyond that, though we can see a time when that will change. And then you get the metro data centers, where there is the first wave of potential for deploying edge workloads, either to do network optimization or to offer enterprise services.
And then there are, of course, larger data centers in the telco, in the regions and in the core, where again there is the possibility to get started by deploying an edge infrastructure: basically a cloud that you allow enterprise customers to deploy their applications onto, because it already saves them infrastructure and means they can run applications closer to the point they're needed. And finally we do have the cloud itself, and there are some locations within the cloud that try to provide different types of function and a bit of additional capacity. So that classification takes us all the way from the device on one hand to the cloud on the other, and the critical thing is that in all those cases the platform enables applications to run, where possible, anywhere on that spectrum. Of course, if you need to train a very large machine learning model, you're probably going to be doing that in the core, but you may well be deploying the trained models much closer to the edge. We can then look, in summary, at the network edge that the communication service provider can offer, whether in the local office or, potentially, in the future even at a base station. And when we look at how enterprise applications are deployed, they may have gateway devices that manage a whole set of deployed nodes. You'll see that theme coming up repeatedly: there is a hierarchy in which relatively dumb, relatively low-cost devices exist in large quantity, and those devices, whether they be sensors or something a little more capable like a dedicated edge device, rely on being able to send data back to edge platforms, whether in the network or in the cloud, to have that data processed.
Whenever I visit service provider clients, one of the questions everyone's always interested in is: what are the use cases that enterprise customers are going to want to deploy to the edge in a 5G environment? Enhanced mobile broadband is great for consumers, but we know that the business driver for 5G will be around machine-type communications and low-latency communications, and both of those are strongly focused on business customers. So the question is always: what are business customers looking to do? In this section I'm going to share some real-world experience that IBM has developed working with a service provider client and a set of enterprises in one of the early 5G and edge trials. In 2018 the Italian government decided that 5G was going to become a critical future technology, and that it was really important for Italy to develop an understanding of what 5G could be used for and to gain early experience in the development and deployment of 5G-specific applications. So they ran three national trials, giving one city to each of the major operator groups: Milan, Rome and Florence. Vodafone was tasked with building a 5G network in Milan and developing a set of partnerships with companies who could bring applications to run on that 5G network, and IBM developed a set of use cases which we then implemented and deployed to demonstrate what's possible in a 5G world. I'll go through the six use cases and then do a deep dive on one of them, showing the actual implementation architecture. The first use case was simply about using the capability to deliver high-bandwidth services to support medical students, by giving them access to much better information through their mobile devices.
A second use case, also in the medical field, was where we used a robot from one of the Italian technology institutes and enhanced it to support interaction with both patients and staff in a hospital context. The third use case, patient monitoring, is actually quite interesting because it begins to demonstrate what you can do when you have low-cost sensors that you can deploy into either the patient's hospital environment or their home environment, and then couple the data that's collected with machine learning algorithms. For example, if you deploy electricity consumption monitors, temperature sensors, and carbon dioxide and carbon monoxide monitors in someone's flat after they've left hospital, you can use them to develop a model that says: is this person getting better or not? Are they getting up at the same time each day? Are they making a cup of tea, which you can measure by looking at the electricity consumption? Is their breathing normal, or are you seeing abnormal levels of CO2? You can also spot patterns, like somebody who's getting up a little bit later each day or who isn't as active as measured by the contents of the air, and you might use that to trigger a clinical intervention: let's get someone to go and look at what's going on and why this person isn't recovering as well as they should. You can go further and spot situations that might require acute care, for example a patient who doesn't get out of bed one day. Weight monitors in a bed are pretty cheap, and you can connect them to a machine learning algorithm that recognizes what's happened. Again, that gives you real value in determining that someone might need help when they may not be able to help themselves. The remaining use cases were rehabilitation, security, which I'll look at in more detail shortly, and agriculture.
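As a hint of how simple some of these monitoring signals can be before any heavy machine learning is involved, here is a toy sketch of the "getting up a little bit later each day" pattern: a drift test over daily wake-up times inferred from the sensors. The 15-minute drift threshold is an assumed parameter for illustration, not a clinical figure.

```python
def later_each_day(wake_minutes: list, drift_threshold: int = 15) -> bool:
    """Flag a patient whose daily wake-up time (minutes after midnight)
    drifts consistently later across the observation window."""
    if len(wake_minutes) < 2:
        return False
    # Every day must be no earlier than the last, and the total drift
    # across the window must exceed the threshold.
    diffs = [b - a for a, b in zip(wake_minutes, wake_minutes[1:])]
    total_drift = wake_minutes[-1] - wake_minutes[0]
    return all(d >= 0 for d in diffs) and total_drift >= drift_threshold
```

A flag like this would not trigger care directly; it would feed the decision about whether to send someone round to check on the patient.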
And clearly, if you have access to micro-level data about temperature, soil humidity and air humidity, about what's actually going on, you can synthesize that data from individual sensors, from drones, from aerial photographs and from The Weather Company, and use it to help determine how you deal with individual crops. For high-value crops like olives or wine, micro-watering, where you actually meter the amount of water you give to each individual plant, is very efficient: it doesn't waste water, and at the same time it responds to what the individual plants need. You can also use a machine learning model to tell whether or not food is in good condition. You can learn what deteriorating food looks like, and therefore remove it from the supply chain early so that it doesn't have an adverse effect. Now, I said I'd look at one of the use cases in more detail. This one was deployed for the central railway station in Milan. Everyone's aware that many countries have video surveillance systems that at least keep an eye on areas where there are large numbers of people, but the success of those does rely on staff watching and spotting when anomalous things happen. In places like railway stations and sports stadiums there would be a lot of cameras, and you'd need a lot of staff to monitor the CCTV. An alternative is to connect those cameras to an intelligent system that processes the video and recognizes interesting things. We can, for example, train a video analytics engine to recognize ticket machines. We can train it to recognize vending machines, things that normally appear on a railway station platform. We can train it to recognize people, not facial recognition, just that this is a person; people are supposed to be on railway stations, no problem with that. And we can train it to recognize luggage.
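On top of that recognition we can flag luggage that sits for several minutes with no detected person nearby, which is the kind of rule described next. Here is a toy sketch of such a rule; the five-minute dwell time and three-metre proximity radius below are illustrative assumptions, not values from the deployment.

```python
def unattended(luggage: list, people: list, now: float,
               dwell_s: int = 300, radius_m: float = 3.0) -> list:
    """Return the ids of luggage items that have been visible for at
    least dwell_s seconds with no detected person within radius_m."""
    def has_person_nearby(bag):
        return any(
            ((bag["x"] - p["x"]) ** 2 + (bag["y"] - p["y"]) ** 2) ** 0.5 <= radius_m
            for p in people
        )
    return [
        bag["id"] for bag in luggage
        if now - bag["first_seen"] >= dwell_s and not has_person_nearby(bag)
    ]

# Detections as a video analytics engine might report them (made-up data).
bags = [
    {"id": "bag-1", "x": 0.0,  "y": 0.0, "first_seen": 100},
    {"id": "bag-2", "x": 10.0, "y": 0.0, "first_seen": 100},
]
persons = [{"x": 1.0, "y": 1.0}]
```

The interesting part is where this runs: the detections come from a model inferencing at the edge, so only the alert, not the video, needs to leave the station.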
And then we add an extra level of sophistication, where we train the model that if a piece of luggage has been unattended, sitting on a railway station platform without an associated person for, say, five or ten minutes, at the very least somebody should go and have a look at it. That's the sort of thing we can do, and it's characteristic of a lot of edge applications: we use the cloud to train the machine learning model, but we then distribute that model out to the edge, so we don't have to move large amounts of video data through the network; it's all processed locally. That has real potential benefits, and our experience suggests that edge computing is actually going to reshape a number of industries over the next decade. In financial services it potentially allows us to avoid moving sensitive data out of the places it's generated. In transportation, if we can communicate between vehicles, and between vehicles and roadside infrastructure, that potentially helps with traffic flow and reduces costs. Automated kiosks are everywhere, and the more we can do to make those things work, the better. I've already talked about healthcare. In manufacturing, everyone's interested in smart manufacturing and Industry 4.0, and clearly predictive maintenance models, which tell you when machinery is starting to act out of tolerance and therefore needs to be serviced before the quality of the goods you're making deteriorates, are important. In the distribution sector, automated warehouses are now a thing, and you want to be able to process the information from all those robots moving goods around. In the insurance sector, people are interested in pay-as-you-drive, and of course there are things to manage in the retail sector too. And then, finally, one really fun example.
Now, this is actually done by Orange in France, at the Roland Garros tennis tournament last year. They deployed a set of edge computing nodes to take the video feeds from the high-end cameras; they had basically deployed a set of experimental 6K and 8K video cameras there. They would take the video feeds and then provide an edge application that allowed the outside broadcast director to select which one was used for onward transmission. These cameras were 5G connected, but it meant that you weren't flooding the network with multiple streams of video you weren't going to use, because only the streams actually being used as part of the broadcast had to be forwarded to the production centre. The next set of use cases that are of interest to service providers are actual network use cases: not enterprise use cases, but functions within the network itself. We can break those down into two groups. There's a set of functions that are just part of building out the future network. For example, as we build out an Open RAN, the decomposed units of the radio network, the CU, the DU and the RAN intelligent controller, are all components that can potentially be moved around, separated, and moved quite close to the edge of the network; some of those will probably be the first things to actually run in the cell site, which is where the radio runs today, of course. But as we go forward, we can also see that a set of elements that were previously deployed centrally in the network can now be moved closer to the edge. Those are what I would call optimisation use cases: things that reduce the volume of traffic without adversely affecting the quality of service, for example video transcoding that's done to deliver a lower bit rate, to deal with the fact that there's a lot of load in a particular cell.
Creation of a content distribution network moves content closer to the point it's consumed, and therefore means it can be served with lower latency. That then ties into functions like video streaming, which can be done from an edge location rather than from somewhere in the cloud. Obviously there's a business relationship there that has to be worked out between the service provider and the video content owner, but there have been 20 years of companies like Akamai doing this in the fixed internet, so there's no reason why something like it can't work equally well in the mobile world. And then, in the edge locations we talk of, the 200 or so metro data centres an operator might have, you can see some of the functions that exist in the packet core running in some of those locations, whether it be IMS or the packet core itself; we're seeing that being done now in India, for example. But all of these optimisation use cases depend on one thing: you have to be able to access the traffic from the user equipment, traffic that's heading to the cloud or to a network function and that is encapsulated before it reaches the network core. So you need a breakout function. In the 4G network that was done in various proprietary ways; with 5G we now have the UPF, the user plane function, which we'll talk about in a later section. That element delivers the ability to access the 5G traffic and to process it without having to take it all the way to the core of the network, and it's a very important element; you should not underestimate the effect it's going to have. I mentioned earlier that there's a lot of interest from service providers in understanding what enterprise edge applications look like and what it takes to deploy them within a fixed or mobile network.
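Before moving on, the local-breakout idea is worth one concrete illustration. In a real deployment the SMF pushes packet detection and forwarding rules to the UPF over the N4 (PFCP) interface; the sketch below reduces all of that to a destination-prefix check, with the prefixes themselves as made-up configuration.

```python
import ipaddress

def steer(dst_ip: str, breakout_prefixes: list) -> str:
    """Decide whether a flow breaks out locally at the edge or is
    tunnelled onwards to the packet core. A real UPF matches on rules
    provisioned by the SMF; this illustrates only the routing idea."""
    dst = ipaddress.ip_address(dst_ip)
    if any(dst in ipaddress.ip_network(p) for p in breakout_prefixes):
        return "local-breakout"
    return "core"

# Hypothetical config: the enterprise app and a CDN cache live on-site.
edge_prefixes = ["10.20.0.0/16", "192.168.50.0/24"]
```

Traffic for the local application never leaves the metro site, which is exactly what makes low-latency edge services and traffic-volume optimisation possible.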
So in the next section we need to look at a real-world enterprise edge application that's been developed. We'll take a brief view of the requirements and of the architectural context, and then one of my colleagues will take you through the details of how it's implemented and deployed. Now, we're all aware of the impact of the COVID virus, and companies that are trying to open facilities safely in the current environment are having to deal with issues like how to manage the number of workers in a particular area, use or non-use of protective equipment, spotting whether people are potentially symptomatic and running a temperature, and identifying areas that we know may be contaminated because they haven't been cleaned and where people therefore shouldn't go. They also need to be able to update what they're looking for as government requirements change based on the science, and to monitor for those new requirements. Based on this, IBM has created an application for our enterprise customers to allow them to do these things. It's called Worker Insights. Its main interest here is as an example of a complex enterprise use case that can be deployed at the edge, and it includes a whole load of elements. It measures, using cameras and video analytics, whether people are maintaining an appropriate distance. It uses biometrics, for example temperature measurement, to determine if somebody is unwell or showing symptoms. You can monitor the occupancy level of a particular area and measure against it, and you can check the density of crowds. All of those things rely on a variety of sensors, whose data is then processed in an application that generates an alert if something's not going well. Tas Supercall, one of the architects in the IBM Dallas lab, is going to take you through the details of that enterprise application, Worker Insights.
Hi, this is Tas Supercall, a solution architect and specialist from the IBM Dallas Telecom, Media and Entertainment Solution Lab. Here's a quick overview of IBM Maximo Worker Insights, or MWI. MWI helps assure worker safety as people return to their workplace during or after the pandemic. It combines sensor technologies with analytics to monitor potential risks and help reduce safety-related issues and costs. MWI addresses several critical use cases, such as social distancing, understanding overall health and wellbeing, occupancy monitoring, restriction of no-go zones, managing crowd density and body temperature, detecting face masks, and providing a report of social distancing violations. These use cases utilize data from optical and thermal cameras as well as wearables, Bluetooth sensors, beacons and mobile devices. MWI leverages IBM edge technology, which brings enterprise applications closer to data sources, allowing processing and analysis to be done at the edge in near real time to deliver faster insights, improved response times and bandwidth optimization. Alert notifications and real-time and historical analytics data are also available to users based on their roles. Bringing AI to the edge allows you to act on insights closer to where the data is created. Video data captured from the cameras can be processed and analyzed at the edge in near real time using IBM video analytics and Maximo Visual Inspection deep learning models. For no-go zone monitoring, once an individual enters the off-limits area, indicated in red here, an alert will be triggered and both the worker and the supervisor will be notified. Face mask detection uses a deep learning model to determine whether an individual is wearing a face mask. Occupancy monitoring counts the number of people and keeps occupancy levels in a defined space to a logical maximum.
Crowd density management monitors the density and average proximity of people in a specific space based on people counting in the camera's field of view. For specific hotspots with many crowd density hazards, it is a clear indication that such areas often create unsafe conditions. Elevated body temperature monitoring measures body temperature using a thermal or infrared (IR) camera, typically at the entrance to the site, and generates alerts if temperature exceeds the threshold. In this view, the white bounding box means normal, while red indicates a person may have a high temperature. Social distancing shows how IBM video analytics can automatically detect people who are not meeting social distancing requirements. A green box indicates acceptable distance between people. Yellow means they are within the distance threshold, which is set to six feet. Red means they have gotten too close to others for longer than a set period of time. The social distancing duration and number of people are all configurable to fit the workplace's needs and situations. Health monitoring provides insights on the general wellbeing of workers, leveraging health and biometric data collected by IoT sensors and wearables to detect heart rate and temperature and to identify body stress and fatigue indicators. And as the heart rate goes up above the threshold, the worker will be alerted. From the Maximo Worker Insights dashboard, you see information for your environment, such as how many alerts were generated for a given criterion. Each hazard alert is collected and stored in the dashboard, where we can see them displayed in different ways. The hazards heat map can be very important for finding places in the workplace that might be more unsafe than others. You can filter by certain criteria or drill down to a specific location and go directly from the heat map to view the floor plan, and see the number of hazards by day and type from the bar graph, and by location from the donut chart.
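The social distancing rules just described, a configurable six-foot threshold with an escalation from yellow to red when people stay too close for too long, can be sketched as a small tracker. This is an illustrative sketch, not the actual MWI implementation: the `DistanceMonitor` class and its parameters are hypothetical, and it assumes detected person positions have already been converted from pixels to feet via camera calibration.

```python
import math

class DistanceMonitor:
    """Classifies pairs of detected people as green (far enough apart),
    yellow (within the distance threshold), or red (within the threshold
    for longer than the allowed dwell time). Hypothetical illustration."""

    def __init__(self, threshold_ft=6.0, dwell_s=5.0):
        self.threshold_ft = threshold_ft   # configurable, per the tutorial
        self.dwell_s = dwell_s             # "too close for too long" limit
        self._close_since = {}             # pair -> timestamp first seen too close

    def update(self, people, now):
        """people: list of (x, y) positions in feet; now: timestamp in seconds.
        Returns {(i, j): status} for every pair of detections."""
        statuses = {}
        for i in range(len(people)):
            for j in range(i + 1, len(people)):
                key = (i, j)
                if math.dist(people[i], people[j]) > self.threshold_ft:
                    self._close_since.pop(key, None)   # reset the dwell clock
                    statuses[key] = "green"
                else:
                    first = self._close_since.setdefault(key, now)
                    statuses[key] = "red" if now - first > self.dwell_s else "yellow"
        return statuses
```

Keeping the thresholds as constructor arguments mirrors the point made above that the distance and duration are configurable per workplace.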
The hazard trends can help you see what times of day there are more hazards than others. You can see and compare the hazards that happened in a certain time interval and get total counts of each hazard, as well as see more detail by scrolling down to the table showing all individual hazards that have occurred. You can also filter to look for hazards for a specific shift. I hope this helps give you a better understanding of Maximo Worker Insights and how the solution can enable healthier and safer interaction in the workplace. The solution can also be used in many industries such as construction, retail, utilities, stadiums, insurance and more. For additional information, please visit the IBM Maximo Worker Insights, Watson Works or IBM Edge Application Manager pages. Thank you. So let's think about how we integrate the edge platform and the actual network itself. We've listened to some of the use cases; how do we actually do the technical integration? It's first worth considering the different approaches to how you might build the edge platform that you'll use to run network functions. And it's important here to highlight the importance of being open. Now, when we started the work in ETSI MEC, its intention was to allow both network and enterprise workloads, and it did focus on having an open interface to the network. But although that breakout function was identified, it was never specified in a way that was implementable. And so as a result, what you ended up with was a set of edge appliances from different vendors which were vertically integrated and each had a different way of performing breakout, what's now the UPF function. And in both cases, they don't really support the development community very well. They typically use relatively old software technology and vendor-specific platforms, and they don't enable a large ecosystem of developers.
Now what we've seen in recent years is a focus on building an edge platform that is open for developers, that uses modern software technology like a container platform, and that also supports modern software development tools and processes, like a CI/CD process to enable continuous integration and continuous delivery of new functions, even if they're being deployed within the network. And so you have a wave of edge-based cloud platforms that have come from the hyperscaler community. But those are still lacking an open network interface; that's the one thing that they're perhaps missing. And so finally, what I would propose, which is what we're going to be talking about in a minute, is the approach that says we combine the best of all of these worlds. We take the good ideas out of ETSI MEC, we take the good ideas out of the cloud, and we combine them: using open source for the container environment, for Kubernetes, for Open Horizon, for managing the edge; and having open interfaces for the user plane function and open APIs to enable applications to access other functions in the network and to support a broad ecosystem of developers. And that's basically what IBM, Red Hat and Kaloom have been trying to do. We think that's an important idea. We can then look at what it takes to deploy the edge. We talked about the different locations; on the left-hand side, we can see at the top that there are a small number of clouds. Most operators will have some number of nationwide data centers, but it's a relatively small number, and the latency from them to the edge is relatively high. There are then a set of locations whose numbers and topology vary by country and by operator. So you may have some number of local data centers, and you may have a second tier of market-related data centers. In some countries, those two tiers are brought together.
But you're talking about tens or hundreds of locations, a relatively short distance from the end user's device, and therefore they do give latency benefits. And then of course you have the base stations themselves, of which there are many tens of thousands, which have very low latency but also restricted space and power. So we can't put all of the applications we might want at the base station; we're probably going to restrict the base station to being used for radio applications and maybe some optimization applications. And then of course there will be some on-premise edge locations as well. If we then look at the right-hand side, we can see the importance of that enterprise MEC, which is actually running on-premise in the warehouse, perhaps connected to operational technology, where you're managing robots or a manufacturing line. That of course is connected via an SD-WAN or an SD-LAN into the branch offices. But the top half is how the operators can provide support here. Firstly, they will obviously have VNFs and CNFs that run in some of those locations, but they can also provide a platform where they allow enterprise customers to deploy, under policy control, portions of their workload to some of those local data centers or market data centers. And so the service provider is basically creating a hybrid cloud that enhances what's available in the existing cloud with nodes that provide much better latency. In general we'll use this simplified diagram in a lot of the demonstrations that follow. We see on the left-hand side the network edge, which is typically where the radio processing runs. We see the network cloud, typically some of those central or market data centers, where we have the packet core, the virtualized control units and the voice platforms. And then the central locations typically run the OSS and BSS. And we can see that there's an assurance function and an orchestration function.
And there's also an edge application management function. These run centrally, but they control all those edge nodes, and in the case of the edge assurance and edge application manager, control actually goes all the way out to the endpoint. Now, I mentioned earlier the importance of 5G in defining the user plane function, which gives you access to traffic coming from the user equipment after it leaves the encrypted tunnel, enabling you to have a platform to process that stream of information. As I've said, the UPF is really critical, so I'm now going to ask Suresh Krishnan, who's the CTO of Kaloom, to share his insights on the requirements for that UPF and how you might actually build and deploy it. Suresh. Hi, my name is Suresh Krishnan. I'm the CTO of Kaloom, and I'm here today to talk, along with my colleagues at IBM, about how we built a platform for efficiently running edge applications along with 5G. I'm going to start off by talking about how we built the infrastructure, the lower layer of the system, and then my friend Zig will continue and talk about how the applications are layered on top of it, how the workloads come on and how they're handled. And we'll go through a demo to show you how everything falls into place together. To start off, I'll give you a brief recap of what's different in 5G, and then we'll go a little bit further out. I'm pretty certain that most of you are aware of all these things, but these are the things I just wanted to highlight as new in 5G. So 5G is the latest generation of mobile network technologies. It's got some things that are new in the core of the system, along with the radio as well. And one of the key differentiators is that it promises much higher throughputs when compared to the current generation of technologies, which is based on LTE.
So for example, we can do multiple tens of megabits per second of throughput for a single device in current-generation networks, but with 5G we can hit even multi-gigabit speeds. And these have been demonstrated not just in demo networks, but in live networks as well. 5G also supports much lower latencies when compared to 4G networks. For example, in a network today you'll be lucky to get a round-trip latency of 20 to 30 milliseconds, but with 5G we can go much lower than 10 milliseconds and even down to one millisecond. This opens up wireless networks to a lot more use cases than were previously possible. If you look at things like factory automation, public safety, and AR/VR, which have very strict bandwidth or latency constraints, they're capable of running in these kinds of 5G networks. Next, 5G uses a large set of frequency bands to provide all these characteristics. There are low frequencies and high frequencies, and a lot of the new frequencies that are coming on board, out in the millimeter-wave range, are the ones that are going to provide a lot of the throughput that's promised here. The downside of having a lot of the new frequencies in higher bands, 30 gigahertz plus, is that the coverage area is much smaller than we're used to today. We're talking about 100 meters of coverage or so. So you're going to have infrastructure that's much more distributed out to the edge, getting closer to the user, so we can get all these phenomenally high speeds. Along with 5G, there's also a significant increase in applications that are moving to the edge. Part of it is because of 5G; part of it is the need to get closer to the user.
One of the snippets that we heard from Gartner is that Gartner predicts about three-quarters of data is going to be processed outside traditional centralized data centers and is going to get much closer to the edge. And if you look at it, that correlates at the same time with a large number of 5G connections. So the hyper-local nature of 5G, especially with the higher-frequency radios, meshes pretty well with this move to edge computing, which is happening for other reasons as well, but this is also a precipitating reason. And the edge economy is growing quite a bit. If you look at it, there's going to be a small component of this that's the connectivity, the hardware and the platform, but there's going to be this huge application economy that's really targeted towards the edge, and this is where we are seeing a lot of the movement. Right now the infrastructure is starting to get built up, but at some point, maybe in the next year or so, we expect to see a huge uptake in edge applications that are specifically tailored for the edge, trying to be close to the user and very performance sensitive. I'll try to get into some of the kinds of applications that we're seeing right now, but really everything is just limited by your imagination. There are a lot of applications we can't even think of yet, and those are probably going to come once this rolls around. Some of the things we are seeing are VR and AR.
The common thing there is that you really need low latency to make these work well, because if there's a huge lag, it disorients the person if you're doing something like VR. There are a lot of videos on the web you can go look at which show how lag affects all these things, so really low latency is the key feature here. Similarly, industrial robotics and industrial control: there's a lot of control data coming in, cameras monitoring things, a lot of sensors sending data over, and then you need to make a decision and communicate it very quickly. It's a similar thing with connected cars. If you look at some keynote presentations from companies like Intel, you're going to see that a connected car produces a lot of data, and this is going to get streamed up. Most of it is probably going to be digested and thrown away, because it's probably impossible to store all that data, but it at least trains some kind of models that are going to get used later. So we need to be able to support really high throughputs. Gaming is another one; it's both bandwidth intensive and latency sensitive. And similarly public safety. There, we're looking at things like temperature monitoring: you have a set of people, you can look at them and see if somebody's exhibiting signs of having a higher temperature, for example if there's an outbreak of some disease that can be identified by temperature.
Similarly for things like mask compliance, whether in factories where masks need to be worn or in public areas where they're required: those kinds of things can really be handled at the edge, so you can have very quick detection of what's going on. And of course remote medicine: if you're going to do robotic surgery, you'd better have pretty good feedback on what you're doing, by getting visual feedback of what's happening. Generally, all these things are characterized by the need for very high performance, whether that's high-bandwidth secure connectivity, low latency, stringent security needs or the ability to scale. And you also want to keep the cost low, because for a lot of these things the actual revenue per bit is going to be much lower than for a lot of what's happening today, like mobile broadband. Given these stringent conditions, it's going to be very difficult to handle all these applications just with the public cloud. People use AWS, Azure, Google Cloud and so on for a lot of applications, but a lot of the time the nodes are not really close enough to handle these kinds of limitations. For example, if you start doing a ping from downtown New York towards one of the nearest AWS or Azure availability zones, you're going to see two-digit-millisecond delays. And obviously this is not going to meet requirements that are in the single digits of milliseconds.
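The ping test described above is easy to reproduce yourself. As a rough sketch, the snippet below times TCP connection setup instead of ICMP ping (which requires raw-socket privileges), on the assumption that connect time is dominated by network round-trip time; the function name and defaults are hypothetical, not from any product.

```python
import socket
import time

def tcp_rtt_ms(host, port=443, samples=3):
    """Median TCP connect time to host:port in milliseconds, a rough
    stand-in for ping RTT. Returns None if no connection succeeded."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        try:
            # Connection setup costs roughly one network round trip.
            with socket.create_connection((host, port), timeout=2):
                pass
        except OSError:
            continue
        times.append((time.perf_counter() - start) * 1000)
    return sorted(times)[len(times) // 2] if times else None
```

Running this against a distant public cloud endpoint versus a nearby edge node would show the two-digit versus single-digit millisecond difference Suresh describes.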
We talked to a lot of operators; we actually engaged a consulting company to go talk to operators about where they're going to deploy their workloads. And if you look at it, there's a whole group of operators going really very far out to the edge, and the numbers just floored me: about 97% of operators want to do some kind of workload execution very far out in the network. That means somewhere like a central office, and there are thousands of those around any decent-sized country. So what's really special about the edge? There are a couple of things that differentiate it from regular data centers. First of all, there's a limited amount of power available. A lot of the locations where these edge data centers and computing systems are going to go don't have that much power to feed them. If it's someplace really close to the cell towers, for example, there's really not much power. But there are also larger data centers close to the edge that have a lot of legacy equipment, or are old, so they don't have the same kind of power density per rack that we assume today. So power is a significant constraint. Similarly, there's a space constraint. Again, going very far out, you probably have room for two or three servers; or if you have some kind of central office, you probably have room for about 20 or 30 servers, best case. And latency is also a key thing: you really want to go as far out as needed to meet the latency requirements of the applications you want to run. So how do we optimize for the edge?
We started tackling this problem from the ground up: how do we build something that's optimized for the edge? There are a couple of themes that run through this. First of all, we wanted a unified platform that combines a lot of the functionality together. You take the compute, networking and storage and put them onto a common kind of platform. That eliminates duplicate functions and it allows things to scale down, because if you have the same kind of hardware and the same kind of platform running all these things, you can scale down as needed; if they're disparate pieces, then you need to have one of every kind, so it's really difficult to scale down. The second thing is integration. There's a need to do access termination: if you have a 5G connection coming in, you need some kind of processing of the 5G packets before the application gets to them. A lot of the time, this kind of processing gets done by the same resources that need to run the application; I'll get to that in a bit. Our idea is to say, hey, can we fold this into the networking stack and terminate the access directly without using payload resources? And the third thing is really to reduce complexity. If you have a few data centers that are more centrally located, say 10 across the country, it becomes easy to manage even something that's reasonably complex. But once you go to hundreds or thousands of locations, you really need to cut down on the complexity and try to make everything as similar as possible, as automated as possible, and so on. So complexity is one thing, and similarly saving costs.
This integration and homogenization of the platform reduces the cost quite a bit. In addition, we also tried to do something for workload orchestration. We have a platform that does networking at the edge and provides edge computing features, but also workload orchestration for the applications that come in. If you can integrate all of that, you can cut out quite a bit of the overhead and end up with something that's very, very efficient. So this is an illustration of what the traditional setup looks like. You're going to have a management switch, really for out-of-band access: you want to make sure the system comes up, boots off the node that it needs to boot from, and gets its connections out. Then you have some kind of networking fabric with a control plane that needs to run for it, and that's going to take a bunch of servers. Then you're going to have your fixed-function fabric switches, and you probably want two of them for redundancy, wired with LAG or MC-LAG so that any single node failure or link failure doesn't affect connectivity into the compute side itself. And then you run some kind of workload orchestration. In this case, if you want to run containers as workloads, you need to run Kubernetes masters; you take some more servers, three for example, to run a reliable Kubernetes system. And then finally, the piece I talked about: if you want to do access termination, if you want to run a 5G UPF on x86, you're going to take a bunch of servers just for doing that, to terminate the GTP packets and perform the 5G processing before you hand the traffic off to the server.
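For context on what "terminating the GTP packets" involves: each user packet arrives at the UPF encapsulated in a GTP-U header (3GPP TS 29.281) that carries a tunnel endpoint identifier, or TEID, identifying the session. Kaloom's UPF does this in the switch ASIC, not in Python; the sketch below only illustrates the fixed 8-byte header a UPF must parse before it can break traffic out to an application.

```python
import struct

def parse_gtpu_header(packet: bytes) -> dict:
    """Parse the mandatory 8-byte GTP-U header (3GPP TS 29.281).

    Layout: 1 byte of flags (version in the top 3 bits), 1 byte
    message type (0xFF = G-PDU, i.e. encapsulated user data),
    2 bytes payload length, 4 bytes TEID.
    """
    if len(packet) < 8:
        raise ValueError("truncated GTP-U header")
    flags, msg_type, length, teid = struct.unpack("!BBHI", packet[:8])
    return {
        "version": flags >> 5,        # should be 1 for GTP-U
        "is_gpdu": msg_type == 0xFF,  # user-plane data vs. signalling
        "length": length,             # bytes following the mandatory header
        "teid": teid,                 # tunnel endpoint identifier
    }
```

A UPF uses the TEID to map each packet back to its session and apply the right forwarding rules, which is exactly the per-packet work being offloaded from the x86 servers here.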
So you're going to have a limited number of servers left over. In this case, on a 20 RU system, you're going to have about 8 RU left over for application servers. This is the stuff that the operators can actually monetize. So when we unify these things, we take all of the items identified on the left as overhead, the network controller functions, the Kubernetes masters and the access termination, and squeeze them all into the networking subsystem. So we have an optimized switch; this is based on an Open Compute reference design, with switches manufactured by Edgecore, and we run the access termination directly in there. The 5G termination happens directly on the switches, and the switches use the Intel Tofino ASIC, which is P4 programmable, so we can actually update them very frequently and add newer functionality. That's how we're able to add the 5G processing directly into this. We also run an OpenShift cluster, the Red Hat OpenShift Kubernetes distribution, on top of these switches. The switches themselves run the OpenShift masters, and all the control plane workloads are actually running on the switch itself, on the CPUs sitting in the switches. That gives us pretty much the whole of the available servers to be monetized. So in this system, for example, you're going to have 16 servers available to be monetized by the operator: with the same amount of space and the same amount of power, you have almost double the number of servers that can be monetized. That's the basis of the platform.
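The server-count arithmetic above can be made concrete. The talk only gives the totals (8 of 20 RU left for applications in the traditional layout, versus 16 monetizable servers on the unified platform), so the per-component overhead figures below are hypothetical placeholders chosen only to reproduce those totals.

```python
RACK_RU = 20  # rack-unit budget quoted in the talk

def monetizable(rack_ru: int, overhead_ru: int) -> int:
    """Rack units left for revenue-generating application servers
    after platform overhead is subtracted."""
    return rack_ru - overhead_ru

# Traditional stack: management switch, fabric switches, fabric-controller
# servers, Kubernetes-master servers, and x86 UPF servers all consume rack
# units. The 12 RU split is a hypothetical total, not from the talk.
traditional = monetizable(RACK_RU, overhead_ru=12)

# Unified platform: the controllers, masters and UPF run on the switches
# themselves, leaving only the switches as overhead (again hypothetical).
unified = monetizable(RACK_RU, overhead_ru=4)

assert traditional == 8      # matches the 8 RU figure in the talk
assert unified == 2 * traditional  # "almost double" the monetizable servers
```

The point of the exercise is that every function folded into the switches converts overhead rack units directly into monetizable capacity.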
Now I'm going to hand you over to Zig to continue and talk about how the applications are actually instantiated, how they are managed, and so on. Over to you, Zig. Thanks. So thank you, Suresh. Next, we'd like to look at how you integrate the edge platform that's deployed within the network with the enterprise. And this is very important for service providers, because this is basically what you need to build into your network to enable enterprise customers to make use of your network and to deploy their workloads. So let's think about the management challenges that exist at the network edge. In the world of the cloud, you have a relatively small number, tens or hundreds, of centralized locations, and they are relatively strictly managed. When we go to the edge, we're now looking at thousands of servers driving tens or hundreds of thousands of devices in thousands of locations, and those locations and those devices are continually subject to change. So the management overhead is significant, and that requires autonomous management just to reduce the cost. You can't manage an environment like this if you have to raise a trouble ticket every time somebody adds a device; it just doesn't work. So that leads to the requirement for autonomous management of the lifecycle of these workloads that run at the edge of the network. And so we have a product called the Edge Application Manager that does that. Here is its dashboard, showing a particular view of the nodes that are currently deployed in one example. And this is based on the LF Edge Open Horizon technology, which has been under development in the community for a number of years. Clearly one of the things that management has to do is manage the lifecycle of all the edge applications without human intervention. So it needs a set of rules and policies it can apply.
It needs to be secure, because we know that one of the risks in 5G is the lifecycle of unmanaged IoT devices. So we have to be able to integrate with firewalls, we have to know what the common attacks are, and we have to encrypt all of the communications messages to stop them being spoofed. And it has to be flexible: it's got to be able to deal with this complex environment, which may sometimes be disconnected when network circuits drop, so it's important that it be able to operate in that environment. If we look at the building blocks that we might need: we need some sort of application management hub where we centralize all of our administration, and then we have a device agent and a cluster agent, one of which runs on top of Kubernetes, the other of which is designed for very low footprint environments like actual edge devices. So we now have a distributed network of management nodes. The important functions are, in essence, that the centralized management node can determine what's running at the edge and what needs to run at the edge. We're going to have hundreds of thousands of potential nodes: should this one have a particular version of, for example, a machine learning model deployed there? So there's clearly a requirement to deploy and manage those models. And we need to keep track of all the nodes as they come and go: there's a requirement for a registry in which we can track all of the nodes within the network and understand their status. We've got to be able to allow all of the different components to connect together through a service mesh, but one that's secure and encrypted. And then I mentioned the importance of policy: I need to deploy this particular container, but it doesn't require very much compute, so it can go even on something with only a handful of cores; versus I need to deploy this workload, which actually needs a much larger amount.
So there's an agreement mechanism by which edge nodes advertise their capabilities, and then the application manager determines where to place workloads based on the capabilities the workload requires and what's available. To give you a brief overview of the flow: the agents register with the exchange and say they're prepared to participate in placing workloads. The node can then make decisions on what it will run, and it negotiates for applications with the centralized functions. We think of these as contracts, and you can put them in a ledger so you can actually see that the systems are behaving correctly. That's basically the underlying technology. And if we finally look at how we tie all the pieces together: whether you take a platform like this from IBM or from somewhere else, it doesn't really matter; these are the sorts of functions that you need. At the foundation, a container platform. We then need what NFV calls the lifecycle manager, which actually has to be updated to deal with containers as well; we call that the Edge Application Manager. You need an orchestration platform, which in turn will actually deploy many of these edge nodes onto the locations, and will potentially deploy them into slices as slices become part of how networks are deployed. And then on top of that, there are edge-enabled applications, from IBM and from many other companies. And there will be systems integrators providing services around the edge to help enterprises design, build and deploy those workloads onto those different locations, whether it be devices at the edge, gateways within the network, or even the cloud itself. So finally, let's try and bring all the pieces together and show you an end-to-end example of what it takes to deploy workloads from both network and enterprise together into a service provider's environment and operate them in a common way.
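The matching step at the heart of that agreement flow can be sketched very simply. Real Open Horizon policies are JSON documents with `properties` and `constraints` sections evaluated during agreement negotiation; the Python stand-in below, with made-up property names like `has_camera`, only illustrates the core idea that a node is offered a workload when every constraint the workload states holds against the properties the node advertises.

```python
# Hypothetical node properties, as an edge agent might advertise them.
node = {"cores": 4, "memory_mb": 2048, "has_camera": True}

# Hypothetical workload constraints: each maps a property name to a
# predicate that must hold on the node's advertised value.
workload_constraints = {
    "cores": lambda c: c is not None and c >= 2,  # small-footprint model
    "has_camera": lambda cam: cam is True,        # needs a video source
}

def node_matches(node_props: dict, constraints: dict) -> bool:
    """True only if every constraint holds against the node's properties.
    Missing properties evaluate against None, so they fail closed."""
    return all(pred(node_props.get(key)) for key, pred in constraints.items())
```

In the real system this check runs as an ongoing negotiation rather than a one-shot function call, and the resulting agreements are what get recorded as the "contracts" mentioned above.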
And so we're going to see now an application based on the Edge Application Manager being deployed into a number of locations. This is a high-level picture that you'll see repeated: you can see on it the network slice with the various elements in it, the network edge itself, the cloud, the hybrid cloud at the center, and then the systems centrally that manage everything. For the next section, I'm going to hand over to Matthews Thomas, who's the architect for our service provider lab in Dallas, and who's going to show you what it takes to deploy the Worker Insights application we talked about earlier into a service provider. Matthews, over to you. Thank you. Hi, this is Matthews Thomas, lead architect at IBM's telecom and media labs. I will be demonstrating a use case that illustrates a lot of the things that Zig discussed. We will show how a service can be provisioned. To provision the service, workloads need to be deployed and configured at the network layer, workloads need to be configured and deployed at the application layer, and then the service will be available. The specific details of the service are around worker safety: when an individual approaches a video camera, the camera detects that there is a person, and then and only then does it start transmitting that video to the MEC. Further analysis is done there to determine if the individual is conforming to safety regulations. Let's briefly talk about the infrastructure we have available. There are a series of endpoints including cameras, IoT devices and various other edge devices. There is the edge network, which will have the MEC running the application, and on this there will also be network components running. There will be the core network with the 5G core and other network components. There will be the data center, which runs a lot of business functions. And underlying all of this is the transport layer.
So what you're going to see is someone placing an order in the service catalog. The order is then sent to the MANO layer, which deploys the network; in this specific case an actual network slice is going to be created across the network. The application layer also needs to be deployed, so the applications will be deployed by the edge application manager to the MEC itself. And the device will have AI models deployed to it so that the service can be fully enabled. With that, we'll go into the details of the demo. Let's start by looking at the initial state of things. We have a camera, the MEC and the network; nothing has been deployed. Look at the camera and you see Shorad from our labs in front of it. Nothing is being identified or recognized; it's just a simple camera streaming video. Let's look at the setup right now. We have a set of edge devices, which could be cameras and other devices, and we have a cluster for the application running on the MEC itself. Let's look at the state of some of these things, starting with the details of the specific edge nodes we have. We have multiple edge nodes, and you'll notice that this particular edge node has certain characteristics: how much memory it has, whether it has a camera associated with it, and that there are no services deployed on it right now. Let's look at the services we have. Services are applications or models that you can deploy onto the devices themselves or onto the MEC. For example, you could have an object detection mask service that detects whether a person is wearing a mask or a hard hat, or a simple object detection service that detects whether a person or some other object of interest is present. And then we have policies, which specify and enable what can actually be deployed onto those different devices. We'll come to this in just a moment.
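The order flow just described (catalog order, then network layer, application layer, and device layer in turn) can be sketched as a simple pipeline. All names here are illustrative stand-ins, not the actual product APIs used in the demo.

```python
# Hypothetical sketch of the end-to-end provisioning flow: one order fans out
# to the network layer (slice), the application layer (MEC app), and the device
# layer (AI models pushed to matching edge devices).

def provision_service(order):
    """Walk the order through the three layers and return the resulting state."""
    state = {"order": order, "steps": []}
    # 1. Network layer: the orchestrator (MANO layer) creates the network slice.
    state["slice"] = f"{order['slice_profile']}-slice"
    state["steps"].append("slice-created")
    # 2. Application layer: the edge application manager deploys to the MEC.
    state["mec_app"] = order["application"]
    state["steps"].append("mec-app-deployed")
    # 3. Device layer: AI models are pushed to the listed edge devices.
    state["device_models"] = {d: order["model"] for d in order["devices"]}
    state["steps"].append("models-deployed")
    return state

result = provision_service({
    "slice_profile": "5g-gold",
    "application": "worker-insights",
    "model": "object-detection",
    "devices": ["camera-1"],
})
print(result["steps"])  # ['slice-created', 'mec-app-deployed', 'models-deployed']
```

The point of the sketch is the ordering: the service only becomes available once all three layers have been configured, which is exactly what the demo walks through next.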
So having established that right now there's nothing running on the camera (we could go to the MEC devices and do the same check), let's go ahead and kick off the service. We're going to create a service for our 5G environment, which will create a network slice and an application that runs using that slice. So we go through the process of submitting the order. As part of this process we use blockchain under the covers, specifically Hyperledger. As you can see, there are multiple parties involved in this transaction, and all of them need visibility into what's happening: where the services are, whether they're billing, and exactly when those services were created. All of this runs under the covers, but we're illustrating it to show that there are multiple additional components running to make this possible. The service is provisioned. As soon as it is, you'll notice that the specific 5G gold slice we saw earlier has started running. Let's take a look at its details. You'll notice there are four different components: the 5G core we mentioned earlier, the transport, the IMS, and the edge management, which is what deploys the application to the edge layer. We can take a further look at some of the details; let's take one specific example and look at the edge layer. What that does is deploy the actual application through the APIs, with an easy-to-use GUI as you can see. Let's see what the status of things is. You can see we're going through the installation process, and not only that, we can go to our OpenShift environment and see that the containers are coming up.
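Why record provisioning events on a blockchain at all? Because every party in the transaction can verify the same tamper-evident history. The toy below illustrates only the hash-chaining idea behind that guarantee; it is not the Hyperledger Fabric API, and the event fields are invented.

```python
import hashlib
import json

# Toy hash-chained log: each entry commits to the previous entry's hash, so
# editing any past provisioning event breaks every later link.

def append_event(chain, event):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    chain.append({"event": event, "prev": prev_hash,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every link; any edited event invalidates the chain."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps({"event": entry["event"], "prev": prev},
                             sort_keys=True)
        if entry["prev"] != prev or \
           hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
append_event(chain, {"party": "operator", "action": "order-received"})
append_event(chain, {"party": "operator", "action": "slice-provisioned"})
print(verify(chain))  # True
chain[0]["event"]["action"] = "tampered"
print(verify(chain))  # False
```

A real Hyperledger deployment adds consensus among the parties and chaincode for the business logic, but the shared, verifiable ordering of events is the property being used here.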
So all the necessary components to make sure that the specific service will be available are slowly coming up. We can see the installation going through, and there are multiple steps to the process. As the installation proceeds, we'll soon reach the point where it's complete, and we'll see the colors change, which indicates that the installation is done. What we also have is our service assurance dashboard, which is constantly gathering events and other items of interest as they happen. We can drill down to see precisely what is happening and get more details about these events. One of the other things we can do is get a topology view of the underlying network. So we select the topology viewer and specify the particular topology that's of interest to us. We can now see that the network components are up, and we can get into the details of each component. We can look at this from a time-series perspective, when things started and when they ended, and drill down further. What the service assurance dashboard gives you is a good view of all the events and a visual perspective of precisely what's happening. If something went wrong, it would be flagged in different colors, but fundamentally you're getting a full end-to-end view of your network, and of how the deployment of the application layer that was just started is going. Now let's go back to the application layer. We're coming back to the edge application manager, built upon Open Horizon. When we started off, there were only three policies running. Let's do a quick refresh, and when we do, we'll see a fourth policy, called object detection. What does this object detection policy do? It looks for any devices out there that fulfill specific characteristics: a certain amount of memory, and a camera.
What this policy then does is go ahead and deploy the object detection model to all the devices that match. So let's take a look at one of the specific nodes to see whether anything was deployed there. What you'll see is that this particular device has a certain amount of memory and has a camera, and now an actual service has been deployed to it; that service was not there before. Now that we know there's a service deployed here, let's make sure it didn't deploy to everyone. Let's look at another device. This one does not fulfill the criteria specified in the policy; in this case, its memory was not sufficient. So there are no services running on that node at this point. Now let's go to the camera itself. In the past, nothing would happen: it was just video streaming through. What happens now is that it is actually able to detect a person, because a model has been deployed on the camera and it is doing the recognition. Now let's go to the MEC itself. There was nothing running in the cluster before; now there is an actual application running in it, which you see right here. This application will raise alerts when something spurious happens, something that does not comply with the safety regulations. So an individual has now come in front of the camera. When they first step into view, everything looks good: they are complying with the company's safety regulations, they are wearing the hat, everything is according to the rules. But then they remove the hat, and the moment the hat is removed, an alert is issued. This alert can notify someone.
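The alerting rule just demonstrated is simple to state: analyze frames only when a person is detected, and fire an alert the moment a detected person is not wearing a hard hat. A minimal sketch, with the detector's output stubbed as plain dictionaries (the real system gets these from the model deployed to the camera):

```python
# Minimal sketch of the worker-safety alerting rule. Each detection is a dict
# like {"label": "person", "hard_hat": bool}; labels and fields are illustrative.

def check_frame(detections):
    """Return one alert for every detected person who lacks a hard hat."""
    alerts = []
    for d in detections:
        if d["label"] == "person" and not d.get("hard_hat", False):
            alerts.append("ALERT: person without hard hat")
    return alerts

# Person enters wearing the hat: no alert.
print(check_frame([{"label": "person", "hard_hat": True}]))   # []
# The hat is removed: an alert is issued.
print(check_frame([{"label": "person", "hard_hat": False}]))  # ['ALERT: person without hard hat']
```

Note the division of labor in the demo: the camera-side model does person detection (so video is only transmitted when someone is present), while the hard-hat compliance analysis runs in the application on the MEC.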
We happen to show it visually, but we've done a lot of things in terms of how someone could be notified: an alarm could go off, and a variety of other things could happen, so that worker safety is treated as paramount and implemented correctly. So just a quick summary of what you saw. We provisioned a service. In the process, we showed how the network layer was configured, with workloads deployed as necessary. We also showed how the application layer had AI models distributed to the edge devices, as well as applications sent to the MEC itself. And finally, we showed how that service was actually used: when someone came into a zone without a hard hat, the appropriate alerts were issued so that the safety regulations were enforced. With that, I'll hand it back to Zig to conclude. Thank you, Matthews. So we've seen over the course of the last hour and a half what the use cases for edge computing are, both for the enterprise and for the service provider. And we've seen what technology is required to enable you to deploy those edge applications within the network. There's one final thing I'd like to say before I wrap up. Based on our experience, this is a relatively new area, and we recommend that any service provider who is considering offering edge computing as a platform for enterprise applications should consider creating a business capability that allows them to work with the development community that builds the applications they're interested in. One of the lessons learned from the Italian 5G trial is that this is actually a very important thing to do. You have to work with the developers to onboard edge applications, in the same way you have to onboard VNFs. Once they're onboarded, they can be managed automatically.
But you do have to work with the developers of those applications, at least at this point, as you start. And that agile way of working is the starting point: I would recommend that a service provider create some form of agile program (at IBM we talk about the Garage Method) to manage the deployment of the edge platform and, equally importantly, the engagement with both the developers of the applications and the enterprises that are going to adopt them. And with that, let me say thank you to Tas, Matthews and Suresh, and thank you all for listening, participating, and asking questions in the chat. If you have any further questions, you can find us on LinkedIn. Thank you very much. Sorry, Zig, I'm not sure if we can hear you. Hey, Zig, I don't think anybody can hear you; hopefully folks can hear me. Okay, great. Sorry, Zig, we can't. Zig, I think what we were saying was: thanks, everybody, for joining the session, we really appreciate you taking the time. If there are any additional questions, please do let us know. We will address the video issue, and we are available on LinkedIn and other venues for any additional questions. Actually, I was just looking, Suresh: there was a question for you in the chat. Would you be able to address that one? Sorry, Suresh, we can't hear you either. Can you hear me now? Yes, we can. Thank you very much. Excellent. Yeah, so for the monitoring of the UPF and the CNFs, there are two real angles. One of them is real-time packet monitoring. We have in-band telemetry functionality which can track essentially all the packets going through the system in near real time, at microsecond-level granularity, and we can collect the information going through every switch, every network node in the system. This is really for debugging packets.
So if you're building a function out of microservices, with a lot of small pieces, you really want to be able to identify where a service degradation happens. That's something we can do at the packet level. But we also support the cloud-native functionality: we could use something like Prometheus for the higher-level data collection, to see how the CNFs are doing and the health of the CNFs themselves, and then manage them straight from there. Does that make sense? Yes. And I'll just add a couple of comments there, thanks, Suresh. For management and monitoring, we also use closed-loop automation, where we're able to gather all the logs, do in-depth analysis using our AI tools, make predictive insights, and have our AI systems, which have been trained for this, determine the possible resolution. We then use the MANO layer to actually make the corrections. And I think there's an additional comment about making the videos available: our plan is to make them available on YouTube. We're looking into it right now and we'll let you know. Yeah. For the next question, from M.D.: so the switches are made for running packet processing functions, and we're able to do even more than this in the switches. The control plane itself is fairly limited in functionality, so we expect some part of the 5G core to run outside the switches. For example, the SMF and the AMF, and any distributed parts of the 5G core, will probably run on a server connected to the front-panel ports of the networking switches. But most of the functions can sit directly on the switch, because we have a reasonably sized CPU complex there. The host complex is a Xeon D-class 8-core CPU, and we have quite a bit of RAM in there as well.
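On the Prometheus point above: CNFs typically expose metrics in Prometheus's plain-text exposition format on a `/metrics` endpoint, which a collector scrapes. Here is a small sketch of consuming such output; the metric names (`upf_packets_total`, `upf_health_status`) are invented for illustration, and the parser handles only the simple `name{labels} value` lines, not the full format.

```python
# Sketch of consuming a CNF's Prometheus-style /metrics output.

def parse_metrics(text):
    """Parse simple 'name{labels} value' lines, ignoring comments and blanks."""
    metrics = {}
    for line in text.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

# Example scrape body in the text exposition format (metric names invented).
sample = """
# HELP upf_packets_total Packets forwarded by the UPF.
# TYPE upf_packets_total counter
upf_packets_total{slice="gold"} 184320
upf_health_status 1
"""
m = parse_metrics(sample)
print(m["upf_health_status"])  # 1.0
```

In practice you would point a Prometheus server at the endpoint rather than parse by hand; the sketch is just to show the two levels Suresh distinguishes: per-packet in-band telemetry for debugging, versus scraped health and counter metrics for managing the CNFs.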
So we are able to run a significant amount of workload on the switches with no issues. For example, we can run multiple instances of BGP with multiple VRFs; everything runs on the switches. We can talk a little offline if you want: send us a note and we'll talk about it. But there's quite a bit of horsepower in the switches. I believe there was also a question about the tools used for the slicing. In our specific case, we were using IBM's Telco Network Cloud. Depending on the specific VNF or xNF you're using, it provides different functionality: for the transport layer we had to create a tunnel, for the 5G core we had to actually deploy it and do certain configurations, and something similar for the RAN. But ultimately, the tool that provisioned and set up the slicing was IBM's Telco Network Cloud, which we demonstrated here. That's good. And Steve, you asked a question about ONAP, and this is something we are explicitly addressing. There is a Linux Foundation project called Virtual Central Office 3.0, VCO, and we have a demo during ONES itself; I don't know the exact time of it, but we actually show how these things are integrated together with ONAP. If you cannot find it, shoot me a note and I'll send you a link; Heather and Brandon from the Linux Foundation will present it. That demo uses the same system we're talking about here to put CNFs and ONAP together and orchestrate everything. So take a look, and if you have further questions we can talk. Okay, I think we're done with questions then, and we can probably conclude the session. Thanks, everybody, for joining. We appreciate it, and if you have any questions, please do get in touch with us. Hopefully this helps you get a better idea of what we're doing with service providers on 5G and edge computing.
And just one more comment: we do have a virtual booth. Please do visit us there, where you can get more information about what we're doing.