Great. It's fantastic to be here. Good to be on stage again at ONS, I think a fabulous conference, both here in Europe as well as in the US. And now we have also extended it with Edge for next year. That's looking fantastic. So yes, as you said, Arpit, I will talk about open source, cloud native, edge and orchestration, and why that's essential for 5G. And I will start with a slide that talks about 5G, actually. I think we all see 5G being deployed now. It's being deployed in the US. It's being deployed in Europe. It's being deployed in Asia. And we see on this slide, here at the bottom, enhanced mobile broadband as the first set of use cases that 5G brings. I think you have all seen on YouTube the speed tests of happy 5G users, where you go beyond one gig and the speed just goes off the scale. That is, of course, the first thing that 5G brings to the consumers, to the community, to the industry. But I think we all understand that 5G is so much more. It's not just another gig. It's actually fundamental for the industry, fundamental for society even. And we see that in the use cases that we have to the left and the right at the top. We have the massive machine-type communication use cases, which build on sensors and will help digitize, revolutionize and disrupt a lot of different industries like logistics, smart agriculture, smart metering, et cetera. But we also see critical machine-type communication coming in at the next level of use cases, which will also help digitalization, disruption, evolution and innovation in a lot of different industries. And it is this combination, enhanced mobile broadband to really bring you the speeds and feeds, together with massive MTC, together with critical MTC, that makes 5G so important and such a different event compared to the previous Gs.
But I'm not here so much to talk about 5G, because it's not a 5G conference. I'm here to talk about what kind of technologies you need to really be able to develop, deliver and deploy 5G. And I will start by talking about cloud native. This is a slide that we also used in our tutorial earlier today. It talks about why Ericsson sees cloud native as a key part of bringing 5G to life. And when we say cloud native, we mean, of course, running in containers. We mean, of course, the software being structured according to a microservices architecture paradigm. And we mean, of course, the possibility to scale these individual microservices independently and lifecycle manage them independently. And we do that because we need the speed. We need the speed to have the fast and low-cost introduction of new services. With all these different use cases, you cannot take 12 months to roll them out. You need to be much faster. We, of course, need the scale. We need the possibility to scale from low-cost, small deployments up to serving hundreds of millions of sensors, hundreds of millions of consumers and industries. Efficient operations. With all of these use cases (and I will talk more about the orchestration and management parts of that a little bit later), it's not possible to handle them manually. You need the automation. You need the orchestration. You need to be able to lifecycle manage independently, built on microservices technologies, scale out, scale in, et cetera. So that's why we also need cloud native to really succeed with 5G. And we need the performance and the capacity, of course. We need to go all the way with containers on bare metal to get the maximum capacity out of the hardware. We also need to be able to scale the different parts of the applications, so that the user plane can scale out while we keep the control plane small for certain types of use cases. All of this needs cloud native.
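The independent scaling described above, where the user plane scales out with traffic while the control plane keeps its small footprint, can be sketched roughly like this. The function names, replica counts and the gigabits-per-UPF threshold are illustrative assumptions, not Ericsson's implementation:

```python
import math

# Illustrative sketch: scale user-plane (UPF) replicas with offered load
# while control-plane functions (AMF, SMF) keep a small fixed footprint.

def plan_replicas(current: dict, load_gbps: float,
                  gbps_per_upf: float = 10.0) -> dict:
    """Return the desired replica count per network function.

    Only the user plane scales with traffic; the control plane
    is left untouched.
    """
    desired = dict(current)
    desired["upf"] = max(1, math.ceil(load_gbps / gbps_per_upf))
    return desired

deployment = {"amf": 2, "smf": 2, "upf": 2}
print(plan_replicas(deployment, load_gbps=95.0))
# the user plane scales out to 10 replicas; AMF and SMF stay at 2
```

In a real deployment this decision would be delegated to the platform (for example a Kubernetes autoscaler) rather than computed by hand, which is exactly the delegation point made later in the talk.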
And then, of course, we need to be quicker as a vendor. We need to have the right technology, the right software development paradigm, to really do software development in a fast-moving way, with independent and empowered teams. So this is why we do cloud native. And we do cloud native for all of our applications going forward, starting with our 5G dual-mode cloud core that is being deployed, and going on to cover basically all of our applications and workloads, from network functions to OSS to BSS. So I've talked about the applications, and I'm going to continue with orchestration, coming back to the edge later. I will also actually run a demo here at the end, so it will not just be me talking; we will really show what we do from an orchestration perspective to support cloud native and 5G. But this is one example of what we also get with the cloud-native paradigm: you are able to delegate functionality downwards to the cloud platform. To really be cloud native, you need to delegate functionality for the lifecycle management of the infrastructure and of the workloads. Delegate that to Kubernetes, while you keep the policy and the control in the VNFM and the service orchestrator. So going cloud native is not just about the applications. It's equally much about the platform, as well as about the orchestration and management of the different parts of the stack. And without that, you will not be able to do cloud native. You will not be able to get the speed, the scale, the efficient operations that cloud native promises if you don't think about the complete stack. And this is one example of what you can do with cloud-native platforms. So then, coming to the edge: 5G is all about ultra-low latency, ultra-high reliability, supporting these different use cases. And that's not possible without the edge. I think you had an excellent slide about the edge earlier, Arpit.
And we are replicating that here and saying that it's not possible to deliver all of these use cases without a distributed cloud. And edge is not just about putting small form-factor hardware with an operating system and a cloud platform at a certain base station or central office. It's actually much more about being able to distribute and orchestrate applications across the complete continuum, from the central sites to the edge, all the way into the enterprises. As well as being able to orchestrate the total distributed infrastructure, so handling transport and connectivity, physical infrastructure, VMs and containers, as well as looking, from a service orchestrator, network management or VNF manager perspective, at resource orchestration, data center network automation, et cetera. So to really deliver on the edge, you need to think about the complete thing. You need to think about how to distribute the edge and the different data centers, and how to distribute the workloads, whether those are user plane functions for the packet core, as shown here to the left on the edge, or third-party applications that are being deployed together with the user plane functions to really deliver on the use cases. All of this you need to be able to handle at the edge to really succeed with the use cases. Then another thing that we have also talked a lot about, and that is super important from a cloud-native perspective and from a 5G perspective, is network slicing. And it comes back again to the same point, that cloud native is so much more than just a cloud-native platform. In order to deliver on 5G, in order to utilize and enable all the promises that both 5G and cloud native bring, you need the possibility to scale. You need the possibility to distribute. And you need to do that. You can see here that we have exemplified it with three different slices.
And it's important, from an efficiency perspective, that you as an operator are able to deliver these use cases on top of the same telecom platform that you have established. You cannot have different networks. You need to be able to spin up and lifecycle manage, SLA-wise, et cetera, all of these network slices independently of each other, quickly and automated. And therefore it's also important, as you can see in the different examples here: with the enterprise local connectivity slice, the SLAs are probably about distributing the workloads, the applications, maybe even all the way into the enterprise or close to it, while you keep some of the control nodes in the central office or aggregation site. If you look at the public transportation slice here, you will see that this is really optimized for cost. So you bring basically everything back to the central cloud, just to run it as cost-efficiently as possible, because the bandwidth is low and the requirement for low latency is not there. While for the mobile broadband slice, you go somewhere in the middle. You place the user plane function in the central office, so halfway out, to enable the high capacity, while you keep the control plane central to maximize cost efficiency. And you need to orchestrate that from a cloud perspective, a connectivity perspective, as well as from an end-to-end service perspective, across this complete distributed cloud. So then I'm coming to a slide which I think is maybe my favorite slide at this juncture for the industry. I think we have spent the last 12 to 18 months talking about cloud native and cloud-native applications. We have spent a lot of time talking about microservices and containers. But we also see that there is a little bit of a gap between the technology side of the industry and the operations side of the industry. And we will not succeed with cloud native and the promises of cloud native: quick updates, canary testing of a new delivery, et cetera.
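The per-slice placement trade-off just described (enterprise slice at the edge, public transport slice kept central for cost, mobile broadband halfway out) can be sketched as "pick the cheapest tier that still meets the slice's latency bound". The tier names, latencies and costs below are made-up illustrative numbers, not measurements:

```python
# Illustrative per-slice placement: cheapest tier first, so the first
# tier that satisfies the latency SLA is also the most cost-efficient.
TIERS = [
    ("central-cloud", 30, 1.0),   # most cost-efficient, highest latency
    ("central-office", 10, 2.0),  # halfway out
    ("enterprise-edge", 2, 4.0),  # closest to the user, most expensive
]

def place_user_plane(max_latency_ms: float) -> str:
    """Return the cheapest tier whose latency meets the slice SLA."""
    for tier, latency_ms, _cost in TIERS:
        if latency_ms <= max_latency_ms:
            return tier
    raise ValueError("no tier can meet this latency bound")

print(place_user_plane(50))   # public transport slice: central-cloud
print(place_user_plane(15))   # mobile broadband slice: central-office
print(place_user_plane(5))    # enterprise slice: enterprise-edge
```

The point of the sketch is that the slice descriptor carries only the SLA; which tier the workload lands on falls out of the orchestration logic, not out of manual per-slice engineering.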
All of these things you will not succeed with if you don't look at the complete technology stack. So you need to start at the infrastructure. You need to deploy your Kubernetes cloud platform there. You need, of course, the cloud-native applications, because otherwise you will not get there. But you also need to look at the orchestration and automation all the way down, and make sure that you realize and control the container orchestration environment, because if you hide that away, you will not be able to get all the benefits. But that's not enough. You cannot just deploy the technology stack. You actually need to extend that into the operations and ways-of-working side as well. And there, I think, there is a little bit of a gap in the industry today. So we have exemplified it here with a continuous integration, delivery and deployment chain that Ericsson is putting together today, and that we are testing with some operators, with a fantastic outcome, actually. We are shortening software delivery into operational networks from weeks to days with this. And we need to do more there. We need to do more together with the industry as well. But this is a really important part. So to really look at this whole segment, from the technology stack to the operations side, and do that as a joint effort, I think that is the call to action from me in this presentation. And then, finally, coming back to why we are here and why it's so important that we are here. Because I definitely agree, Arpit, that open source is really an enabler for 5G, cloud-native applications and infrastructure. And Ericsson has definitely realized that. Of course, we also see standardization as an important part. And I think I could almost have given the same speech: standardization and open source both need to happen and both need to be there, because otherwise we will not succeed together. You need both to really succeed in the networking industry.
But if you look at the open source situation, Ericsson is, of course, a long-term member, engaging in LF Networking, LF Edge, the Cloud Native Computing Foundation and LF Deep Learning. And we are now also adding O-RAN, stepping into O-RAN to take a good lead there together with the industry, working both on the specification side as well as on the open source side. And with that, I leave my part. I talked about orchestration and the need for it, and I think we need to not just talk, but show what Ericsson is doing. So therefore I invite Kiran Johnston, our chief architect for the OSS portfolio, on stage here. Please, Kiran, come on board. He will do a demo of network slices and automatic workload placement. Thank you very much.

Thanks, Anders. Hi, everybody. So we've seen a lot of information today about the complexity, the flexibility and the dynamism of the networks that we're building now and going into the future. And what this complexity brings is a need for more intelligent management capabilities. So what I wanted to do was just run through a quick demo of how we're working towards more enhanced, heuristic-algorithm-based placement calculations, in order to enable operators to manage their networks based more on the business need rather than on managing the specifics of the infrastructure into which they're deploying. When an operator wants to manage a service, they want to look at things like the regulatory requirements, the geolocation, the cost of running that service, what latency and throughput they'll have, and any existing nodes that might be in their network. So in this demo, I'm going to talk about the architecture on the left-hand side here. You can see we have a service orchestrator, a resource orchestrator, a service design component, and then an inventory and topology component.
And what we've done is we've created a microservice-based placement recommender, using a heuristic ranking algorithm, in order to make very quick, very fast decisions about where certain resources should be deployed in the network to optimize the service quality. So from that, maybe we can run the demo. OK, so here you can see we have a number of logical resources in our network. And when you start to look at the geographical positions, you can see it's quite important to understand what we're placing and where we're placing it in order to get the best quality of service. We have centralized data centers, one in the middle of the country, some regional data centers, and then some edge clouds. The first thing we're going to do is instantiate a service which has basic quality-of-service requirements. It needs to put some stuff centrally and then push some stuff out towards the edge. And here on the right-hand side, you can see that we have a YAML-based descriptor, which focuses mainly on those parameters: the quality-of-service parameters in terms of the bandwidth and latency that the operator wishes to get from this service. When we go to instantiate the service, we click on our service design deployment, which triggers the placement algorithm. And when we do so, second from the bottom, you'll see we define an anchor point. The anchor point is important because that's the point in the network, in your infrastructure, where you want that quality of service to be made available to your customer. And this is the only infrastructure-related input that the operator needs to provide. When you click on the Execute button, the service designer will go off and figure out which resources are best selected to meet these quality-of-service characteristics. And you can see where those resources are then going to be placed within the network.
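The demo does not show the internals of the placement recommender, but a heuristic ranking step of the kind described, filtering candidate sites against the descriptor's hard QoS constraints and then ranking the survivors by a simple score relative to the anchor point, might look roughly like this. The descriptor fields, site data, latency proxy and weights are all assumptions for illustration:

```python
# Illustrative heuristic ranking: filter candidates on hard QoS
# constraints from the service descriptor, then rank by cost plus a
# latency penalty relative to the anchor point. All numbers are made up.

descriptor = {                 # stand-in for the YAML-based descriptor
    "min_bandwidth_gbps": 5,
    "max_latency_ms": 20,
}
anchor = (40.7, -74.0)         # where the QoS must be delivered

candidates = [
    # name,            (lat, lon),     bw_gbps, relative cost
    ("central-dc",     (39.0, -98.0),  100,     1.0),
    ("regional-east",  (40.0, -80.0),   40,     2.0),
    ("edge-east",      (40.7, -74.2),   10,     4.0),
]

def est_latency_ms(a, b):
    """Crude latency proxy: scaled straight-line distance in degrees."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 * 2.0

def rank(cands, desc, anchor):
    """Drop infeasible sites, then sort the rest, best first."""
    feasible = [
        (name, pos, bw, cost) for name, pos, bw, cost in cands
        if bw >= desc["min_bandwidth_gbps"]
        and est_latency_ms(pos, anchor) <= desc["max_latency_ms"]
    ]
    # lower score is better: cheap sites win, latency breaks ties
    return sorted(feasible,
                  key=lambda c: c[3] + 0.1 * est_latency_ms(c[1], anchor))

print(rank(candidates, descriptor, anchor)[0][0])
# the far-away central DC is infeasible on latency; of the two feasible
# sites, the regional one wins because it meets the SLA at lower cost
```

Note the shape of the outcome: the edge cloud is feasible but is not chosen, because the regional site already meets the SLA more cheaply, which matches the cost-versus-latency reasoning in the slicing discussion earlier.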
And if we look at that from a geographical perspective, you can see that we've placed some stuff in the central data center in the middle of the country, and some other stuff out on the east coast, close to the end user. So now we're going to instantiate another couple of services. And here we need better quality of service, so we're going to distribute some stuff down towards a regional site. But we can also share these resources between multiple services in order to get the best utilization out of them. And you can see here the layout of those services in terms of the logical overlay. And then we'll jump into the map view to see how those services are laid out geographically in order to optimize for the throughput and latency requirements. And lastly, we'll join together the east and west coasts. Here again, the algorithm will use two anchor points to define the connectivity that's required between the two sites, and it will join the data center on the west coast to the data center on the east coast. And I think that wraps it up. So it seems like I'm reiterating a point that's been made several times already: we've built all of this largely on top of lots of open source software, with open collaboration amongst communities enabling us to provide these value-add capabilities. But of course, we need industrialized standards in order to bring these things into production and make them run in your networks in an efficient and robust way. This brings us to the platform for innovation that we need in order to enable what we're calling the next generation of advanced automation use cases. Thank you very much. Back to you.

Thank you, Kiran. Fantastic. A big hand for Kiran. So that concludes the keynote. The message is, basically, you need cloud native.
You need cloud-native platforms, applications and orchestration, and actually a holistic view across the technology stack and the operations, to deliver cloud-native characteristics and to enable 5G. And we see open source and standardization as important tools to deliver cloud-native implementations. So we welcome working together and collaborating on this. Thank you very much.