Hello, everyone. Thank you for joining us. My name is Jonathan King. I lead Cloud Strategy and Portfolio Management for Ericsson. And today, I'm going to be joined on stage a little bit later by the one and only Susan James, who leads our NFV product line. And today, we're here to talk to you about digital industrialization and the interplay between software-defined infrastructure and NFV. And just to open this up: where we sit here today, all of us are connected with cell phones and other devices. I know I have two cell phones and an iPad, and who knows what else with me. We're in that world today where billions of humans are connected. And we've built systems that have allowed this to happen. And it has not been an accident. It's involved billions, even trillions of dollars of investment. And what this slide is pointing out is that there are very impressive and powerful players out there that have worked on building these systems from an IT standpoint, and the internet providers, but that Ericsson has been working on this through a standards mechanism. And today, we connect and manage over 3 billion users through the systems that we've developed. And that gives us a certain perspective on what's coming next, where we're going to go from billions of humans to many, many billions of devices. And you can look at all the various statistics out there. I think that the Ericsson Mobility Report, which you can find online, projects that by 2020 we'll start to see 26 billion connected devices, and beyond there, you can really envision getting to 50 billion devices. And the point with this is that today's infrastructure is not going to allow us to scale to meet the needs of these billions of devices. And if you look at how the evolution from 3G to 4G, and now 4G to 5G, has unfolded, there was a time before 4G arrived when people were wondering and asking, well, what is 4G going to be used for? Why do I need 4G?
And I don't know if many of you look for the difference between 3G and LTE when you arrive somewhere, or whether you have looked at the statistics on video usage. But basically, video came along and drove the insane demand and rapid rollout of 4G. And my own belief is that the types of use cases that are going to come with augmented reality, autonomous driving, virtual reality, these are the types of things that really are going to explode demand. And if you start to look at those systems and how you actually meet their needs, it's not through the access technology alone that those systems are going to meet the needs of consumers. It's through a much more dynamic supply, a much more dynamic computing environment, that you're going to be able to achieve the kinds of computing and automated infrastructure deployment that are going to be needed for this billions-of-devices world and a host of new use cases. So what we talk about is that that's going to really require a digital factory: we're going to have to have a capacity to fully software-define and rapidly provision infrastructure in a way that we don't today. It's not going to involve entirely public cloud; it's going to have to involve on-prem infrastructure as well. And similarly, it's not just going to involve on-premise infrastructure; it's going to have to heavily involve and take into account public cloud. It's going to have to be a dynamic supply where you're going to see both of these things happening at the same time. And what this also means is that we start to look at how we think about efficiency. So if you look at the hyperscale cloud providers today, the massively scaled internet companies, they have an advantage on both dimensions: their operational efficiency is very high, and their asset efficiency is very high.
Earlier today on a panel with Intel, they were talking about how the capabilities of the underlying hardware in the Intel chipsets that are coming out actually exceed the capabilities of the software that most companies are consuming today. Most companies are still stuck in that static-supply infrastructure, where they've rolled out and built an ERP environment, maybe a VMware environment and an OpenStack environment. They have static infrastructure supply for different waves of software. The way that these hyperscale cloud providers work is they have a common system that is constantly operating and can be updated in a way that doesn't negatively impact operations. What this does is it gives these providers the ability to access things from players like Intel and others. They are constantly accessing the best-of-breed technologies. They're the first out with things like GPUs, or graphics processing units. They're the first out to explore new kinds of power configurations. There's a constant velocity that happens, and they benefit from that. So another way to look at this from a factory standpoint is that the factory is going to require that it behaves in a hyperscale way and that it also accesses and takes advantage of those who have already achieved a kind of hyperscale. Another way of saying that is that hyperscale is going to be hybrid. And if you look at how the market has evolved over the past four to five years, there was a time, if you wind back to 2011, 2012, when operators themselves thought that they were going to be one of the primary players in commercial cloud. So in the diagram up there in the upper left, what you're looking at is that you generally saw companies saying, we need to have a network cloud, or a telecom network cloud, to do our network function virtualization or software-defined networking. That's going to be a network cloud. Then in the middle, we're going to have an IT cloud.
We're going to consolidate our data centers. We're going to run our IT environments more efficiently. And we're going to have a commercial cloud, where we're going to run and offer public cloud services, because players like Amazon and Microsoft and Google and others are never going to scale to meet the global needs of security and things on prem. And what you've seen happen over the past three to four years, driven by that high asset efficiency and that high operational efficiency, is that you continue to see the public clouds expand and expand and expand because of that underlying advantage. So much so that where it sits now is, I think, a worldview more like the lower part of the diagram, where the operator cloud has started to converge, where you're seeing network and IT both coming closer together because they're software defined. And you're also seeing more and more services avail themselves of the hyperscale players. Our belief is that the left there, that telecom and IT cloud, needs to become hyperscale in and of itself. And that's where we are making investments, so that we can help our customers with products and services achieve the kind of operational and asset efficiency that these other players have, on premise. But that also means that we think it's appropriate and fitting to take advantage of the hyperscale players and their operational and asset efficiency when it makes sense to place workloads there as well. So this is really where we're coming from: Ericsson sees an opportunity, given our global scale, given the kinds of systems that we've built connecting these billions of human users, because we know the kinds of distributed systems, the software systems and hardware systems, that need to be built. And we feel like we can help operators and enterprises industrialize themselves to remain competitive in this transformation as we go to billions of devices.
And what we see depicted on this diagram is sort of our philosophy of entry. First, at the top there, is that we are a very big customer ourselves. So the systems that we're offering, the hardware and software systems that we're bringing to market, we are running ourselves, because we want, just like our customers, open, flexible solutions with industrialized automation capabilities. So what that means is that, at the top, we are going out and very actively participating in open source efforts. We are involved in dozens of open source efforts. Susan, who will talk here shortly, is on the board of OPNFV, for example. Here we are at the OpenStack Summit. We are genuinely and very actively involved in open source, and we view that as really a core part of our strategy. Additionally, we are busy forming partnerships and using public cloud services. We announced our AWS partnership earlier this year. And we think that that really is table stakes: coming in as an entrant into this cloud space, we're thinking very hard about open source and very deliberately about where the appropriate use of public cloud is and what reference architectures companies should be thinking about as they look to transform. Down below are the product areas that we're investing in. So on the lower left, and Susan will give some examples of this a little bit later when she talks to NFV, we have brought to market a software-defined infrastructure offering called HDS 8000. This is based on Intel's new Rack Scale Design architecture. Intel came out with this architecture from a common philosophy of thinking about how you get to hyperscale economics on-prem. What Rack Scale Design enables is a disaggregation of infrastructure so that you can pool components. It gives much higher efficiency and performance. It allows you to software-define and use what you need across a disaggregated pool of infrastructure.
And it also gives you better visibility, because you have that information pooled. On the other side there, from a cloud and data platform standpoint, we have been very active in container orchestration. We have invested in and cultivated Apcera, which enables multi-cloud orchestration. So we believe, as we've been saying, that hyperscale is hybrid, and that we have to have the means and tools so that companies can develop in a responsible way and be able to place workloads in a multi-cloud environment. And really, where we are now in this journey of going from billions of humans to billions of devices, with Ericsson entering the market and bringing products and services to bear for operators and enterprises to help with this transformation, is that we see five journeys coming into focus. The top ones are the ones that we're most active in right now, and the lighter blue represents ones that we're seeing increasingly as well. So first is the transition to NFV, which today's talk will actually go into in more depth as Susan comes up on stage in just a minute. But we also see a lot happening in data center and infrastructure consolidation and modernization. In some instances, this represents greenfield initiatives, where companies are looking at how they can build and start with really the next generation of technology today. An example of this would be the Estonian government, where we're working to power their digital industrialization and form a consortium to help them as a government leader. And then also workload migration automation, cloud platform orchestration automation, and ultimately the enablement of digital business. So these represent common journeys that we see our operator customers and our enterprise customers going on. To bring this more into focus, I'll pass the baton to Susan, and she'll talk specifically about our transition-to-NFV journey. There you go. Okay.
But before you go, Jonathan, I'm gonna ask you a question. What hat are you wearing? Oh, who in the audience knows what hat I'm wearing? Cubs. Chicago Cubs, in the World Series tonight. And since I'm here with you, I had to wear the hat. So 2 a.m., I think, is game time. What, where is it being played? It's played in Cleveland, and the Cubs haven't been in the World Series for 70 years. My mom was three months old when they were last there, and they haven't won the World Series in 108 years. How many people does the stadium hold? Oh, geez. Now I'm sitting down. It was, I was gonna get to a point. Oh, okay. How many people does the stadium hold? Wrigley Field, approximately 45,000. That's how many the stadium holds. Go and sit down. I've got a bit of a story. Okay. Sorry. I'll come back to that in a minute. So when we talk about NFV, we started with this journey a number of years ago, and we have a whole portfolio of products. We had a number of different products in different parts of our organization. And when we started to look at this, it seemed rather daft that we had different things all over the organization. So we have put it all together now. And now I like to describe it as a little bit more Daft Punk, because now we're faster, better and stronger, having created an organization where we group all of this together and can actually integrate and provide a solution to our customers. Okay. Oh, sorry. The reason I asked Jonathan about the stadium is to link back in to how we see the world evolving. We talked about IoT. We talk about 5G. And where that becomes important is when you look at the kind of devices that you're going to connect and the kind of things that they're going to be doing. So we see today that you have both ends of the scale in terms of mobility. You'll have devices that are sitting in your home, and they won't be moving. You'll have devices that are sitting in trains or in airplanes, and of course, they'll be moving quite rapidly.
You'll also see a huge range in the volumes of data that are collected, from the home security device, which hopefully never sends an alarm to the station to say that you're being burgled, to the jet engine that's creating terabits of data. One thing we know is that it's not always the most efficient thing to move the data around to the processing. In some cases it becomes more efficient to actually move the processing to the data. So if we look at that network and say, okay, we know that we don't know what the traffic profile is going to look like over a period of time, how do we build a network that you can then tune to how you need it to perform at any given time? So this is one of the things that we're looking at when we talk about the journey to NFV. To be able to build a network when you're not sure how things are going to look, you need to have certain characteristics in place. And one of those things we need to have as an enabler, I would say, is virtualization of the functions that you're running in that network. So one of the key steps is what we're doing around NFV, which is the virtualization of the compute and of those applications. The more you virtualize applications, and the more VMs that you need to support, it becomes logistically impossible to manage unless you introduce things like software-defined networking, to provide the networking in an automated fashion. And given that this network is going to have different traffic patterns in it, you may then decide to move workloads to different places in the network. So getting back to the conversation I had with Jonathan: there was a sporting event in Melbourne on the 1st of October. It was a football game, the Australian Rules grand final. We have a stadium there that holds 100,000 people, which is big even for Australian stadiums. And that stadium basically holds 2% of the population of Melbourne.
And when you think about 2% of the population of any city at a huge event, where everyone's going to be updating their Facebook status, taking videos, posting to Instagram, the amount of data that you have and the number of network connections that you're handling in that stadium, it's enormous. But this only happens three days a year at the most. So do you really want to have all of your network resources for 2% of the population located in that small space, which is actually in the middle of a park? Probably not. So you want to be able to move the resources around in your network to wherever it's most appropriate to have them at any given point in time. That means you need to have virtualization, it means you need to have software-defined networking, and it means that you need the ability to locate that capability at different places. The other thing is that this needs to be about how we build more business for operators. It needs to be about how they make money going forward. And that's where we talk about network slicing. How many connections do you think your car's going to have going forward? Just off the top of my head, I can think of at least three. And they have different characteristics. You're going to have the entertainment system connected in the back, so your kids are not driving you crazy if you're going a long distance. You're going to have some kind of software update capability, to be able to fix small things in the car so you don't have to take it back to the shop all the time. And the third is going to be along the lines of an emergency services link. So today, if you crash your car in Sweden, you actually get a phone call from the emergency services to say, are you okay? And if you answer, then of course, they at least know you're conscious.
But they're also checking to see how many seatbelts are actually activated, because if you don't answer the phone, then they need to know how many ambulances to send to the site. So those three different connections actually require quite different characteristics from the network. And you probably want to have different payment models associated with those connections to your car. So for operators to be able to build the right kind of business around network slices, they need to do it as separate slices, so they can create different business models around them. We have another example in Sweden, where one of the operators was asked to provide connectivity for a security company. And they thought, okay, well, the easiest thing is to give them some prepaid devices, prepaid SIM cards. So they did that. Three years later, they call up the security company: we haven't heard from you, what's the problem? Nothing's the problem. We've still got 25 crowns left on our SIM cards. So it cost that operator significantly more to provide that service to the security company than they made selling those SIMs. So you need to be able to create a business model that's tied to the capabilities that you're providing in the network. This was a survey that was done last year asking what operators expect from NFV. And, you know, none of this is really surprising. The order has probably shifted. I think Roz has produced some really interesting information for the OpenStack event here. But these are the sorts of things that operators are actually looking for. Though I would say that this is more the consequence of NFV rather than the driver of why they want to take the journey to NFV. If you look at what we as consumers are looking for, when we go to an operator, we don't expect to be told that, yes, it's great that you want this new service, we'll have it provisioned for you in 24 days.
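To make the slicing idea a bit more concrete, here is a minimal sketch in Python. Everything here is illustrative: the slice names, latency and bandwidth numbers, billing labels and the admission policy are invented for the example, not taken from any product. But it shows the core point Susan is making: each connection to the car is a separate slice with its own characteristics and its own payment model, and the network can treat them by priority.

```python
from dataclasses import dataclass

@dataclass
class NetworkSlice:
    """One logical slice with its own QoS profile and billing model."""
    name: str
    max_latency_ms: int        # latency bound the slice must honor
    min_bandwidth_mbps: float  # bandwidth the slice must be guaranteed
    priority: int              # 1 = highest (admitted first)
    billing: str               # how the operator monetizes this slice

# The three car connections from the talk, modeled as separate slices:
car_slices = [
    NetworkSlice("entertainment",    max_latency_ms=200,  min_bandwidth_mbps=10.0,
                 priority=3, billing="subscription"),
    NetworkSlice("software-updates", max_latency_ms=5000, min_bandwidth_mbps=1.0,
                 priority=2, billing="per-update"),
    NetworkSlice("emergency-call",   max_latency_ms=50,   min_bandwidth_mbps=0.5,
                 priority=1, billing="flat-fee"),
]

def admit(slice_requests, capacity_mbps):
    """Admit slices in priority order until bandwidth capacity runs out."""
    admitted = []
    for s in sorted(slice_requests, key=lambda s: s.priority):
        if s.min_bandwidth_mbps <= capacity_mbps:
            admitted.append(s.name)
            capacity_mbps -= s.min_bandwidth_mbps
    return admitted
```

With scarce capacity, only the emergency slice gets in; with plenty of capacity, all three do. That separation, each slice with its own guarantees and its own billing model, is what lets an operator price the emergency link differently from the kids' entertainment system.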
Yes, it's lovely to have your VPN business, we'll have that connected for you in 21 days. So for operators to be able to address their key customers and their business needs going forward, they need to be much better at actually delivering what you expect as a customer. Well, you don't expect to have to go into a shop. You probably don't expect to have to call them and wait in a queue for however long it's gonna take them to answer the call. What you really wanna do is log into the website, specify what it is that you want, and have that service delivered to you now. So when operators start to look at the expectations their customers have of them, it is to be able to consume the services that you want in the way that you want to consume them. And for the operators to be able to do that, they need to go through a number of different steps. And I would say the consequence of that journey is to then be more agile, to be more efficient, and actually to be faster. So if we look at the NFV journey, if you like, what does that enable? Well, it's not just about technology. What we see in a number of our different engagements, and in fact in all of our engagements, is that the technology part is just a small piece. It's not an easy piece, but it's just a small piece. If you're going to be able to address the kind of expectations that your customers have, then it needs to be about how you transform your organization from a people perspective. Because today, there's no one organization you can point to and say, you're responsible for the NFV transformation. The IT department has great knowledge around how to do virtualization and what that involves. The networks organization has great knowledge around how to run networks. The OSS/BSS organization has great knowledge around how to do provisioning and these sorts of things.
But there's no one organization that can make this journey on their own. And what we find in many cases is that the way they start this journey is by saying, we don't actually have the competence to take this journey in one organization, let's create a cross-functional team. They start to pull people in. We were at the presentation from Andre at KPN two weeks ago, and he said they actually asked for volunteers from their organization to join the team. And what you do is you start to grow the pool of resources, very much like the rings on a tree. Once you understand what you know and what you don't know, you pull in more resources, and you start to build the pool of resources going forward. And what you find as you grow that pool and start to look at what you need to address is that there's a whole range of processes that are actually barriers to what you need to do going forward. Now you have a cross-functional team, so it becomes a great time to start addressing how you fix those processes. We have a customer that did their NFV transformation and found it still took them 24 days to provision the service. Well, hang on, we need to fix that too. So it's about doing these things together while you have that cross-functional team in place, and really tying it to how you address your customers going forward. But of course, any successful project has to be tied to the business that you want to be in in the future. So it comes back to the logic around any successful project: tying it to customer needs, tying it to future business, making sure you have key stakeholders involved. So really, we see this as a catalyst for operators and service providers to transform not only their technology but their business going forward.
We've also looked at a number of different journeys, if you like, and we see a number of different examples of journeys that operators and service providers choose to take. We see more and more taking the path of scenario one, and that is really just a first step. Once they have taken that first step, their second step is often to skip scenario number two and go directly to scenario number three, because it's not really a step model. But we have, I would say, customer examples in every one of these scenarios today. So if you look at the contracts we announced earlier this year at Mobile World Congress with Swisscom and Telstra, they're very much sitting in scenario one, and even within that scenario you can take different approaches. So we have one that's taking very much an infrastructure approach, and then how do you onboard different applications? Or you can take very much an appliance-style approach, and then look at, okay, as a second step, how do I transform that into an infrastructure play? We have Telefonica sitting very much in scenario three, where they are building out their infrastructure and then looking at how to onboard VNFs from different applications from different vendors. And then we have examples of how AT&T, DoCoMo and these sorts of operators are doing more of scenarios four and five. So really, the message here is that there is no wrong journey to take. It's really the journey that you feel is comfortable and that gives you the right kind of support, because it's not necessarily an easy path. The reason we see more operators taking scenario one is that the coordination of getting multiple vendors to work together in a very complex environment is actually challenging. And this is why we see it as extremely important to be very open in our approach.
So what's different from what I would have talked about three or four weeks ago is that we have now launched what we call our multi-VIM approach, where we believe that it's really important to be open and give operators a choice. We don't want to be someone that says you must use this distribution and this, this, this and this. We want to build an environment where operators have choice. We want to move away from doing post-distribution development and work purely in the upstream. So with the OpenStack-based product that we have at the moment, our Cloud Execution Environment, we're taking a distribution and then adding content on top of that. And that's a legacy of where we were in the business three or four years ago, when OpenStack really was not ready for the kind of real-time, demanding workloads that we're working with. Over a period of time, I think we've come a long way with OpenStack. I still think there are some things that we need to work on, and we are working on those with our partners. But we see the shift to moving that development directly into the upstream rather than doing it as post-distribution development. And then we will put our focus on how you actually manage a multi-VIM environment, because we see this happening more and more. Even in every operator today, you see that there is not just one cloud. There will be multiple clouds, and over a period of time, we believe that those clouds will become closer and closer together.
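As a rough sketch of what "managing a multi-VIM environment" means in practice, here are a few lines of Python. This is purely illustrative, not Ericsson's actual API: the class names, capability tags and first-fit placement policy are all assumptions. But it shows the shape of the idea: one registry of virtualized infrastructure managers (VIMs), and a policy that places each workload on a VIM that can satisfy it, rather than hard-wiring everything to a single distribution.

```python
class Vim:
    """One virtualized infrastructure manager, e.g. an OpenStack or VMware instance."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)
        self.workloads = []

    def deploy(self, workload):
        self.workloads.append(workload)
        return f"{workload} -> {self.name}"

class MultiVimManager:
    """Keeps a registry of VIMs and places workloads across them."""
    def __init__(self):
        self.vims = []

    def register(self, vim):
        self.vims.append(vim)

    def place(self, workload, needs):
        """Deploy on the first registered VIM that advertises every needed capability."""
        for vim in self.vims:
            if set(needs) <= vim.capabilities:
                return vim.deploy(workload)
        raise RuntimeError(f"no VIM satisfies {needs}")

mgr = MultiVimManager()
mgr.register(Vim("redhat-openstack", {"kvm", "sriov"}))
mgr.register(Vim("vmware", {"esxi"}))
print(mgr.place("virtual-EPC", {"kvm", "sriov"}))  # lands on redhat-openstack
```

First-fit is the simplest possible policy; a real multi-VIM manager would also weigh cost, latency, utilization and affinity when choosing where a workload lands, and would keep the placement in sync as clouds are added and removed.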
So, changing pace slightly, moving to the data center and looking at what we're doing from the software-defined infrastructure perspective, what we're building here looks very much at how you automate not just the workloads but the actual data center itself: how you create an environment where you can do self-learning and self-organizing, get into continuous integration and continuous deployment, and really end up with an automated data center. I gave a vision to our research organization a couple of years ago: I want us to get to a position where we have the no-power and no-people data center. And I think that that's ultimately a goal that we'll never achieve, but it gives us the right mindset on where we want to be going in terms of how we want our data centers to be run. What we're doing within the HDS 8000 is to provide an environment where you have a total view of what's actually going on, and we do that through our command center. But we also want to be extremely agile, so that you're only defining the resources that you actually need, and as a consequence, you only need to buy the hardware and infrastructure that you need. And that really is the beauty of having a disaggregated system, because it's not how you disaggregate that's important, it's really how you aggregate it again. By creating the concept of virtual pods, where you only define the amount of infrastructure that you need, you can scale out and include more infrastructure as you need it, and then scale back in when you don't. So you really get very efficient utilization of resources. This then matches very nicely with our concept of multi-VIM, because what you can do is create a VPOD around each of these VIMs, if you like. So if I need a certain amount of resources to be able to run my Red Hat OpenStack, I can do that. If I need a certain amount to run my Mirantis OpenStack, I can do that.
If I need a certain amount to run my VMware, I can do that. And as the needs go in and out, you can actually scale the resources accordingly. And we do that via the command center, to provision the actual physical infrastructure, and then locate the workloads via the cloud manager. And of course, these are synchronized, so they do this in combination. So I'm gonna call Jonathan back up here for a minute to give you a quick run-through of what you can see down on the floor. We have a number of different demos down there. We have a sled from the HDS 8000 if you're interested in hardware. But we also have our BSP, which is our ruggedized hardware for running at any location. So we have a system that runs at up to 50 degrees, and you can stand beside it and talk. So it's not your typical data center hardware, but we also have the data center hardware there. We also have what I call the multi-app on OpenStack demo, where we have a number of different applications running on the one instance of OpenStack. We also have the cloud SDN demos, showing different use cases. I think they're showing the Gluon demo, on how to provide a really open approach to SDN. Do you want to say something about what we're doing on the containers down there? Yeah, we're able to demonstrate a container production environment that's deploying out to on-premise and off-premise infrastructure. We'd love to demonstrate it to you if you're able to come by the booth. I think we'll be there in the mixer from five to seven today. And we also have time to take questions. We do have time to take questions. So I think we have five or ten minutes to take any questions. There is a mic over there, if you're so inclined to make it that way. Or you could just shout it out. We're pretty informal. So are there any questions? Any brave soul to ask a question? Going once. We've got one down the back.
Oh, there's one out there. Yep, you've got your hand up. That'd be you. So, what new services are telcos providing on NFV, was the question. I think they're not necessarily starting with so many new services. What we typically see is they're starting with things that can be deployed into a pool. So very much the early use cases we see are things like virtual EPC, where they add a virtualized EPC into an existing pool. That gives them the ability to see how things are going: if that virtual EPC falls over, no one will actually notice the difference. I think we also see some early use cases around IMS, a virtualized IMS. And many of those operators are choosing to skip native IMS and go directly to a virtualized IMS. So they tended to be sort of late movers in the voice-over-LTE space. The other thing, of course, is that if that system goes down, they always have a CS fallback alternative. So I think it's really about picking applications where operators can get their feet wet, feel comfortable, and then continue forward as they get more comfort, not only in the technology itself, but in their own organization's ability to deal with the kinds of situations that will arise. And I think we're quite far down that path with many different operators. We have seen the first level of soaking going on, and now many of those operators are saying, yes, we feel confident enough to move into phase two. We're also seeing a number of operators adding additional applications onto the infrastructure now and starting to really build out. So I would say this year has been very much about getting your feet wet from a number of operators' perspective, and we're now moving, in the second half of this year, into a much more robust deployment phase. So it's interesting to see Roz's analysis, in that many operators still feel the technology is quite immature and that they're not 100% confident in it.
I think, as I said, we still have some additions on top of our infrastructure to make it robust. We're in the process of upstreaming that. So I think over the next year, you really will see it being able to be based on open source, what I call clean open source: a direct distribution without too many additions going on behind closed doors. Any other questions? Heaps of people coming in, you've missed the best part. Okay. All right, well, we'll be up here afterwards and we'll be down at our booth later on. And I want to give a round of applause to Susan. Thank you. And Jonathan, thanks. Thanks for coming, everyone. Thank you all. Thank you.