Thank you very much, Arpit. I'm very pleased to be speaking to you today. As Arpit said, I want to give you an overview of who we are, what we're doing in this space, and what value we bring. We didn't invent the concept of NFV, but we make it more efficient, and hopefully by the end of this talk you'll have a real sense of what we mean by that. I want to put that in the context of the big picture. Let's skip the title slide.

The growth of the ARM architecture in the industry is accelerating, as you can see here. It took us nearly 22 years to ship 50 billion devices based on ARM. If you're not familiar with the ARM business model: we don't make chips. We make the IP, and over 400 partners build chips from it for all sorts of markets, everything from energy harvesting sensors to servers, and you'll be able to see some of that hardware on the floor today. That growth is increasing rapidly into the next decade.

Our vision is that by 2035 there will be one trillion intelligent devices connected to the network, and that in that year the market opportunity for IoT devices will be roughly one trillion dollars. ARM expects to play a significant role in delivering on that vision and being implemented in those one trillion devices. The infrastructure necessary to support all of these intelligent devices is going to have to be different than it is today: highly efficient in space, in power, and in dollars. That's what we do.

Before I move to the next slides, in case you missed it yesterday: we generate a lot of IP. We design cores, and we deliver interconnects, memory controllers, all sorts of different IP. Last month we announced Project Trillium, a whole new suite of IP for highly efficient machine learning and object detection processor technology that can be embedded into the devices of the future. It's an open program, and yesterday at the GPU Technology Conference, the CEO of NVIDIA announced that they're contributing their open source NVDLA technology IP into the Project Trillium program. That's one of the points we make about the program: it's open for anyone to bring in their unique IP and help deliver highly efficient, highly capable accelerated AI, machine learning, and deep learning. It's a really exciting announcement, and I'm glad to see it.

So let's talk a little more about efficiency. Our view is that with the race to 5G, 2020 is going to need a much more efficient telco cloud infrastructure. We use the term "intelligent flexible cloud," and I'll say more about what that means on the next slide, but it's not an unfamiliar concept; fog computing and other monikers describe similar approaches. The idea is to push intelligence deeper into the network, out to the edge, where you can get the quality of experience and the efficiency that are required.
For ARM, I talked about space efficiency, dollar efficiency, and power efficiency. On compute density, we believe that by 2020 we'll be able to deliver machines with three times the compute density of conventional servers, and even today we have plenty of examples, including one published last month in Forbes, where we're delivering north of 2x compute density. What does that mean? It starts at the lowest silicon layer, where we can fundamentally lay down more cores in a smaller area, and it manifests at the server level, where you can consistently get twice the compute density out of a 1U server.

When we look at the challenges the network operators face: they have physical buildings, central offices they still have to maintain while they build new point-of-presence clusters and new intelligence out at the edge. They have to reuse these existing facilities, with fixed real estate and a fixed power envelope, and fundamentally we're going to help them pack more compute capacity into that existing space, either at the same power or at lower power; you can balance that however you want.

As you'll see in more detail on the next slide, much of the innovation and value that the over 400 ARM partners deliver comes from taking our IP and adding GPUs, DSPs, FPGAs, and other optimizations for traffic management, packet processing, graphics, and machine learning to create workload-optimized SoCs. That's the whole basis of our ecosystem, and we have plenty of examples; you can see some of the numbers here for the kinds of efficiency, acceleration, and compute density you can get from these solutions.

So let's look a little more closely at how we envision the network in this intelligent flexible cloud. The idea, again, is that we're going to need to push more intelligence out toward the edge, and the ARM ecosystem is uniquely positioned to deliver extraordinary value in this migration, because the whole business model of ARM is that our partners put together optimal combinations of compute, storage, and acceleration into workload-optimized SoCs for running the workloads at different points in the network. They're all competing and delivering flexibility and choice across that entire range, from the sensors to the servers in the cloud.

As we head into the race toward 5G, with phenomenal demands on capacity and latency requirements of five to 20 milliseconds, we believe the future of compute has to be secure and highly efficient. We're not going to realize these kinds of latencies, this quality of experience, and this capacity with conventional solutions alone. The future of network and cloud infrastructure in this world of AI, AR, and ML applications is heterogeneous and workload optimized, and ARM, along with our partners, is delivering the highly flexible, highly efficient intellectual property that helps this vision be realized.
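To make that density claim concrete, here's a toy calculation. The "2x density at the same power" figure comes from the talk; every other number, the rack size, the power budget, the core counts, is a hypothetical placeholder chosen purely for illustration.

```python
# Illustrative arithmetic only: the 2x-density-at-same-power figure is from
# the talk; rack height, power budget, and core counts are hypothetical.

RACK_UNITS = 42          # hypothetical rack height in 1U slots
POWER_BUDGET_W = 10_000  # hypothetical fixed power envelope for the rack

# Hypothetical conventional 1U server vs. an ARM-based 1U server with
# roughly 2x the compute density at the same per-server power draw.
conventional = {"cores": 32, "watts": 400}
arm_based    = {"cores": 64, "watts": 400}

def rack_capacity(server):
    # A rack fills up on whichever runs out first: space or power.
    servers = min(RACK_UNITS, POWER_BUDGET_W // server["watts"])
    return servers * server["cores"]

print("conventional rack:", rack_capacity(conventional), "cores")  # 800
print("ARM-based rack:   ", rack_capacity(arm_based), "cores")     # 1600
```

With the real estate and power envelope both fixed, the denser part roughly doubles the cores in the same central-office footprint, which is the operators' constraint described above.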
Arpit asked me to give some use cases and examples of this, so I'm going to go into some detail on how we're actually working on this today. We are a committed member of Linux Foundation Networking. We've been involved in many of these projects, and our involvement is growing. Mostly what we're doing, first of all, is making sure we're in the CI/CD loop for all the frameworks that the operators are asking the community's applications to run on; we want to make sure they're available, stable, and highly optimized for ARM. We put a lot of special focus on helping the community keep everything cleanly multi-architecture, and we do a lot of work on data plane optimization. We're driving and participating in several projects across FD.io, ONAP, and OPNFV on the whole movement to containerized NFV with Kubernetes and Docker, and we want to explore, assess, and demonstrate the efficiencies of this cloud native approach. I have some examples of that, and if you came to our ARM mini summit yesterday, you got some deep dives on it. But let me go through a few use case examples of exactly what we're doing.

We drive a project in OPNFV called Auto, for automation. It takes select components of ONAP so that we can manage the infrastructure and manage the life cycle of VNFs. We're working on OpenStack and Heat with VMs, but we have a particular focus on Kubernetes, Helm, and related technologies that give us really efficient life cycle management of cloud native VNFs. We've put together use cases for edge cloud, enterprise vCPE, and resiliency and failover, and we're working with a wide range of ecosystem partners, operators, OEMs, and ISVs to show how this can work between the two projects and how these life cycles can be managed. We have really good, broad participation in that project.
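To give a feel for the kind of cloud native VNF life cycle the Auto project focuses on, here is a minimal sketch, not Auto's actual code, of instantiating, scaling, and tearing down a containerized VNF with the Kubernetes Python client. The container image and namespace are hypothetical placeholders.

```python
# Minimal sketch of cloud-native VNF life-cycle operations on Kubernetes.
# The image name and namespace below are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()        # authenticate using the local kubeconfig
apps = client.AppsV1Api()

vnf = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="vfirewall"),
    spec=client.V1DeploymentSpec(
        replicas=1,              # instantiate: start with one VNF instance
        selector=client.V1LabelSelector(match_labels={"app": "vfirewall"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "vfirewall"}),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="vfirewall",
                image="registry.example.org/vnf/vfirewall:latest",  # placeholder
            )]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="vnf", body=vnf)      # instantiate
apps.patch_namespaced_deployment(                                 # scale out
    name="vfirewall", namespace="vnf", body={"spec": {"replicas": 3}})
apps.delete_namespaced_deployment(name="vfirewall", namespace="vnf")  # tear down
```

In practice a project like Auto drives these operations through ONAP and Helm charts rather than raw API calls, but the life cycle being managed, instantiate, scale, tear down, is the same.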
One example of where we're driving these efficiency concepts: in our booth near the registration desk, we're showing a reference design for a highly efficient, highly cost effective universal CPE solution. We're working with Telco Systems, NXP, some of the VNF vendors you see on the slide, and a white box vendor to deliver compelling value for this kind of edge, customer premises application. I don't want to belabor the details so much as emphasize why they're choosing ARM. What do Telco Systems, with their desire to diversify and bring unique TCO benefits to the operators, get out of working with an ARM-based SoC? As I said earlier, we consistently deliver compute density, lower power, and better performance per watt, and the offload engines on these parts, for offloading OVS and packet processing, are being leveraged here. So the TCO value proposition and the security equation are simply better, and in this case, built on an NXP SoC, they believe they can deliver an alternative for these kinds of devices that is more compelling from a TCO standpoint.

Another example, and this was a really cool one: at Mobile World Congress we participated in a CORD demo with ONF, a very complete end-to-end 5G edge cloud infrastructure demonstration. You can see here it has all of the CORD infrastructure and the ONOS infrastructure in it, and there were a number of different use cases: a video streaming slice and a facial recognition slice. That demo, by the way, is available and being shown here again by ONF; it's over in the ONF booth, booth number 10, although I guess the booths aren't numbered.

We can run all of the CORD infrastructure, but in this demo we're particularly focused on the facial recognition part, using ARM plus GPU plus storage. You can do that a number of ways: you can take a conventional server, plug in an expensive, highly capable PCIe card, and do it that way, but that is not the most cost efficient or power efficient way to do it. We wanted to demonstrate another way. Our partner here built an amazing prototype server to show that if you think outside the conventional box and do something different, not only with the software but with the hardware, you can get some tremendous advantages, and we want to demonstrate how leveraging the efficiency of the ARM ecosystem, the hundreds of companies delivering SoCs, and doing the software a little differently really makes this possible.

One of the key things was that, beyond the small microservers he has out on the bench, he also built a full 1U enclosure with all commercial off-the-shelf components to demonstrate how you can leverage a sea of ARM plus GPU plus storage, put it in a 1U rack with all COTS components, and get a server that, compared to a conventional solution, delivers 4.4 times the performance at 30% of the power and 40% of the cost, with a hand-built solution. That works out to roughly 15 times the performance per watt and 11 times the performance per dollar.

We're also proving another point here. We designed the slice so that we don't go through a conventional PCIe card in a big host server and then try to distribute and load balance the images. We take four-gigabyte photos, break each one down into roughly 200 bytes of metadata, and cast the pattern matching off to the GPU, without going through a host server and all the infrastructure that eats away the benefit of getting straight to the computation. In a real system you could have hundreds, thousands, or hundreds of thousands of these images coming in, and we've designed it so we drive each one right to where the storage, the ARM cores, and the GPU sit, do the pattern matching, and return the result with incredibly low latency. And we're doing it on these little servers that draw roughly two to four watts each. Again, we're just trying to show what can be done if you're willing to think outside the box to get better efficiency and better TCO. You can see this demo in the ONF booth, and it really is an incredible thing.
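To make that data path concrete, here is a schematic sketch, not the demo's actual code, of the reduce-then-match idea: each photo is collapsed into a tiny fixed-size descriptor, so only a couple of hundred bytes, not the whole image, travel to the node where the GPU and the gallery live. The descriptor function, gallery, and matching rule are all illustrative placeholders, written in plain NumPy on the CPU.

```python
# Schematic sketch of the descriptor-matching idea from the CORD demo slice.
# NOT the demo's code: the descriptor, gallery, and matching rule are
# illustrative. The point is that each image becomes a tiny fixed-size
# descriptor, so only ~200 bytes need to reach the GPU/storage node.
import numpy as np

DESCRIPTOR_BYTES = 200  # the talk quotes ~200 bytes of metadata per image

def extract_descriptor(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model: any fixed-size float vector."""
    vec = np.resize(image.astype(np.float32).ravel(), DESCRIPTOR_BYTES // 4)
    return vec / (np.linalg.norm(vec) + 1e-9)

def match(descriptor: np.ndarray, gallery: np.ndarray) -> int:
    """Nearest neighbour by cosine similarity. On the demo hardware, this
    dot product is the step you would cast off to the GPU sitting next to
    the storage, rather than hauling images through a host server."""
    return int(np.argmax(gallery @ descriptor))

# Hypothetical usage: 1000 enrolled identities, one incoming camera frame.
gallery = np.random.rand(1000, DESCRIPTOR_BYTES // 4).astype(np.float32)
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
print("best match: identity", match(extract_descriptor(frame), gallery))
```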
Another example of where we wanted to demonstrate what can be done: we've been involved in the OPNFV project since the beginning, and we have conventional six-node rack pods around the world in operator labs, where people do POCs and lab trials and we support the building of the software. But we wanted to show what happens if you take really small, cheap $200 to $300 ARM boards with 10-gigabit interfaces and a modest amount of memory, put them in a one-cubic-foot package, and run that as a full Pharos pod. Well, we did exactly that: we demonstrated it in Beijing last summer, and we've brought it here again today.

Again, the idea is to show that if you think outside the box, you can do quite a bit with really low power, low cost, highly space efficient systems by leveraging the power and innovation of the ARM ecosystem.

So in this short overview, I hope you're seeing the point I'm trying to make. ARM is committed to delivering the kind of hardened and optimized frameworks that operators expect their VNFs to run on, and we're putting a great deal of focus on exploiting the unique space efficiency, power efficiency, and dollar efficiency of the ARM ecosystem to deliver compelling value as we go into this race to 5G and all the new applications that are going to be loading the edge of the network with traffic. It's going to have to be done more efficiently, and we can deliver that; it's what we do. So please come visit us in our booth, come see some of the demos I've pointed to, and thank you for your time.