Good morning. What a nice conference this Open Networking Summit is. There is so much energy and enthusiasm. It's infectious. I look forward to coming here because I learn so much every year. So are we ready for 5G and edge clouds? Well, we'd better be, because 5G is ready for us. On a more serious note, what I wanted to do today was talk about our journey through network transformation, and in particular about some challenges that lie ahead of us as we look at the 5G transformation, and maybe even suggest a few things we can do together as a community to solve some of those challenges. Before I start, though, I want to talk a little bit about our journey through network virtualization and network transformation, bringing data center economics to the network infrastructure. And boy, have we made really good progress. If you look at this end-to-end picture of network infrastructure, we have great proof points all over. We have applications such as virtual CPE and virtual SD-WAN in the enterprise network. We have the virtual Evolved Packet Core (EPC) and IMS in the wireless core. In the IT cloud, we have virtual security appliances, load balancers, and firewalls. And even the radio access network, which was always thought of as difficult to virtualize, is seeing field trials and early deployments of virtual RAN and Cloud RAN. Now, all this sets up really nicely for 5G, because with 5G what we really want is a programmable and scalable network infrastructure. What 5G delivers is 10 times or more higher bandwidth, 10 times lower latency, and new paradigms and technologies such as network slicing, which delivers end-to-end quality of service, and even the use of unlicensed spectrum, which allows us to penetrate deep into the enterprise. So 5G, as you're hearing a lot, is going to transform how we live. 
Now, of course, with 5G there is always a discussion about the edge. So what is this mythical edge? I like to think about the edge in very simple terms. To me, it's really bringing the cloud closer to where the data is and where the applications are, because location matters. As we look at the next set of innovative applications, autonomous driving, industrial control, real-time video, 360 video, video analytics, the decision-making that needs to happen cannot tolerate a transaction that goes all the way into the cloud. And so what I'm seeing is a discussion around a three-tiered hierarchy. Traditionally, we have thought of a client-server model, but now we have to look at distributed applications and ask: how do we attract cloud developers to come and build applications at the network edge? Intel has been a pioneer in 5G; we've been part of the standards definition and development for 5G. And over the last couple of years, as you probably saw at the Winter Olympics in Korea, we've been driving field trials, 100-plus of them. So later this year, when 5G begins to roll out, it's going to be powered by Intel in large measure. If you look at some of the key requirements of 5G, performance is definitely key, but I like to think of it as workload-optimized performance. To me, 5G is the intersection of compute and communications, plus all of the artificial intelligence and analytics, and that's why it's so important to marry efficient compute with the network side of things. Second, quality of service. It's all about fine-grained quality of service for the next wave of innovative applications at the edge. Third, as we distribute the applications and the data to the edge, security and trust models are paramount. 
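To make the "location matters" point concrete, here is a rough back-of-envelope calculation. The distances and the fiber propagation figure are illustrative assumptions on my part, not numbers from the talk:

```python
# Back-of-envelope: propagation delay alone, ignoring queuing and processing.
# Assumption: light in fiber travels at roughly 200,000 km/s (about 2/3 of c),
# which works out to ~10 microseconds of round-trip time per km of path length.
RTT_US_PER_KM = 10

def propagation_rtt_ms(distance_km):
    """Round-trip propagation delay in milliseconds for a given path length."""
    return distance_km * RTT_US_PER_KM / 1000

regional_cloud_ms = propagation_rtt_ms(2000)  # distant cloud region: ~20 ms
edge_site_ms = propagation_rtt_ms(50)         # nearby edge site: ~0.5 ms
print(regional_cloud_ms, edge_site_ms)
```

With the round trip to a distant cloud region already costing tens of milliseconds before any processing even starts, a single-digit-millisecond budget for applications like industrial control or autonomous driving can only be met by placing compute at a nearby edge site.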
And then there's the whole discussion around cloud-ready and cloud-native software. If we want to attract the next generation of developers, these are cloud developers, and we want to provide that easy button for them so they can bring their innovations to the network edge. Now, I often get asked: what are the key tenets of how we evolve the network infrastructure? I like to capture everything on a single slide, so this is my attempt at how I look at things. First, scale up. With all this data and the talk about workload-optimized performance, our goal has been to drive a standard server to become a best-in-class network infrastructure server. That's quite simply what I've been driving at Intel; that's what Intel stands for. As a result, what you see is the evolution of the standard server architecture with new instructions and new kinds of accelerators that bring computing and communications together on a standard high-volume server platform. That's scale up. Then, scale out. The vision here is: how do we look at the infrastructure as pools of compute, pools of storage, pools of networking, pools of accelerators, pools of memory, and get to a point where we can compose an application or a service by drawing resources out of those pools? And the last one is very important as well: orchestration and automation. We heard about this this morning. There's a lot going on with automation, the work that we have done with OpenStack and Kubernetes, also working with VMware, and then at the service orchestration level with ONAP. But I believe there is still more work to be done here. 
So far, I've talked about our progress with network virtualization, bringing data center economics to the network infrastructure. But I said there are challenges, particularly as we look at the 5G transition and the deployment of edge clouds. When I talk to the community, when I talk to all of you, I hear three consistent themes, and I wanted to spend a little time on them and on what we can do about them. First, cloud-native applications. There is a lot of discussion around how cloud native immediately gravitates towards a container and microservices approach. But to me it's a lot simpler than that. To me, cloud native is all about how we drive hardware abstraction and device abstraction, and how we allow workloads and applications to move around in the cloud. Second, network automation. There's been a lot of talk about network automation. To me, if you want to create self-reliant, self-organizing, self-correcting networks, we have to get to a point where we can apply AI as a key element of how we drive closed-loop automation, and I'll talk about that. Third, edge services. I talked about how important it is that we attract cloud developers. To me, it's about how we provide that easy button for those developers to build standards-compliant edge applications with standard interfaces. So let me talk about each one of these, what we are doing, and how the community can help. First, cloud-native network functions. We've been on a journey where we started with fixed functions, Intel has been a pioneer here, and we've helped migrate to general-purpose platforms with workload optimizations. Along the way, we built DPDK. 
We put it out in the open-source community and it's become the standard now for how you do a high-performance data plane on standard servers. And now we are at virtual network functions on this journey, and it's great progress. But along the way, we made some choices that depend on the hardware, like single-root I/O virtualization, SR-IOV, as an example, that prevent you from achieving cloud scale and the flexibility to move things around in the cloud. So we really need to take this last step: how do we eliminate those hardware dependencies, yet do it in a fashion where you can still achieve the performance and make use of the capabilities of next-generation hardware? That's the whole discussion around cloud-native VNFs. What this allows us to do is device abstraction and the decomposition of applications into microservices, overlays, fabrics, and so on. So I want to talk about what we have done so far and how we can evolve this to achieve the full potential of a cloud-scale data path. First, like I said, DPDK. It continues to improve performance. We deliver about 40 million packets per second per core, which, if you think about it, on the latest-generation Xeon platforms is 600 gigabits per second of throughput on a two-socket Xeon platform. What we are trying to do now is evolve DPDK with device abstraction. What that simply means is we want to allow applications to use accelerators for crypto, for machine learning, for baseband, to use smart NICs, but do it in a way that there is still a layer of separation between the application and the underlying devices or accelerators being used. We're driving that as a key thrust in DPDK. The second thing you see here is AF_XDP. This is a new technology; some of you might have heard about it. XDP stands for eXpress Data Path. 
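Before going on, here is a hedged sanity check of those DPDK throughput figures; the 64-byte packet size and the Ethernet framing overhead are my assumptions, not numbers from the talk:

```python
# Sanity check: 40 Mpps per core vs. 600 Gbit/s on a two-socket platform.
# Assumptions: 64-byte frames (the usual small-packet benchmark size) plus
# 20 bytes of on-wire overhead per frame (8-byte preamble + 12-byte gap).
FRAME_BYTES = 64
WIRE_OVERHEAD_BYTES = 20
PPS_PER_CORE = 40e6

bits_on_wire = (FRAME_BYTES + WIRE_OVERHEAD_BYTES) * 8   # 672 bits per packet
gbps_per_core = PPS_PER_CORE * bits_on_wire / 1e9        # ~26.9 Gbit/s per core
cores_for_600g = 600 / gbps_per_core                     # ~22 cores

print(round(gbps_per_core, 1), round(cores_for_600g))
```

Under those assumptions, roughly 22 cores at 40 Mpps each reach 600 Gbit/s, which is well within the core count of a modern two-socket Xeon server, so the aggregate claim is plausible for small packets.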
AF_XDP already exists as a socket interface in the Linux kernel. What we're trying to do is take all the learning from our work on DPDK and bring it into the standard Linux kernel networking stack. XDP, together with the optimizations we are doing around a DPDK-style driver and other lessons from DPDK, is going to position us really well. So my first call to action: if you haven't looked at AF_XDP, take a look, and come talk to us so we can work together to evolve AF_XDP as the way to drive cloud-native data plane optimization. And then, of course, we have a new technology called Application Device Queues (ADQ) in our standard NICs, our Ethernet controllers, which combines nicely with AF_XDP to drive low latency. Think of it as a dedicated freeway lane that connects to containers or VNFs and provides a low-latency, high-performance path. Second, closed-loop automation. To me, this is extremely important because it was a key tenet of what NFV was going to deliver: things like fault detection, fault prediction, demand prediction, and dynamic, automatic resource allocation. Unfortunately, this is one area that has fallen behind. As an engineer, I like to parse this into three areas. The first is real-time telemetry. We have tried to address that problem by providing real-time telemetry in a standards-compliant way with a tool called collectd; OPNFV has the Barometer project. The second area is orchestration, and I think we have made some progress there. Intel is a key contributor to ONAP, and ONAP is doing a lot of work in this area. But in my mind, what is missing is the in-between part: how do we drive AI? 
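As a toy illustration of that missing in-between piece, here is a minimal closed-loop sketch; the class, the thresholds, and the moving-average "model" are all hypothetical stand-ins (in a real deployment, collectd supplies the telemetry, a trained model does the inference, and an orchestrator such as ONAP executes the action):

```python
from collections import deque

class ClosedLoop:
    """Toy telemetry -> model -> action loop (names and thresholds hypothetical)."""

    def __init__(self, window=5, high=0.8, low=0.3):
        self.samples = deque(maxlen=window)   # sliding window of telemetry
        self.high, self.low = high, low       # scaling thresholds

    def ingest(self, utilization):
        """Telemetry step: record one utilization sample in [0.0, 1.0]."""
        self.samples.append(utilization)

    def predict(self):
        """Stand-in for a trained model: moving average as a demand predictor."""
        return sum(self.samples) / len(self.samples)

    def decide(self):
        """Close the loop: map the prediction to an orchestration action."""
        load = self.predict()
        if load > self.high:
            return "scale-out"
        if load < self.low:
            return "scale-in"
        return "hold"

loop = ClosedLoop()
for sample in (0.9, 0.85, 0.95):   # sustained high load from telemetry
    loop.ingest(sample)
print(loop.decide())               # -> "scale-out"
```

The real systems replace the moving average with trained inference, which is exactly the piece the talk argues the community still needs to build.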
How do we put trained models behind this real-time telemetry so you can draw real-time inferences and close the loop? We have done some prototypes for applications such as the EPC and a cable modem termination system, and we are seeing great results: the results correlate within 2% of what you would get with static policies. So my second call to action is around closed-loop automation. I would love to share what we have done, really energize the community around this, and drive it with a sense of urgency so we can get to closed-loop automation. This would be a big contributor to TCO and to realizing the full potential of 5G networks. Third, the software easy button. As I was hearing the keynotes, I realized nobody talked about this, but it is going to be critical. How do we provide standards-compliant APIs and a set of optimized libraries so that cloud developers just have that easy button? They say, give me a connection, make it secure, and they get that from the infrastructure. We have to get to that point, and what we have done here is, at Mobile World Congress, we announced OpenNESS, the Open Network Edge Services Software. This is a collection of libraries and well-defined APIs that are compliant with ETSI MEC. If you haven't heard about OpenNESS, this is my third call to action: take a look at OpenNESS. We have some collateral at openness.org. We're going to put the software into open source at 01.org with very light governance, see how the community evolves around it, and then figure out what we want to do with it. But as Intel always does, we like to lead with open source, and this is going to be in open source. 
As an example of how this works in practice, we have worked with a key partner of ours, Tencent, and I think Robert is here in the audience. Intel and Tencent worked together to drive a V2X blueprint in Akraino with some of the initial capabilities of OpenNESS. It's a great example. So this is my other call to action: come and talk to us about OpenNESS and see what we can do with it, how we extend it, and how we make it useful for the next wave of 5G-enabled edge applications. So, I've talked about our journey through network transformation and some of the challenges ahead of us. What I would like to do next: we've been working very closely with Dell EMC, so I would like to invite on stage Kevin Shatzkamer, a good friend; he's the VP of Enterprise and Service Provider Strategy and Solutions. Kevin, would you like to come and share your perspective? Thanks. It's been a good morning for me, having an opportunity to listen to a number of the presentations. We had Andre on this morning sharing an operator perspective. Arpit came up this morning and set the stage for how the day will go. Rajesh here had a great conversation about some of the technology evolution that's happening at the server layer and inside the silicon of the server. Armagan from KPMG was up this morning with a conversation about the business side of this and how we're starting to see the consulting and business models develop around these technologies. And when we really look at this, we use this term digital transformation quite a bit. Now I'm going to pull the conversation up a level from where Rajesh was, from a technology perspective, and really recognize that we are talking transformation. I think that we as an industry tend to think that the problem space we're addressing is easy. I think we look at it and say that a lot of the problems have been solved. 
I think we look at it and say that there's a homogeneous way to solve the problems in front of us: that with one answer we can simultaneously support very high-bandwidth video, very small, infrequent packets for IoT and sensors, and massively scaled, high-speed data communications with vehicles in motion. And I think we tend to get lost in the fact that the transformation we're going through is not a path to a single end platform. In the world we lived in before, when we thought about cloud and massively scaled centralized data centers, homogeneity was the path to efficiency. In this digital transformation, as we move closer to the edge, the problem space is very different. It's not about deploying tens of thousands of things at one location. It's about deploying ones of things, or tens of things, at tens of thousands of locations. The operational problems and challenges are different. The technology problems and challenges are different. The logistical problems are different. So when we start to think about the future of the edge, I encourage everyone to think about heterogeneity as the new world we live in. How do we take heterogeneous technologies and heterogeneous platforms? How do we pull from the best of open source and the best capabilities that exist inside vendor technologies? How do we leverage x86 architectures and smart NICs and GPUs and merchant silicon, and start to bring them together and expose them in a common way, so that we can put applications and services and workloads on the right platform, leveraging the right capabilities at the infrastructure layer? 
And I think that, more and more over time, when we think about this platform enablement, and you probably see slides like this from just about everyone in the industry at this point, there is this IT, OT, CT convergence and the need to find a common way to modernize infrastructure, abstract that infrastructure, and build a software platform on top of it that can be both containerized and virtualized. Then we have to transform the applications themselves to support a cloud-native environment, while simultaneously trying to adapt to a data-driven world in which we're leveraging AI as the foundation for closed-loop automation, and trying to upskill a set of capabilities within the industry, on both the vendor side and the operational side, that are so rooted in a telco mindset. I think the transformational forces that you see on the right really lead to a very long path to get to where we want to go. Now, the great part about where we are is that we have proof points. We saw the Tencent proof point; Andre talked to proof points today. There are a number of proof points that continue to demonstrate that we can get there. And I think we're going to continue to see a shift from conversations at the abstract level to more and more conversations at the true execution level of demonstrating use cases. Where Intel and Dell EMC have really partnered is on joint innovation, not just with each other, but with the industry, in open-source projects, in our own CTO labs, across multiple different lines of business: inside the service provider group, inside our artificial intelligence solutions organization, inside our OT solutions organization. And what I like about this slide is that it shows the continuity that exists between edge and core and cloud. The reason that's important is a couple of things. 
Number one, as I said earlier, the cloud world has not solved all of the problems we can expect to see at the edge. And I think we very often tend to view ourselves as living in an either-or type of environment, right? Do I do containers or virtual machines? Do I use x86 or FPGAs or GPUs or smart NICs? Do I look at open source, or do I look at vendor proprietary technology? More and more, what we're seeing is a trend towards this closed loop here that says it's not an "or" world. We live in an "and" world, and we need to find the best capabilities that exist within the industry. We need to continue to evolve at the infrastructure layer; infrastructure evolution and innovation is not dead, as Rajesh just shared. We need to find the best capabilities at the software layer and pull from open-source projects, recognizing that even holistic open-source projects have bits and pieces that are really strong and areas where they are just not as strong. The great thing about open source is that we can pull technology from multiple open-source projects, put them together, and operate them in a way that gives us the ability to really close this loop: to think of the world not as an either-or environment, but as an "and" environment that brings together the best of what we've learned operating hyperscale public cloud, what we've learned operating private data centers, and what we've learned operating telco networks, and to start putting them together in a uniform way that lets us exploit the set of capabilities we've been talking about all morning and that you see on this show floor. So with that, I want to hand it back to Rajesh to share a little bit more, and then I think we'll close out. Yes, thank you, Kevin. So before I summarize: this has been a great week for Intel. 
We made several new announcements, particularly around our second-generation Intel Xeon Scalable platforms. What's special is that, for the first time in our history, we have announced Xeon N-SKUs, N for NFV infrastructure: SKUs that are fine-tuned and optimized for network infrastructure and provide up to 58% better performance compared to the first-generation Xeon Scalable processors. The other thing about this platform is that a lot of the 5G requirements I talked about, workload-optimized performance, quality of service, cloud-native software, and security, are implemented and integrated with the second-generation Xeon Scalable platform. We also announced a game-changing new memory technology that offers a persistent mode and reduces total cost of ownership by 34% compared to DDR. And then there's the Intel Ethernet 800 series I talked about, with technologies like Application Device Queues, and the next-generation FPGA, Intel Agilex. So, a great platform and a lot of new technologies coming to market, all towards creating a best-in-class network platform for 5G networks. Just a quick summary, then. As many people have said, 5G is here and now, and the edge is at the epicenter of new services and innovation. My call to action: I talked about three challenges and suggested some ways of solving them. I would like the community to come collaborate, work with us, and see how we address these challenges. Of course, Intel is investing in 5G from an end-to-end perspective, and we are committed to network transformation, converting a standard server into a best-in-class network infrastructure platform. So let's collaborate and realize the full potential of 5G. Kevin, anything you'd like to add? Yeah, I agree 100% here, and I think that collaboration is really key, right? 
So if we start to think about the world as a heterogeneous infrastructure and heterogeneous software stack world, it's important for us to find the right ways to collaborate to progress this heterogeneous environment without creating, I think, a stalling function in the industry, right? Very often, because we live in this "or" mentality and not an "and" mentality, our leadership, all the way up through the C-suite at a lot of the companies I speak with, and even sometimes inside Dell and other companies, looks at making technology decisions as a long-term and binary decision. We have to break that binary decision and recognize that the only way to reach that destination is to start moving, and not be paralyzed by trying to make the right three-year or five-year choice, but instead collaborate as an industry to continue to evolve the entire ecosystem forward. So with that, thank you, and have a wonderful ONS. Thank you.