All right. I don't even need my notes to introduce our next speaker. One of the things that makes for a good open source project is good code. But what also makes for the success that we see in open source is good people. And when I talked about good projects becoming good products, creating value, I said Linus Torvalds is a good example of a great leader in open source. Our next speaker, Imad Sousou, is a great example of an open source leader in business. Imad has been at Intel Corporation working on open source since 2003. He was one of the first business leaders to recognize the power of open source and used it to create new innovation in the telecommunications sector, essentially creating the majority market share for Linux in telecommunications as early as 2003, 2004. Today he continues to be a leader in open source. He runs the Open Source Technology Center at Intel. He's on the board of the OpenStack Foundation, on the board of our Core Infrastructure Initiative and the OCF group, and many, many more. It is my pleasure to welcome someone I've known for more than 10 years, Imad Sousou from Intel.

Good morning. So it's always awesome to be here, back in China. I come to China and speak at conferences here at least once a year. And this specific time is special, because for the first time my family came with me. So I have a very tough audience: my wife and two children, my 12-year-old and 13-year-old sons, are here watching me. And regardless of how comfortable I am speaking in China, forgive me if I'm performing for my kids. With that, I want to build on some of the things that Jim talked about. I've been managing the Open Source Technology Center at Intel for 17, 18 years now. I created that group way back when we were trying to enable Linux in the telecom market.
And one of the things that is not very well known, maybe known here in China but not globally, is that my group, the Open Source Technology Center, is a very large group with a lot of open source engineers, and 600 of the Intel open source engineers in my group are here in China. In fact, it makes me kind of mad when people talk about China as not understanding open source. We have some of the best open source engineers here in China: almost 90% of our virtualization leadership, for example, and I can give you many more examples of the amazing open source engineers here in China. So in today's talk I am, as always, humbled to talk about some of the projects the Intel open source engineers are working on, some of the things they are creating, and some of the real-world problems they are solving. Everything I say today is really trying to represent this work, and hopefully you can join us in working on some of these really cool open source projects. So let me start with something that all of us probably know by now: the cloud software-defined infrastructure is now well established. All of us are familiar with software-defined compute, software-defined storage, software-defined networking. It's really the evolution and automation of the modern data center that we live in today. And the truth is that none of that would have been possible had it not been for the open source projects that create the cloud software-defined infrastructure. Cloud native, Kubernetes, KVM, OpenStack, Linux: these are the projects that made the amazing software-defined infrastructure that most of our businesses run on actually viable. Now, I believe that this model is going to be extended, extended well beyond cloud.
So we think this model is going to extend into areas like automotive, industrial, and edge, beyond the existing usages, beyond just using Linux to run the entertainment system in the car, beyond any of those things. In fact, if you take the car example, the car of the future is going to be a data center on wheels. And that data center on wheels is going to be software-defined, but in a very special way. There are going to be many operating systems. Your instrument cluster will run an operating system. Your middle panel will run an operating system. Your mirror will run an operating system. The back-seat media and video playback will run an operating system. All of these will run different operating systems: some Android-derived, some Linux, some microcontroller and real-time operating systems. This is what we see as the future, and it is how industries like the automotive industry will evolve into this software-defined world. Now, these types of changes and these types of usages require new technologies. Because if you go back to the car example, you now have a situation where you have to accommodate what is called a safety-critical system and, at the same time, the normal systems, the video playback, and so on. So now you have a mix of safety-critical and non-safety-critical systems within one environment. And that extends to industrial and other areas. Some of the things I'm going to talk about are the new projects that will enable these types of usages: the software-defined car, the software-defined cockpit, software-defined industrial automation, and so on. But also, by the way, even in the cloud software-defined infrastructure, things are not completely done yet. There is a lot of evolution. There is a lot of work.
There is a lot of definitional work happening on that end, around things like securing containers, and I will talk about some of that a little bit as well. So let me start with virtualization and containers. When you look at the isolation continuum, on one end you have containers. And containers are awesome. Containers are very, very quick, blazing fast. You can bring up a container and a microservice really quickly, tear it down, and so on. But containers share a kernel, which effectively means that if one container is compromised, the security of the other containers in that same environment is also compromised. Virtual machines, on the other end, are great because they're very stable and very secure, and this is how most of the world runs today; you're able to run an entire environment. But they're also slow. So it's always been a choice between speed and security. One of the amazing projects our engineers at Intel started a few years ago was to find a way to bring both speed and security. This is what we did with Intel Clear Containers, which basically takes containers and uses virtualization technology to add security to each container. And this is what became Kata Containers, where multiple projects that were working on similar technologies came together and merged to create the Kata Containers project. Kata, by the way, means trust. The point of Kata Containers is to have hardware-secured containers that still have the container properties, meaning they conform to the container interfaces and keep the fast, lightweight properties, but each one also runs inside its own virtual machine, so it is very, very secure. Kata Containers 1.0 was released.
And it was done in partnership with a lot of companies, from Google, to Microsoft, to Huawei, to Hyper, who came together to make this happen. And it's very simple: it's a complete runtime, and from a scheduling standpoint, from a Kubernetes standpoint, because Kata Containers complies with the Open Container Initiative APIs, Kubernetes sees it as just a container, but you still get that same security. Now, we think we have made a significant improvement with this, but while we were doing it, we realized that virtualization technology is also becoming really stale. Virtualization has a very, very long history; it has been around in some form or another for 50 years. So it is very, very stable, and people love that it is stable and run their workloads on it. But it's also very, very stale. So we started looking at what we can do to improve virtualization technology, not just in the cloud but in the data center as a whole. And what we found is that if you take probably the mainstream virtualization infrastructure, the KVM stack, and look at it, it's something like 2 million lines of code. And you look at what those 2 million lines of code do, and it turns out that over a million of them deal with emulation: emulating floppy drives and all sorts of things that are completely obsolete for modern usages. So one of the things we've been working on and prototyping is a much smaller-footprint KVM virtual machine monitor that is a fraction of that size. Just to give you an idea of where it is today, it is at something like 200,000 lines of code instead of 2 million, and that drop comes just from separating emulation from virtualization. And by the way, most hardware platforms don't need that emulation, because modern hardware does not require it, so we feel safe doing that.
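To make the "Kubernetes sees it as just a container" point concrete, here is a minimal sketch of how Kata Containers typically plugs into a Kubernetes cluster today, via a RuntimeClass. This assumes the Kata runtime handler is already installed on the nodes; the resource names are illustrative, and the RuntimeClass API postdates the 1.0 release mentioned in the talk.

```shell
# Sketch: register Kata as a RuntimeClass, then opt a pod into it.
# The scheduler treats the pod like any other container workload.
cat > kata-runtimeclass.yaml <<'EOF'
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata          # illustrative name
handler: kata          # must match the handler configured in the CRI runtime
EOF

cat > kata-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata
spec:
  runtimeClassName: kata   # this one line is the whole opt-in
  containers:
  - name: nginx
    image: nginx
EOF

echo "Apply with: kubectl apply -f kata-runtimeclass.yaml -f kata-pod.yaml"
```

Everything else in the pod spec is ordinary Kubernetes; the VM-per-pod isolation happens below the CRI layer.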
Now, I talked about cars and how cars are going to be software-defined. Well, how are they going to be software-defined without the right virtual machine? So this is the hypervisor that we've created under the Linux Foundation. It's called ACRN. And ACRN is a functionally safe hypervisor, a hypervisor that can run functionally safe workloads. So you are now able to run a completely functionally safe system alongside the rest of the car systems, within the same software-defined infrastructure, whatever other operating systems you are running. And you are able to share and pool things like graphics, Bluetooth, USB, in-vehicle media, and all the other things that cars care about, in a truly IoT- and car-oriented hypervisor. And even though we started the ACRN project only a few months ago, we're already seeing great results. People are already starting to use it. Harman, at the Auto China 2018 show, introduced their next-generation intelligent cockpit, and it was built on ACRN. The work that we've done with Alibaba on the electronic cockpit is also based on ACRN. So it's amazing to see the evolution and how people are beginning to use these new projects, and I do encourage you to please participate in some of them. Next, I want to talk about Linux, and about some of the work our engineers are doing in what we call a modern Linux distribution. The reason we call it modern, by the way, is not just the long list of features I could talk about; it's that we've designed it with a new development model that is much more friendly to modern usages. So we're not encumbered with the complicated packaging systems and dependency trees of the way Linux is typically packaged.
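For a feel of how the mixed-criticality setup works in practice: in ACRN, a privileged Service VM launches the non-safety-critical guest OSes through `acrn-dm`, ACRN's device model, while safety-critical partitions run directly on the hypervisor. `acrn-dm` is real, but its exact flags vary by release, so the invocation below is only an illustrative sketch of its bhyve-style slot syntax, printed rather than executed; the image path, tap device, and VM name are made up.

```shell
# Illustrative only: what launching a User VM (e.g. the IVI/media OS) from the
# ACRN Service VM roughly looks like. Do not treat this as a verified command line.
cat <<'EOF' | tee acrn-launch-sketch.txt
acrn-dm -m 2048M \
  -s 0:0,hostbridge \
  -s 3,virtio-blk,/home/user/uos.img \
  -s 4,virtio-net,tap0 \
  uos-vm1
EOF
```

The safety-critical workload never goes through this path; it is partitioned by the hypervisor itself, which is what lets the two worlds coexist in one car.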
And one of the conscious decisions we made up front is that we wanted to make it really, really easy for all distributions to steal, to take things from the Clear Linux distribution that we've created and put them in their own distributions. And here is why. In Clear Linux, we implement complete platform features. What does that mean? One of the things that is not really well known, when you look at an entire Linux distribution and not just the kernel, is that for a feature to be useful to an end user, it must be implemented top to bottom: in the kernel, in the libraries, in middleware, in multiple areas, whether it's machine learning in TensorFlow, whether it's Kubernetes, and so on. The entire stack needs to be enabled and optimized for those features. So we do platform features, and we do performance. In addition, we don't make any compromises on performance; we don't lower the bar to a least common denominator. We really focus on making Linux work really, really well and utilize the hardware really, really well. You can just search for the performance results, and you will see in a lot of benchmarks that the result of this is a 4x performance difference. Not a 10% difference: 4x, a 400% performance difference in some of these benchmarks. The other thing we are doing with Clear Linux is creating a model where we are able to react and respond really, really quickly, not just to hardware bugs but also to software bugs, and to make those mitigations available really quickly. A lot of that is rooted in the development model we use and in how we are able to stay current, current meaning very close to upstream, with a very modern software update mechanism, atomic updates, and all of these other features. So I really do encourage you to take a look at Clear Linux. And please feel free: if you have your own distribution, take whatever you want. We will even help you.
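The update model described here is visible in Clear Linux's actual updater, `swupd`, which replaces package/dependency management with versioned bundles and atomic updates. The subcommands below are real; the bundle name is just an example, and since they only make sense on a Clear Linux install, they are printed as a cheat sheet rather than executed.

```shell
# Clear Linux day-to-day maintenance, sketched as a cheat sheet.
# "containers-virt" is an example bundle name.
cat <<'EOF' | tee swupd-cheatsheet.txt
sudo swupd check-update                # see whether a newer release exists
sudo swupd update                      # atomic jump to the latest release
sudo swupd bundle-add containers-virt  # add a feature bundle, no dependency solving
EOF
```

The "current" claim in the talk maps to this: releases are frequent and an update moves the whole OS to the newest one in a single atomic step, rather than upgrading individual packages.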
So finally, I just want to close with a plug for our edge projects. Akraino is the umbrella edge project that we've created with many partners, many of them here in China and in the US, under the Linux Foundation. Akraino is where we feel all the edge programs will eventually be hosted, and we're going to be working very closely with all of you and with the Linux Foundation to make this happen. One of the things we've done in support of this is to open source our Wind River Titanium Cloud product, a carrier-grade edge product, and we will obviously integrate that product into Akraino. So with that, I want to thank you very much. I want to thank the Linux Foundation staff. Thank you very much, Jim, and have a good rest of the conference.