Hello, Leon. I would like to start by echoing what Greg said: keep up to date. One of my good friends has a great line. He says security is like doing the laundry or the dishes: it's never done. So this is not going away, even from a software perspective. Update. OK. So I'm excited today. I get to tell you about how we're doing open source better, making performance faster, and delivering strong results. Before I dive into my story, I'd like to take a moment to recognize Intel's general manager for the Open Source Technology Center, who's not with us today, Imad Sousou. He recently retired, but he spent over 20 years building open source at Intel, building our strength in the Linux community, and being a major advocate for open source. If you want to read a little bit about our evolution and how this group grew up, you can check out the Bitly link at Intel Open Source. It's a great little story. The cornerstone of our leadership has been the Linux kernel. Year over year, leadership like this, if we were in sports, would be called a dynasty. In a company renowned for setting the pace of compute technology through Moore's Law, the software doesn't always get the spotlight. But as you can see, we've been setting the pace, keeping up with our hardware. And we've grown from there, right? That's a cornerstone. That's a beginning. We've branched out into a variety of open source projects and technologies that we either have engineers in, that we contribute to or sponsor, or that we support indirectly through the many libraries and language optimizations we do that feed into the broader open-source community. We know that when we invest in an open-source engineer or in a project, we're not just paying for a resource. We're investing in the return that we get from the community as a whole. That being said, I think we also have a pretty distinctive reputation in many of these circles for a focus on performance.
So in this talk, I'll go through some of the new ways we're delivering performance. It starts with this. Many of you may be familiar with this existential question: if a tree falls in a forest and no one is around to hear it, does it make a sound? This is how I think about performance, because if my customer doesn't experience it, even if it's written up in a white paper, does it really matter? Let me tell you how we are delivering performance that you can experience, and even give you some tips on how you can boost performance, too. First, it's important to understand the latest tools for performance that Intel is delivering. In our data-driven world, it's increasingly difficult to manage and leverage data to gain insights. The two big pain points we hear a lot about from our customers are managing the data and pushing through compute fast enough. In a data-centric world, performance isn't just about microbenchmarks. It's about your workload. It's about end-to-end performance. And the data and the compute are where the bottlenecks are. That's good and bad news, because now you know where to focus: those are the biggest performance payoffs. When it comes to the overwhelming amount of data being produced every day, the challenge is to optimize so that you can access more data faster. And if you doubt the effect your data solution can have on a performance calculation, maybe just take in some of the latest news. I think most of us are aware of Google's recent declaration of achieving quantum supremacy with a 200-second calculation that would take 10,000 years on an existing supercomputer. I read an interesting article in The Verge where IBM responded that, if you properly account for storage, it's actually 2.5 days. I don't know which one it is exactly, but the takeaway that the right data solution can have 10,000 years of impact really shows you how important and how critical this is.
To combat the challenges we face as data sets become larger and larger, Intel has created a new tier of memory. Intel Optane DC Persistent Memory is an innovative memory technology that delivers a unique combination of affordable large capacity and support for data persistence. It's helping businesses get faster insights from their data-intensive applications, as well as deliver consistently improved service scalability with higher virtual machine and container density. To speed up the compute, we are adding new instructions to AVX-512: the Vector Neural Network Instructions (VNNI), which significantly accelerate inference performance for deep learning workloads. This performs the int8 convolution inner loop in one instruction where previous generations needed three. Cool. Now we have these tools and capabilities to go faster. Easy, right? We've got these cool tools; we just need to bring all of open source along with us, which is sometimes easier said than done. There's a saying that if you want to go fast, go alone; if you want to go far, go together. But they don't say much about going fast together. And we've got a lot of dependencies and applications here. So going faster in open source can sometimes feel like running a three-legged race, right? You want to enable your latest technology and make it faster, but you need to bring all of your dependencies and all of your applications along with you. And somehow, we need to win this race, not end up in a tangled heap. So how are we creating fast performance solutions through open source? Well, we can reflect on our recent discussion of creating better CI. Step one is creating a beat for your three-legged race. This is your backbone: a regular cadence of automated performance testing across a broad range of workloads and microbenchmarks.
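To make that "three instructions down to one" concrete: pre-VNNI AVX-512 code computed an int8 multiply-accumulate with the VPMADDUBSW, VPMADDWD, and VPADDD instructions, while VNNI's VPDPBUSD does it in one step. Here is a plain-Python sketch of the arithmetic a single VPDPBUSD lane performs; it is an illustration of the math only, not the intrinsic itself:

```python
def vpdpbusd_lane(acc, a_u8, b_s8):
    """Model one 32-bit lane of VPDPBUSD: multiply four
    unsigned-int8 values by four signed-int8 values, sum the
    four products, and add the sum into a 32-bit accumulator.
    Pre-VNNI AVX-512 needed three instructions for this."""
    assert len(a_u8) == len(b_s8) == 4
    return acc + sum(a * b for a, b in zip(a_u8, b_s8))

# One step of an int8 convolution inner loop:
acc = vpdpbusd_lane(0, [1, 2, 3, 4], [5, -6, 7, -8])  # -> -18
```

On real hardware this happens across 16 such lanes of a 512-bit register at once, which is where the inference speed-up comes from.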
With our Clear Linux operating system, we test over 5,300 packages with over 100 daily tests and 300 weekly tests for performance. Job number one is: don't regress. And that includes packages and dependencies from other projects in open source, letting them know if their latest updates have impacted performance. The last thing you want to do is take one step forward and two steps back. Now, you can create performance without this infrastructure, but you can't sustain it; you can't keep moving it forward consistently. Step two: optimize. Easy, right? We like to use the full stack. That's why we've got our own operating system, and that's why we create specialized workloads: it gives us a whole lot of knobs to turn. The first step is to ask a simple question: where does the time go? We leverage all the knobs for specific workloads, and we target workloads that we know are challenging. To optimize, we take in the latest and greatest GCC features. We keep up to date. Always take in the newest thing; it gets you the performance, and it gets you the security. We use selective enabling of AVX-512. We consult experts for parameter tuning across the kernel as well as runtime variables, and we selectively pick scheduling policies aligned to workloads. We also use tools like link-time optimization and profile-guided optimization as needed. Step three: finally, we deliver. We deliver the full stack, because handing a data scientist a kernel parameter to tune probably isn't the best use of his or her time. Also, there are inevitable trade-offs in performance, like latency versus bandwidth, in settings for things like huge pages or scheduling policies. So to provide customers the ultimate in performance, we deliver it all end-to-end in reference stacks. There are currently three reference stacks. The first is the Deep Learning Reference Stack, created to help AI developers deliver the best experience on Intel architecture.
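The "don't regress" discipline in step one boils down to a simple gate: compare every benchmark against a stored baseline and flag anything that slips past a tolerance. A minimal sketch with hypothetical benchmark names and thresholds (the real Clear Linux harness is far more elaborate):

```python
def find_regressions(baseline, current, tolerance=0.02):
    """Return {benchmark: fractional_drop} for every benchmark
    whose score fell more than `tolerance` below its baseline.
    Assumes higher scores are better."""
    regressions = {}
    for name, base in baseline.items():
        new = current.get(name)
        if new is None:
            continue  # not run this cycle; nothing to compare
        drop = (base - new) / base
        if drop > tolerance:
            regressions[name] = drop
    return regressions

baseline = {"memcached-ops": 1000.0, "nginx-rps": 52000.0}
current = {"memcached-ops": 1010.0, "nginx-rps": 49000.0}
bad = find_regressions(baseline, current)
# nginx-rps dropped ~5.8%, past the 2% tolerance, so it's flagged
```

In a daily and weekly cadence like the one described above, a non-empty result would fail the run and trigger a notification to the upstream project whose update caused the drop.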
It allows you to quickly prototype and deploy deep learning workloads to production while maintaining the flexibility to customize solutions. This stack is also the best way to try out the Deep Learning Boost performance for inference speed-up. Next is the Data Analytics Reference Stack. This gives application developers a powerful way to store and process large amounts of data using a distributed processing framework, to efficiently build big data solutions and solve domain-specific problems. It leverages and deploys Apache Spark, Apache Hadoop, and OpenJDK, and allows you to customize your solution. Last but not least, try out Intel Optane DC Persistent Memory with the Database Reference Stack, on second-generation Intel Xeon Scalable processors. This workload-optimized technology is designed to help businesses extract more insights from data. It includes the latest database technologies, including Apache Cassandra and Redis. And should you want to extend your analytics, we have a wide range of additional models that you can choose from, and some great use cases where you can see how we worked with companies ranging from MasterCard to CERN on how they handle their big data with solutions built on Intel. OK, what happens when you put it all together and deliver performance? World records, for one. In collaboration with Intel, Alibaba recently published the industry's first 100-terabyte results for the TPC-DS and TPCx-BB workloads. These are standard benchmarks for big data workloads. The previous records had been at 10 and 30 terabytes, so this is a new level of record and a new level of scaling. The next is sustained performance.
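If you want a feel for what "support for data persistence" means to an application, the persistent-memory programming model exposes pmem as a file you map and store into directly, with no read/write syscalls on the hot path. A minimal Python sketch, using an ordinary file as a stand-in for a pmem-backed mount (an assumption, since no pmem hardware is present here; production code would typically use a library like PMDK and flush CPU cache lines to guarantee persistence):

```python
import mmap
import os

PATH = "demo.pmem"  # stand-in for a file on a pmem-backed mount
SIZE = 4096

# Create and size the backing file (on pmem this reserves the region).
with open(PATH, "wb") as f:
    f.truncate(SIZE)

# Map it and store directly into the mapping, like a big byte array.
with open(PATH, "r+b") as f:
    buf = mmap.mmap(f.fileno(), SIZE)
    buf[0:5] = b"hello"  # a memory store, not a write() syscall
    buf.flush()          # on real pmem: cache flush + fence instead
    buf.close()

# The data survives the unmap; reopen and read it back.
with open(PATH, "rb") as f:
    data = f.read(5)
os.remove(PATH)
```

The point of the new memory tier is that this "just store into it" path reaches capacities far beyond DRAM while keeping the data after a restart.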
With our Clear Linux OS, we have kept ahead of the other operating systems on performance for over four years, winning more benchmarks than others by staying on the cutting edge, adopting new Intel architecture technologies faster, being ahead of the pack in adopting the latest versions of open source packages, including the latest kernels, and having a strict method for maintaining our security and pulling in security patches quickly. This is a blueprint for implementing Intel features in a Linux operating system. Last but not least, we deliver performance you can experience. Your defaults are slowing you down more than you realize, and if you buy new hardware and aren't using the right software, you're leaving a lot on the table. And if you don't feel like going out and building your own end-to-end stack, you can use ours. It's a great way to try out new features, to try out performance, or to check out the code and build performance into your own product. I invite you to experience performance. I ask you to help keep the pace in building performance solutions, because we have to bring everyone along together, and we have to ensure that we don't regress and don't build up a backlog of bugs that we have to go figure out later. The best way is to stay on top of it. Other than that, I hope you enjoy the rest of the conference and your time here in Lyon. We will have some of our stacks demoed at the Intel booth, as well as a few other exciting demos, so please stop by. Thank you.