Good morning, Berlin. Good morning, OpenStack Summit. It is great to be here. Do you guys hear that? Boom, boom, boom, boom, boom. I do. That's the heartbeat of the data center. That is what you have built. We at Intel feel really proud and excited about this community and its contributions to the stabilization and growth of the data center. We have some other technologies that we'd like to talk to you about today as well, but we are on familiar ground: the heartbeat of the data center, powered by OpenStack. We have so much strength in this community. You've worked so hard to deliver deployable, stable, mature, secure software that enables private and hybrid cloud models to flourish across the ecosystem.

That being said, we live in the reality of software-defined infrastructure. OpenStack is at the center of the world we live in today, but a lot of new technologies are coming into our ecosystem and providing the next generation of innovation, and these are all built around OpenStack as well. As Amy mentioned earlier, there is significant opportunity in new use cases, and there is significant complexity in what has to be delivered for edge computing. Many people talk about latency as a major challenge for edge deployment, but that's not all. The amount of data being generated, and the amount that needs to be analyzed and stored at the edge, become significant challenges when we start talking about deploying infrastructure there. You've heard the use cases, right? Be it smart cities, be it distributed networks in the middle of skyscrapers, be it emergency response. Being from the west coast of the U.S., we've had a really painful and clear view of the criticality of infrastructure and the need for real-time communications to preserve human life and make sure the right people are notified at the right times.
All of these things, be it exciting eSports applications or AR and VR, really require us to work together as a community and get to deployment fast, and that is where StarlingX comes in. You've just heard from the StarlingX team; it is a significant accomplishment to start an open-source project and reach a community release within six months. The reason we emphasize that, the reason folks have worked so hard, is that we really want to put this configuration into your hands and enable technology advocates to do what they do. You give them software assets, you give them infrastructure, and that is when the innovation really begins. You give them the platform, you give them the baseline infrastructure, and that is when the use cases are truly realized, and entrepreneurs and amazing technology advocates can figure out what they can really do when they actually have the deployment. StarlingX is designed to be scalable: there are a couple of different configurations, from very small at the gateway all the way through a multi-server, 100-node deployment. If you want high availability, you need at least a dual-server implementation, but it can scale all the way up to nine-node Ceph clusters, so it can really reach the complexity of the different use cases when you're talking about AR and VR across a huge stadium, and so on.

But this is not the only project we're investing in. Project Cyborg is really important for us as well; it is one of the official OpenStack projects under the OpenStack Foundation. It integrates with Nova to enable FPGAs and other accelerators to be viewed as devices, with nested resource providers for FPGA, GPU, and so on. We're really excited about enabling function-as-a-service with hardware acceleration, as well as devices-as-a-service, through Cyborg.
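To make the nested-resource-provider idea concrete, here is a minimal toy model of it: a compute node is the root provider, and each accelerator hangs off it as a child provider with its own inventory. This is an illustrative sketch only, not the real OpenStack Placement or Cyborg API; the class and resource-class names are made up for the example.

```python
# Toy model of nested resource providers (illustrative only; not the
# real OpenStack Placement API).
from dataclasses import dataclass, field

@dataclass
class ResourceProvider:
    name: str
    inventory: dict                      # resource class -> total units
    children: list = field(default_factory=list)

    def total(self, resource_class):
        """Sum inventory of this provider and all nested child providers."""
        units = self.inventory.get(resource_class, 0)
        return units + sum(c.total(resource_class) for c in self.children)

# A compute node with two FPGAs modeled as child providers.
node = ResourceProvider("compute-1", {"VCPU": 64, "MEMORY_MB": 262144})
node.children.append(ResourceProvider("fpga-0", {"CUSTOM_FPGA": 1}))
node.children.append(ResourceProvider("fpga-1", {"CUSTOM_FPGA": 1}))

print(node.total("CUSTOM_FPGA"))  # → 2: both accelerators visible at the root
```

The point of the nesting is that the scheduler can see accelerator inventory as part of the compute node's tree while still tracking each device separately.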
This becomes really important in the future with the bare metal use cases, bare metal cloud that you've heard about so much over the last couple days as well. Moving up the stack a little bit, I'm excited to talk to you about NEMU. I don't know if you guys have heard about NEMU yet. QEMU is absolutely the de facto standard for open source hypervisors. It is stable, well known, it's accepted in the industry. It's also really large. And so what we have done with NEMU is to really try to focus what is needed for cloud hypervisors. We don't need a lot of legacy code. NEMU has a footprint that's about 70 plus percent smaller than QEMU. It is in the open source, we're working with the community and this was really developed in response to the major cloud service providers and the community saying we need something new. We need a new technology, we need a new solution. And so I'd invite you to check it out, take a look. We have some of the NEMU experts here at the OpenStack Summit and they will be in our booth so please talk to them about the effort in flight. Moving up the stack further, Kata. Man, Kata's gotten quite the reception here today. You heard about it from Deutsche Telecom. There continues to be very active development from Huawei and Google, Hyper, Intel as active contributors, but the folks who are using Kata, it's just designed to be a drag and drop replacement for your container runtime but leverages hardware virtualization to ensure your security. You can think of most of the major cloud service providers, both public and private hybrid cloud, are using Kata to deliver value to their customers and ensure that security. And so finally, I'm super excited for you guys to get to see these things in action. We have live demos of almost all these technologies in our booth. They'll be on rotation because we have so many demos. So please stop by a couple times, get to see the various technologies. You'll hear from the Kata team in just a few minutes. 
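As an illustration of that drop-in model, this is roughly how Kata is registered as an additional Docker runtime through the daemon configuration. Treat it as a sketch, not authoritative setup instructions: the runtime name and binary path shown are typical defaults and vary by distribution and Kata version.

```json
{
  "runtimes": {
    "kata-runtime": {
      "path": "/usr/bin/kata-runtime"
    }
  }
}
```

With a fragment like this in the Docker daemon configuration and the daemon restarted, an individual container can opt into hardware-virtualized isolation with `docker run --runtime kata-runtime`, while all other containers keep the default runtime.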
There's so much going on within the ecosystem and we need all of your help and support to really accelerate and drive innovation forward. Thank you very much.