Our journey today started by inspecting the world at the nanoscale as we flew through IBM's latest groundbreaking work in semiconductors. Now, for our last stop, we're going to widen our aperture and talk about the world's largest and most powerful computing resource ever created: the cloud. Let's take a second to visualize it. This globe shows the major data centers of the world's top public cloud providers, hundreds of locations in dozens of countries that span nearly every continent. However, this only paints a portion of the picture. What is not being shown are the massive number of private computing environments that exist in silos across the globe. The cloud has dramatically evolved over many years into what it is today: a massively distributed network of public and private data centers comprising zettabytes of data storage and enormous computing power. To fully appreciate the engineering and networking marvel that is the hybrid cloud, which combines public and private environments, we must appreciate the software that runs it. Enter the world of hard tech software.

Hey, Priya, how are you?

Hi, Dario. How are you? It's so good to see you.

I remember a couple of years ago when you were telling me, "Dario, for all the progress of what's happening on cloud, we've got to get to the point where the cloud works as if it was a single, infinitely powerful computer." So what do you mean by that?

Well, Dario, think about the simplicity of just working on your laptop. You have a common operating system, tools you're familiar with, and, most importantly, you're spending most of your time working on code. Working on the cloud is far from that. You have to understand the nuances of all the cloud providers: AWS, Azure, GCP, IBM, private clouds. You have to provision cloud resources that might take a while to come online. And you have to worry about things like security, compliance, resiliency, scalability, and cost efficiency. It's just a lot of complexity.

When I think about the heterogeneous nature of the cloud, everything from large data centers to the edge, and all these complexities that you're talking about, is there a prayer that we can actually address it and realize this vision?

Yeah, indeed. I think it's one of the greatest challenges we should solve right now in computer science: how to harness this tremendously heterogeneous and distributed system. I think there are two key elements to a good software architecture for this. The first is open technologies, and the second is the right software abstractions. Open technologies, because proprietary software stacks from different vendors not only add to all this complexity, but they stifle innovation. Key software abstractions start with the operating system, which is Linux. As you know, Dario, Linux as the operating system for the data center era really unleashed this proliferation of software, including virtualization technologies like containers, and that ushered in the cloud era. Hybrid cloud is no different. We now need a distributed operating system to provide us that common layer of abstraction across these heterogeneous and distributed cloud resources, and Kubernetes is the open technology that's emerging as the winner in this evolutionary battle. So you have Linux, containers, Kubernetes: these are the open technologies. And when it comes to the enterprise-ready, supported, most secure versions of this software, you have Red Hat Enterprise Linux, or RHEL, and OpenShift. This is our hybrid cloud platform.
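To make that "common layer of abstraction" idea concrete, here is a minimal sketch using the official Kubernetes Python client: one Deployment is defined once and applied unchanged to clusters running on different providers, selected only by kubeconfig context. The context names and container image below are hypothetical stand-ins; the talk itself shows no code, so this is an illustration of the idea rather than IBM's or Red Hat's implementation.

```python
from kubernetes import client, config


def make_deployment() -> client.V1Deployment:
    """Build a small web Deployment; the same object works on any cluster."""
    container = client.V1Container(
        name="web",
        image="registry.access.redhat.com/ubi9/httpd-24",  # example image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=spec,
    )


# Hypothetical kubeconfig contexts for clusters on different providers.
# Kubernetes hides the provider differences; the Deployment never changes.
for context in ["aws-cluster", "azure-cluster", "onprem-openshift"]:
    api = client.AppsV1Api(
        api_client=config.new_client_from_config(context=context)
    )
    api.create_namespaced_deployment(namespace="default", body=make_deployment())
```

The point of the sketch is that the loop body is identical for every cloud: the heterogeneity lives entirely below the Kubernetes API.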
And this is the foundation for our cloud computer.

So that is the path to wrangle this complexity into something that gives you productivity. But you and the team are also pushing a vision that goes even beyond that: this world of serverless. So what is it, and why is it so exciting?

That's right. Serverless technologies are the key to realizing this vision of the cloud as a computer. There are three key attributes to serverless: ease of use, on-demand elasticity, and pay-for-what-you-use. So let me give you an example. Take a simple data prep task on the cloud, which is fairly common. But the data in this case could be coming from anywhere, literally: edge environments, for example. And to make this as simple as a command you could issue on your laptop, a lot of things have to happen under the covers. Today it's the developers and the data scientists doing these things manually. I have to worry about: do I have access? Am I allowed to move the data? Where are the API keys? How many containers should I spin up? And this is what I spend most of my time on. But with serverless, you can literally boil this down to one single command, as simple as moving files around on your laptop, and the serverless platform does the rest underneath. So that's the beauty of serverless. We are pushing this vision forward today in the Knative open source community. And just like with Linux and Kubernetes, there is a supported, enterprise-ready version of Knative available on OpenShift today; it's called OpenShift Serverless. It's also available on IBM Cloud as Code Engine. So you can try these out today, and we continue to push this evolution of serverless; it's getting us closer and closer to that vision of the cloud as a computer.

Yeah, the cloud as an infinitely powerful computer, working with it as if it were a single computer. I love the vision, and how you, the team, the extended Red Hat team, and the whole open community are making this a reality. So thank you, Priya, for everything that you do and for the fantastic work. I look forward to talking to you soon.

Thanks, Dario. See you.

We are well down the road of executing our vision of making the world's hybrid cloud resources as easy to use as a single computer. When we do, we will finally realize the full revolutionary potential of the cloud: the ability to get what we need, when we need it, down to the millisecond, with the click of a button. Today's Red Hat Open Hybrid Cloud Platform, built on Linux and Kubernetes, provides the essential interoperability layer to seamlessly blend the computational powers of high-performance computing, AI, and quantum across cloud providers and across public and private environments, all the way to the edge. It is a computing architecture that will enable us to discover faster, solve more complex problems, and push not only science but business to new frontiers.
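As a closing illustration of the "one single command" idea from the conversation above, here is a minimal sketch using Lithops, an open-source Python framework for serverless computing (not named in the talk, so this is one possible embodiment, not the platform being described). The per-record cleanup step and the input records are hypothetical stand-ins for a real data prep job; the executor reads cloud credentials and backend choices from its own configuration, so none of the manual steps Priya lists appear in the code.

```python
import lithops


def prepare(record: str) -> str:
    # Hypothetical per-record cleanup for a simple data prep task.
    return record.strip().lower()


# The FunctionExecutor hides provisioning, credentials, and scaling:
# it decides how many serverless workers to launch for the map call.
fexec = lithops.FunctionExecutor()
futures = fexec.map(prepare, ["  Foo ", " BAR", "baz  "])
print(fexec.get_result(futures))  # -> ['foo', 'bar', 'baz']
```

The developer expresses the task in one call, and the serverless platform does the rest underneath, which is exactly the ease-of-use and on-demand-elasticity attributes described above.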