Bonjour, Paris. Je m'appelle Souda. Don't worry, je parle mieux Kubernetes que le français. Unlike Solomon, I am not a French speaker, and that sounded terrible. For those of you who don't speak French, it just meant: I speak Kubernetes better than French. We almost went through 45 minutes of this conference without talking about AI. That's a lot. So here I am.

I'm part of the CNCF board, and CNCF's mission is to make cloud-native computing ubiquitous. In other words, CNCF drives adoption of the cloud by fostering an ecosystem of open source, vendor-neutral projects and making them accessible to everybody. Why does this matter? CNCF is built of all of us. We are doers; we work on open source projects.

I'm here to talk about Oracle's journey in open source. Oracle is part of this open source community, and we've been doing open source forever. In fact, we contribute to over 500 projects; this is just a sample of them. Take Oracle's contributions to Linux: we are the number one contributor to the Linux kernel, and we have made sure it will remain open source forever. OpenJDK is another project we contribute 70% of the code to. I'm very proud to announce that during this KubeCon, on March 19, we launched Java 22.

Great, so we do a lot of open source. What does that mean for the cloud? I'm part of the Oracle Cloud Infrastructure group, and when we began to build generation two of the cloud, we decided to break those walled gardens that Priyanka was talking about in her keynote. We built OCI to be open-standards compliant wherever a standard existed, and that is something CNCF drives toward. As you can see here, a lot of open source software and CNCF projects directly back OCI's equivalent cloud services. Whether you're talking about OpenTelemetry or Application Performance Monitoring, it's the same interface. This makes the cloud ubiquitous. This makes multi-cloud truly possible from the ground up.
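The "same interface" point can be sketched with a minimal, hypothetical OpenTelemetry Collector configuration: the OTLP receiver and exporter are vendor-neutral, so the same pipeline can point at any compliant backend. The endpoint below is a placeholder, not a real OCI URL.

```yaml
# Minimal OpenTelemetry Collector pipeline (sketch; endpoint is a placeholder).
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlphttp:
    # Any OTLP-compliant backend works here, whether an OCI
    # monitoring service or another vendor's.
    endpoint: https://example-otlp-backend.invalid:4318

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
```

Swapping clouds means changing the endpoint, not the instrumentation.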
Again, it's part of the mission to make the cloud ubiquitous. Customers should not have to choose one cloud and be locked in: no lock-in, vendor-neutral.

Let's talk Kubernetes, obviously, and AI. Oracle Container Engine for Kubernetes, or OKE as we call it, has thousands of customers running millions of cores on OCI. Not only do our customers run on OKE; about 100 of our own OCI cloud services run completely on OKE. Not just that, we're committing to running every single cloud service on OKE in the next two years. This is huge for the Kubernetes community, as well as for Oracle.

GenAI, obviously: when we started building AI systems, in the beginning it was very custom hardware, so we tried to do a lot of custom orchestration. Very soon, we realized Kubernetes does this better than anyone else. When we embraced Kubernetes for our AI stack, our performance improved four to ten times just by adding Kubernetes orchestration. We liked it so much that from day one, all GenAI services were built on Kubernetes, and not just Kubernetes but the entire ecosystem: KEDA, Fluentd, Helm, you name it, and we probably use it. We are also at the forefront of asking our GPU vendors to adopt standards that apply across all GPU vendors, so that Kubernetes can run on them without being customized to each piece of hardware.

AI, obviously, is transforming industries. The demand is extremely high; it's being adopted at a rate never seen before. That demand means the GPUs currently powering it are in very short supply. Beyond GPUs, can we think of other innovative solutions, since GPU scarcity is holding back AI enthusiasts while demand keeps increasing? We at Oracle decided to try CPUs. How about using CPUs for the kinds of workloads CPUs were built for? Inferencing workloads.
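Why can inferencing workloads fit on CPUs at all? A rough back-of-the-envelope sketch: at common quantization levels, the weights of a model in the 7-billion-parameter range fit comfortably in ordinary server RAM. This is generic arithmetic, not an Oracle benchmark figure.

```python
def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory for a model at a given numeric precision."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A 7B-parameter model at different precisions:
fp16 = model_memory_gb(7, 2.0)    # 16-bit floats
int8 = model_memory_gb(7, 1.0)    # 8-bit quantization
int4 = model_memory_gb(7, 0.5)    # 4-bit quantization

print(f"fp16: {fp16:.1f} GB, int8: {int8:.1f} GB, int4: {int4:.1f} GB")
# fp16: 14.0 GB, int8: 7.0 GB, int4: 3.5 GB, all within reach of
# commodity CPU servers, unlike much larger models.
```

Activation memory and bandwidth matter too, but the weight footprint is the first gate, and for 7B-class models it clears easily on CPU hardware.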
We've experimented and found that quite a few inferencing workloads, especially models with 7 billion parameters or fewer, can very easily run on CPUs. CPUs are obviously very cost efficient and much more available, which again increases that power to innovate. What if we made the deal a little sweeter? What if we did the same thing, but a little more eco-friendly? Climate change is on everybody's mind today. So we, along with our partners at Ampere, started running the same inferencing models on Arm CPUs, and we found that in certain cases Arm performs better than even the other CPUs we were running on.

Now, for CNCF, how about we make the deal even sweeter? How about making it free? At KubeCon Chicago in November, Oracle announced $3 million in Arm compute credits for CNCF projects. Thank you to the many CNCF projects that are already embracing this donation. Not only do we have graduated projects like Kubernetes using them, we also have the newest sandbox projects, like kcp, starting to embrace them. We would be very happy to have more projects come and use this free grant: run your AI inferencing workloads on it, and please give us feedback. The credits can be used across the globe, in about 48 regions, including one data center here in Paris.

Thank you. Please visit us at the Oracle booth, and if nothing else, drive our wonderful race car. Thank you.