Hey, my name is Maciej Kruczacki. I'm a product manager working on GKE, so my role is focused on the offering we provide to users with Google Kubernetes Engine. Before I start, I'm really surprised by how many people we have here. I remember three months ago, when we were discussing with the folks here and with our colleagues at CNCF whether we should organize this event for the first time (and I think it's worth recognizing that this is the very first co-located event on Batch and HPC), we were concerned whether we'd even fill a room. We were targeting around 50 people, which is about what our estimates suggested. So it's really cool to see all of you here. Maybe a quick show of hands: who among you is a Kubernetes user? It looks like a third, maybe almost half. And who among you is a contributor to core Kubernetes or to projects that run on it? That's actually interesting. And who is a vendor providing products that use Kubernetes? OK, cool. Who didn't raise their hand at all? OK, we need to talk to them later. But it's really exciting to see you here. In this short five-minute sponsor keynote, I'd like to show you our thinking on how we embed Kubernetes in the overall offering of Google Cloud, so that we help our users and other companies in the ecosystem either consume services or provide them to end users. And as Ricardo said, the objective of the whole event is really to build a community around the topic of HPC and batch processing in Kubernetes. So I'd love it if, during the breaks after this session, we could spend some time discussing any feedback you have, or set up meetings later to dive deeper into your thoughts on how we're approaching the problem space.
So with that, I probably don't need to explain to anyone in this group, which is so dominated by users, that the importance of high performance computing and data processing for the scientific workloads researchers are running is only growing in the modern world. We face more and more challenges that require robust IT infrastructure for scientific workloads and data processing, and Kubernetes brings a very important offer in this context. It provides an open API that abstracts much of the infrastructure and the differences between on-premises environments and the various cloud providers, so that batch admins or platform admins can offer provider-agnostic primitives for researchers to consume. At the same time, Kubernetes is low-level enough in its abstraction layers that you can offer that platform without making major sacrifices in things like performance efficiency, which are critical for many high performance computing workloads. As we think about what we're trying to offer in Google Kubernetes Engine, and in GCP in general when it comes to Kubernetes (the colors on this slide could maybe be a bit better), we divide the capabilities that we have shipped, or are working on launching, into three main domains. One is job management, or you could call it ease of use. There's a lot of investment from us into the open source community here: how Kubernetes orchestrates jobs, queuing, operators, and other capabilities, along with integration of the ecosystem. It also includes capabilities for simply managing clusters, like ease of upgrades and of creating clusters, especially ephemeral clusters in this domain. The second domain is performance.
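To make the idea of provider-agnostic primitives concrete, here is a minimal sketch of a Kubernetes batch Job manifest, written as a Python dict. The job name, container image, and resource sizes are hypothetical placeholders, not anything from the talk; the point is that the same `batch/v1` Job spec runs unchanged on-premises or on any cloud provider.

```python
# Minimal sketch of a provider-agnostic Kubernetes batch Job.
# The name, image, and sizes below are hypothetical placeholders.
simulation_job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "mc-simulation"},  # hypothetical job name
    "spec": {
        "completions": 100,           # run 100 tasks in total
        "parallelism": 10,            # at most 10 pods at a time
        "completionMode": "Indexed",  # each pod gets JOB_COMPLETION_INDEX
        "template": {
            "spec": {
                "restartPolicy": "Never",
                "containers": [{
                    "name": "worker",
                    "image": "example.com/simulation:latest",  # placeholder
                    "resources": {
                        "requests": {"cpu": "4", "memory": "8Gi"},
                    },
                }],
            },
        },
    },
}
```

A batch admin would apply this with `kubectl apply` (or submit it through a queuing layer) without the researcher needing to know which provider the cluster runs on.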
Here it's both improvements to the open source stack, in terms of its efficiency and how it operates, and launching a variety of capabilities in the infrastructure and other parts of the ecosystem: GPUs, new CPU architectures, and better integration of Kubernetes and our Google stack with the underlying hardware. Last but not least is cost efficiency, and all the capabilities tied to the fact that high performance computing requires massive amounts of expensive resources, which makes fitting within the budget and financial constraints of workloads a very sensitive topic. I'll just flash this slide to visualize the level of our commitment to high performance computing. It shows some of the hardware and software capabilities that we have been launching and that we have on the roadmap. They cover not only hardware, and not only things related to Kubernetes and our customers' Kubernetes; we also support those users who would prefer alternatives. For Kubernetes Engine itself, I'll just highlight the most recent and most notable capabilities that we see among the HPC community. There's definitely the fact that we commit to supporting clusters of up to 15,000 nodes, three times larger than the 5,000 nodes tested by the open source community. Our autoscaling algorithms and capabilities are custom and internally built, to offer the best integration with our infrastructure, and they are all built so that optimizing cost for the customer is their primary objective. Then there's ease of use, of upgrades, of cluster creation; performance capabilities; integration of hardware such as GPU slicing; and orchestration of Spot VMs. And there's more to come.
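To make the cost-efficiency point concrete, here is a hedged sketch of the pod-level fields involved in running on discounted capacity: selecting and tolerating Spot VM nodes, and requesting a GPU on a node where GPU time-sharing ("GPU slicing") lets several pods share one physical device. The label and taint keys follow documented GKE conventions, but treat the exact names, image, and values as assumptions for illustration.

```python
# Sketch of pod spec fields for cost-conscious HPC scheduling on GKE.
# Label/taint keys follow GKE conventions; everything else is a placeholder.
spot_gpu_pod_spec = {
    "nodeSelector": {
        # Land on Spot VM nodes: deeply discounted, but may be reclaimed.
        "cloud.google.com/gke-spot": "true",
    },
    "tolerations": [{
        # Spot node pools carry a taint; the pod must tolerate it.
        "key": "cloud.google.com/gke-spot",
        "operator": "Equal",
        "value": "true",
        "effect": "NoSchedule",
    }],
    "containers": [{
        "name": "trainer",                      # hypothetical container
        "image": "example.com/trainer:latest",  # placeholder image
        "resources": {
            # With GPU time-sharing enabled on the node, multiple pods can
            # share one physical GPU while each requests one logical GPU.
            "limits": {"nvidia.com/gpu": 1},
        },
    }],
}
```

These fields would be dropped into the pod template of a Job like the one in the three-domains discussion above; the workload must of course tolerate preemption (e.g. via checkpointing) for Spot capacity to be a good fit.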
If you want to chat and share your thoughts on where we think we should go, I'll be very keen to talk. And now I'll pass to Aldo and the next presenter. One more thing: on these cards you can find a URL, also shown here at the bottom of the slide, and it will take you to a small present we have for you. We didn't know how many people would show up online versus in person, so as a form of sponsor swag we're offering training credits, which you can redeem by logging in at that link or at the one on the cards on your tables. And with that.