There you go. Hello, everyone. So my name is William Buchwalter. I'm a senior software engineer at Microsoft in the AI research group. Just to give you a little bit of context, I'm not going to talk about Azure, mostly, just Kubernetes in general. I've been working in the Kubernetes/ML space for the past 18 months. I've actually been contributing to Kubeflow since last July (it wasn't called Kubeflow back then), and a bunch of other stuff.

So I want to talk a little bit about why we are interested in Kubernetes for machine learning in the first place, right? Kubernetes was developed with microservices in mind, not GPU workloads or anything like that. So why does it make sense to use Kubernetes? Obviously, the strongest point for Kubernetes is the community. This community is just amazing and so large that if you're a company wanting to do machine learning training, for example, and you want to deploy a new training strategy, say something like population-based training, which is actually kind of complicated to do, you have a good chance of finding an open source implementation already working for you on Kubernetes. So obviously, that's a strong argument.

But it's also because Kubernetes, I think, has really well-designed and clean APIs. That means even if you don't find what you want and you need to start from scratch, it's actually much easier to do that on Kubernetes than it was just a few years ago. For example, I actually worked on population-based training, which comes from DeepMind originally, with a large customer. And to implement that on Kubernetes, it just took a few days and a Helm chart. It's really easy because the APIs are really nice.

And obviously, scaling is important. Kubernetes scales to pretty large clusters. For example, we have a nice case study with OpenAI. A few months ago, I think in January, OpenAI released this blog post called Scaling Kubernetes to 2,500 Nodes.
So they did that on Azure. And it wasn't easy. They had a lot of issues with etcd, networking, disk IO, et cetera. But ultimately, they managed to reach that scale with a very small team of engineers. I think there were two, maybe three people. And a single job, in that case, can go up to 10K cores. So that's pretty big. And those were the numbers they disclosed back then. With every single release of Kubernetes and etcd, it's becoming easier and easier to go even further than that. So I'm really excited to see where this is going.

Yeah, that's my Azure slide, I guess. So we have two offerings for Kubernetes on Azure. We have AKS, which is the fully managed Kubernetes, where you don't have to do anything yourself. And then, on the other side of the spectrum, we have ACS Engine, which is open source, where you can really do whatever you want with it. ACS Engine has had support for GPUs for quite a while, and AKS now has GPU support officially. And we are releasing this week a workshop, a set of ten lab modules to walk you through running Kubeflow on Azure. We're assuming no prior knowledge, starting from zero, starting from what Docker is. Because Kubeflow is nice, but we have to realize that a lot of people who want to use it don't know anything about containers and Kubernetes. So we have to make an effort to onboard them, right?

And I'm going to finish with just a few thoughts. I'm not going to talk about everything here, but I want to talk about two things that I think are going to be interesting. It's a bit far-fetched. The first one is Virtual Kubelet. If you didn't hear about that, it's a project that is basically an open source implementation of the kubelet, which you can then back with something like Azure Container Instances or AWS Fargate. But, for example, someone just made a request to add a provider for Azure Batch. Azure Batch basically lets you run GPU jobs.
And you might wonder why you would want to do that instead of just using GPUs on Kubernetes directly. The reason is that you can scale very fast, in a matter of seconds, with Azure Batch. So, for example, it would be really nice for use cases like transfer learning, where training times are very short and you want to keep control of the cost.

And another one, which I'm excited about but is very early, is Metaparticle. If you were at KubeCon last year in Austin, you might have seen the keynote by Brendan Burns, who basically made the point that Kubernetes is becoming the standard runtime of the cloud, right? And since it's a runtime, we also need a standard library to go with it, so that you can deploy to Kubernetes directly from your code, without having to write Dockerfiles and Kubernetes templates. And I'm playing with this idea of tailoring Metaparticle to work specifically for machine learning. So, for example, you could define a decorator in Python on top of your function to say, OK, I want to train this function using that many agents in parallel, et cetera. And when you run your script with Python, it's actually going to build everything and deploy it on the cloud for you, for example using a Kubeflow CRD, something like that. It's obviously extremely experimental, but I'm just sharing a few thoughts that I think are interesting.

And that's it for me. Thank you. All right.
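To make the decorator idea above a bit more concrete, here is a minimal Python sketch. Note that the decorator name `train` and its parameters are purely hypothetical and are not Metaparticle's actual API; instead of really containerizing the code and submitting a Kubeflow CRD to a cluster, this sketch just attaches the deployment metadata to the function and runs it locally, so only the shape of the idea is shown.

```python
import functools

def train(workers=1, gpus_per_worker=0):
    """Mark a function as a distributed training entry point.

    Hypothetical sketch: a real implementation would build a container
    image and create a Kubeflow CRD object when the decorated function
    is invoked. Here we only record the metadata and run locally.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # A real version would deploy to the cluster here instead
            # of calling the function in-process.
            return fn(*args, **kwargs)
        # Attach the desired parallelism so tooling could read it later.
        wrapper.deploy_spec = {"workers": workers,
                               "gpus_per_worker": gpus_per_worker}
        return wrapper
    return decorator

@train(workers=4, gpus_per_worker=1)
def my_training_loop():
    return "training started"

print(my_training_loop.deploy_spec)
print(my_training_loop())
```

Running the script would then be the single entry point: the decorator has everything it needs (worker count, GPUs per worker) to generate the corresponding Kubernetes objects on your behalf.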