Hi there, and welcome to another edition of Tuesdays with Corey. We are here outside enjoying the beautiful, hazy Seattle weather. The reason we did this is because Azure Container Instances are on fire. Yes. Thank you. Can we get someone who's going to laugh at my jokes for the next episode, please? Anyway. So it's hazy outside, but we are enjoying the beautiful sun here on this Tuesday with Corey. Here we are. And we are going to show the second round of Azure Container Instances, and this is the ACI connector for Kubernetes. Tell me a little bit about what we've done here. We promised everyone last week, and we are living up to it. Waiting with bated breath. That's right. So what we talked about last week is that Azure Container Instances are an easy way to deploy containers in Azure without any VMs that you have to provision or manage. Within seconds. Yep. But when we were designing the product, we thought, hey, I think people are going to really like this. Yes, as they have. And the feedback has been great. But they're probably going to like it so much that they're going to want to build larger applications on top of it: applications with multiple containers that need to talk to each other, which they want to scale and do rolling upgrades on. All that kind of stuff, the application goo. Exactly. But we didn't want to build another orchestration API. There are lots of great orchestration platforms out there, and we didn't want to build another proprietary one. We didn't think that would be cool. So what we did instead is experiment with what we're calling connectors, which are basically bridges between orchestration platforms and ACI underneath. That allows you to take advantage of the capabilities of those open-source platforms, like Kubernetes in this case. Take advantage of that, build to that.
So basically you're going to develop your application against the Kubernetes API, use kubectl, and be able to launch that way, but also get the benefit of that advanced underlying platform: really fast containers and so on. Is that the right way to think about it? Yeah. If you think about what these orchestration platforms do, they schedule containers, but then they do a bunch of other things on top, right? They enable scaling and rolling upgrades and all that sort of thing. And so with ACI, rather than only being able to schedule containers through those APIs onto VMs that you own, you can use ACI as sort of a virtual node. A cool virtual node sort of thing. Yeah, exactly. Well, can we go to the demo? Let's take a look. I suppose we can. All right, here we go. So first of all, here's the GitHub project. We're going to do all this development out in the open. At this point it's very much experimental, but we'd love to have people come and work on it. And contribute with us. We've already had some external contributions, which is great. And basically how this looks is, if I go over here to kubectl, which is of course the CLI for Kubernetes, and say kubectl get nodes. This is a standard Kubernetes cluster that I've spun up with Azure Container Service, in kind of the bare-bones configuration: a single master node and three agent nodes. Those are VMs running inside my subscription. Got you. And normally you would just deploy to those, they would start filling up, and sooner or later you'd have to add more nodes, and so on. Yeah, exactly. So what I want to do in this case is add the ACI connector so that I can then schedule some pods through it, right? So I'll do kubectl create and pass in this ACI connector. The YAML. YAML.
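As a reference sketch, and not verbatim from the demo, here is roughly what the manifests in this walkthrough look like, based on the public aci-connector-k8s project. The image name and tag, environment variable names, virtual node name, and taint key are all assumptions or placeholders:

```yaml
# Sketch of the ACI connector deployment (credential values are placeholders).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aci-connector
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: aci-connector
    spec:
      containers:
      - name: aci-connector
        image: microsoft/aci-connector-k8s:canary  # assumed image name/tag
        env:
        - name: AZURE_CLIENT_ID
          value: <service-principal-client-id>
        - name: AZURE_CLIENT_KEY
          value: <service-principal-client-key>
        - name: AZURE_TENANT_ID
          value: <tenant-id>
        - name: AZURE_SUBSCRIPTION_ID
          value: <subscription-id>
        - name: ACI_RESOURCE_GROUP
          value: <resource-group>
---
# Explicit scheduling: pin a pod to the ACI virtual node by name.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
  nodeName: aci-connector  # assumed virtual node name
---
# Taint-based scheduling: tolerate (but don't require) the ACI node's taint.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-tolerant
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: azure.com/aci  # assumed taint key, shown for illustration
    effect: NoSchedule
```

Each of these would be applied with `kubectl create -f <file>.yaml`; the connector deployment is what makes the virtual node appear in `kubectl get nodes`.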
Got it. And actually, let's take a quick look at what that looks like over here. It's basically just a Kubernetes deployment with a Docker image for the ACI connector that we've posted up on Docker Hub. And what that does is go ahead and set up this virtual node for the ACI connector. So it basically creates a node, but a virtual one. So it doesn't actually spin up a VM. Yeah, exactly. We can just do this in the integrated terminal inside VS Code. So now if I do get nodes, you'll see this ACI connector over here, 30 seconds old. Yep, got it. Exactly. And now that's available, ready to receive traffic. Yeah. For people familiar with Kubernetes, what this is basically doing is acting like the kubelet. The kubelet is the agent that runs on each traditional Kubernetes node. Right, and Kubernetes supports multiple cloud providers and so on. Yeah, but this is now acting as that agent; in this case, it's acting as sort of a proxy in front of ACI. Cool. So from the Kubernetes perspective, it looks just like any other node, and it can be scheduled. But now it's a node that will never fill up. That's right. So I can now do kubectl create. Because the cloud is infinite. Infinite. Elastically scalable. Really zoom in on that while he's typing his command. Okay, let's go back. So now I can deploy nginx, which is our popular web server. And this pod specifically called out that it wanted to deploy on ACI. Yeah, so there are a couple of different ways that you can schedule these things. The one I just scheduled did it explicitly. It says, please deploy me on ACI and only on ACI. Got it, got it. But you could also express a preference. The other way you can do it is by taking advantage of a Kubernetes feature known as taints, which is basically where you can identify a particular node as being somewhat different.
It's not just like any other node; maybe it has some specific hardware, or there's something different about it. And pods and deployments can then identify either that they tolerate that taint, or that they don't. In this case, you'll see the toleration here for the ACI taint, which basically means that this deployment or pod is saying, I'm okay being deployed on that kind of funky ACI connector. No problem. I can go on a regular VM node, or I can go onto ACI. So this is kind of like a preference sort of thing, right? Like, if there's space, go. Yeah, right. Okay, cool. Yep, exactly. And at this point we should have, let's see, get pods. And there it is, up and running. And if I go. There's the nginx pod at the bottom, yes. Yep. We looked last week at how we can provision public IPs for containers as well, and so it's grabbed one of those public IPs for this nginx container. If I copy this and hop over into the browser here and paste it, there we have nginx being managed by Kubernetes. Wow, that's super cool. So what you've shown us today is basically the power of ACI, how fast it can deploy, no infrastructure management, no VMs at all, but with Kubernetes sitting on top, using the exact CLI that works with Kubernetes today, and with options for how you configure this in the YAML: either force a pod onto ACI, or prefer it, or be okay with it but use the regular nodes first if there's space. All those options are possible. And there are two ways that we expect people will use this, essentially.
One is as the quickest and cheapest way to get started with something like Kubernetes: you could have the Kubernetes API deploying a single container through ACI, play with it for five or ten minutes, shut it down, and pay only for that period, rather than setting up a bunch of VMs. The other one, which is super interesting and maybe more relevant for production deployments, is to have your default set of VMs handling what you consider your default workload. But then you get posted on Hacker News or Slashdot, you get a massive spike, and you need additional capacity. Rather than spinning up additional VMs to schedule those containers on, you just immediately go to ACI. And a combination of those things, right? You immediately go to ACI, then the VMs catch up, you can move things over, and you get the best cost, the best of both worlds indeed. Well, with that, thank you so much, Sean. And we have maybe one more thing we want to show people, which is Windows. Is that right? Hold on, we'll show you Windows next week. So with that, thank you so much, Sean. Thank you for listening and watching. If you've got questions or comments, let us know: it's hashtag azureTWC, and we will be enjoying this beautiful sun in the same clothes one week from now. And with that, thank you. Have a great Tuesday. Bye bye.