Okay, so this is the Cloud, Containers and DevOps track. Next up we have Ria Bhatia. She's a program manager at Microsoft, and she's going to be talking about Virtual Kubelet. Without further ado, here's Ria.

Okay, hi guys. So I'm from Seattle, Washington, and I'm a program manager on the containers team. What that means is we own all of the fun services the last speaker just talked about: AKS, our managed Kubernetes service. There are competitors, too; Google has a cloud-based managed Kubernetes offering, and Amazon is coming out with one. Some of these are still in preview. But I'm not really going to talk about that, because the fun part is the open source side of all these things, not the part where you pay us money to manage your services, so we're going to go through the open source side instead. I do want to note that we have some pretty cool people on our team. Brendan Burns is one of the creators of Kubernetes, and he's our team lead for AKS, so we get the expertise of how Kubernetes was built baked into the team. And if you know Jessie Frazelle, she's on the team too. These are cool people to follow on Twitter; I'm not that cool, but they really are. She does a lot of work in the container space. Currently she's working on container isolation projects in the Kubernetes world, and I'll talk about which of our services she's working on. So yeah, we have this new service. It's managed Kubernetes; we hold the masters for you. There's no charge for the service itself, you're only charged per VM, but that's not the fun part. What is important is that it's all upstream Kubernetes, and we also publish all the code.
I know you guys want to see the code, and all of it is on GitHub. The way we run this service is that every single issue, every single feature we add, goes out in the open first and then into our service. So if you want to see what's going on with the service, it's all out there. Really, what it comes down to is this: maybe you don't want to pay us to hold your cores, but we are trying to do everything in the open. Microsoft, a couple of years ago, wasn't that open, or at least didn't seem like it. But this team literally is the open source team in Azure. All we do is open source, all we do is Linux, and right now all we do is Kubernetes too. Okay, fun features; you can get those online. I'm going to skip the demos for now because I'm really bad at going back and forth, so I'll run through these slides and we'll get to demos afterwards. Okay. About six or eight months ago (I wasn't hired into Microsoft yet; I started about seven months ago), Brendan and some other people came up with this idea: what if we made a service where we just give you a container? Basically, you run `az container create`, you give it an image name, you say you want a public IP, and then you have an image running in the cloud. That's all you have to do. You could use this for batch workloads, for dev/test, or really whatever you want, if you want to see how your image is going to behave in the cloud. It's super easy to spin up, and you get to specify the amount of resources you want: how many cores, how much memory. You're only billed per execution time. That's pretty cool standalone, but in terms of Kubernetes it gets even cooler. The way we run this service, which I think you'll find more interesting, is actually Kubernetes. We're running Kubernetes on the back end.
We have all these clusters, and we're basically spinning out pods and handing them to you. In the future we will have multi-tenant clusters. Right now they're single-tenant, so you're actually getting an entire VM with each pod (don't go out and quote that), but in the future we're repaving everything with Hyper-V, and this is something Jessie has been working on. All our Windows containers, all our VMs, are being repaved with Hyper-V, and all of our Linux ones with Clear Containers or Kata Containers. So we will have container isolation on each pod, meaning your container might be sitting next to someone else's in the cloud, and that's fine, because from your standpoint you shouldn't care about what's running underneath. All you should care about is: I got this container and it spun up really fast. So if you've never tried a container before, try this out. It's super easy, and I'll show you in a second. Now, if you want any of these orchestration features, you probably want an orchestrator. ACI, which is what we call Azure Container Instances, will never have any of those features. The way we built this service, it's a core compute function; it's to containers what a VM is to machines. We're giving you a form of core compute rather than a PaaS on top of anything, so think of it as infrastructure as a service rather than platform as a service. Okay, I'll skip through this. All right, now the fun part. The stuff so far was built without me; it's cool, and you guys should use it. But this is where it gets really interesting, because I get to work on this every day and it's completely open source. So we were like, okay, we have these two great services. We have our managed Kubernetes, which all the other cloud providers have.
And we also now have this new service; Amazon just came out with Fargate a month or two ago, which is their ACI competitor. When you come out with a service and you see other people repeating what you do, that hopefully means you're doing it right. But when we released ACI, we also came out with the ACI connector, and the reason we did that is we realized people don't want to manage nodes. That was the whole point of ACI: you don't see any of the VM infrastructure, you only see your pods and your containers. So when you combine Kubernetes with this pods-as-a-service, containers-as-a-service idea, you basically get a virtualized cluster. You don't see any of the infrastructure underneath. All you see is your pods being deployed in your cluster, and that's all you manage. We also hold the masters for you, so you don't see those either; you literally see no VMs in your cluster. That's really what we're working towards. The ACI connector is an open source project. Actually, the project itself isn't called ACI connector; that's just Azure's implementation of it. But, use cases. Say this is your Kubernetes cluster. You have your agent VMs, and you don't see any of your masters in here because you're hopefully using a managed product, because you don't want to manage nodes. Then you have this connector, and the connector looks just like a virtual node in your cluster. Whenever you decide to schedule things onto that node, they spin out into a completely different service. They won't be in your cluster; they'll be somewhere else. But you still get to manage them, and you still see them the way you would in a normal cluster. One use case for this is burst compute. For example, say you're a meter-reading company and there's a tsunami or, I don't know, the power goes out because of some random weather event.
So the power goes out, and now you suddenly have to read millions of electricity meters all over your city, town, or block. You need a spike in compute power for that, and bringing up that compute is where the hard part is. We already have all of these ACI clusters warm and ready for you. Spin out into our infrastructure rather than doing capacity management and thinking, maybe in the distant future I will need 200 or 300 cores. Instead of keeping that warm in your own infrastructure, you don't need to. You don't need to think about it. Just say: when it happens, I'll spin out into this thing, I get billed per second, and I get billed only for the resources I use. So you're really not at a loss bursting out into ACI rather than keeping all of that in your infrastructure. Because what ends up happening is, if you do capacity management on your own, you're probably leaving a little slack just in case you need to add more pods, and you probably don't have 300 cores running unless you have a steady workload with that much traffic coming in. When you need the compute resources, just ask for them; you'll get them nearly instantaneously, and they'll spin down afterwards. I know a lot of people keep VMs warm all the time for testing or whatever, and because they still want to use Kubernetes, they keep them warm all the time and end up spending money on resources they don't use. Do this instead, is what I would say, if you want to. Okay, these are the commands for it; I'll go through a demo with them later. Now, the even cooler part: community projects. The ACI connector is Azure's flavor of what this means. We're basically creating pods as a service, containers as a service, serverless containers. But we went back to the drawing board and thought, wait, everyone else is making Kubernetes clusters.
So maybe in the future everyone else will also make ACI-like services, or maybe people running their own clusters in their own data centers will want this. So we created a project that's pluggable: anyone can plug into it and create an interface into Kubernetes. And this project is actually getting a lot bigger than we imagined, because people are creating all kinds of interfaces into Kubernetes, like IoT interfaces and a lot of other things, which I think is really cool. It's one of our newest community projects. We're working within Kubernetes itself to build this project up. We're working with Amazon, with VMware, and with Hyper.sh to define what this project means, because it's super experimental and super new. And this is the driver for the connector: everything that goes into this project goes into the ACI connector. You can install it on any cluster. If you have a Minikube cluster, you can install it. If you have a cluster on anyone else's cloud, you can install it there too. Just note that we don't have a provider for bare metal yet; if anyone wants to build it, I would love to have that provider. So you can't spin out onto bare metal in your own data center yet. You can spin out to Azure or to Hyper.sh's container instances, but not bare metal yet. Working on it. This is the architecture, this is what it looks like: Virtual Kubelet comes up as a node in your cluster. So you can start to imagine the crazy things you can do, because every time you create a pod, it creates a pod somewhere else. It's a fun hacking project too. Okay, I think I'm going to skip everything else because I want to get to demos to show you what this looks like. But this is some of the stuff I think about. I'm the maintainer of this project, so every day I'm thinking: okay, what do we need to add next? What are the bugs?
What's the next feature we should put in? That's why all this stuff is here. It's basically me asking: if you guys want to help out, please do. We need all the help we can get. It's really a grassroots project: me, an engineer, and people from all over the open source world. Feature stuff, yeah, okay. If you want to see these slides again, I can definitely go back to them, but let's go to the fun stuff. Okay. Do we need this mic? Can you guys hear me without it? Can you hear me? Okay, cool. So the first thing I want to do is just create a container. The way we do that is `az container create`. I also give it a container name, so I'm just going to say aci-container. And then I have to deal with the IP; I want to say it's public. In Azure we have this thing called resource groups. It's just where you put all the bits and pieces of what you want to use in the same place. So I throw it a resource group, and the last thing I do is throw it an image. We have this simple ACI demo image that I'm going to use. Hopefully I didn't mess anything up. Okay, so that's going to go ahead and create, and that's all I did. That's a container in the cloud. It's going to start running and spit back an IP address, and that's it. The internet gets a little slow here, but this is what it spits out: there's the IP. Also, if you don't specify public, it won't be accessible to the world. This will take a little while to populate, just because we hand out the IPs first. The container is already running, but we need to populate the IP with the actual image, so it'll take a couple of seconds. While that's going, let's go over here. Okay, let me see that. This is my AKS cluster. I'm running version 1.7, which I shouldn't be, because there was a security patch that went out, I think in 1.9, so I should really upgrade. But I have a two-node cluster right now.
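Written out in full, the container-create flow from that demo looks roughly like this. It's a sketch: the resource group name and image are illustrative stand-ins, not the exact ones used on stage, and it assumes a logged-in Azure CLI.

```shell
# Sketch of the ACI create flow; names are illustrative stand-ins.
# Requires the az CLI, logged in to an Azure subscription.
az group create --name aci-demo-rg --location westus

az container create \
  --resource-group aci-demo-rg \
  --name aci-container \
  --image microsoft/aci-helloworld \
  --ip-address Public \
  --cpu 1 \
  --memory 1.5

# The public IP takes a few seconds to populate; fetch it once it's up:
az container show \
  --resource-group aci-demo-rg \
  --name aci-container \
  --query ipAddress.ip --output tsv
```

You're billed per second for the cores and memory you asked for, and `az container delete` tears the whole thing down when you're done.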
And now I'm going to install the connector, the thing I just talked about: the way we can spin out to compute resources that aren't in my cluster. That's what I'm going to do right now. I'm running an AKS cluster, so that means I can use the az command line to install the connector. Otherwise, you would have to do it from a Helm chart or something else; I won't get into Helm, but you can ask me questions about that later if you want. Okay, so `az aks install-connector`. We literally just wrapped up the open source stuff and put it in a command. That's all we did, and I really wanted this, because it's kind of a pain to do it the manual way. So, you have to give it the AKS cluster name; this is the name of the cluster I created. Then I throw it a resource group, the same one, and then I give it a connector name, so I'm going to say my-aci-connector. And this is the fun part: I get to specify the OS type. We have Windows and Linux, so if you're running a Linux Kubernetes cluster, you can actually start spinning out to Windows and Linux in the same cluster with this. So I'm going to install both of my connectors. This will go through; we're using Helm on the back end, so it's all just wrapped up, and this is what Helm will spit out. Helm is an easy way for you guys to deploy your applications to Kubernetes, and that's exactly what we just did. It spits out the Windows part, it spits out the Linux part. And if I do `kubectl get nodes`, which takes a little while to come back (the internet's being the internet), feel free to stop me if you have a question. [Audience question about the Windows image, partly inaudible.] It's Windows Server Core that we've put into it. In your image, you can specify whatever kind of Windows image you want. [Audience]: So would it matter what Windows OS you're running for us?
Okay, yeah, except there's one image that doesn't work. Windows containers are really weird right now, in that some of them aren't backwards compatible with past versions of things. So you would have to talk to us. There's this one image that never works, and if you have it in your layers of Docker images, it'll completely fail and you'll have no idea why. Other than that, yes, we support everything else. So there we go, we have two new nodes in our cluster. They look like nodes, they will act like nodes, but they're not actually physically there. I'm not paying for anything extra; they're just hanging out until I say, okay, deploy resources to them. A fun thing to note: if you want to be involved in the design of Virtual Kubelet, you can. We have architecture meetings every week, and we also go through SIG Node almost every week, when I'm not abroad. You can help influence the design of this. It's super early; we're just bringing it up and experimenting with it. The way it works is that the connector runs as a pod on one of these nodes. So in the future, when we get to serverless clusters or virtual clusters, we're going to have to find a place to run these pods, and it might be on the control plane itself. These are design decisions we'll have to make. Networking is a big question too: where do we run all the networking for the connector when it's not actually a full-fledged node? It's going to work, we're going to make it work, but we have a lot of design decisions ahead of us. So yeah, they're just running as pods, so if you ever have an issue, get the logs out of the pod itself, and that will hopefully help you out. Okay, so I just deployed the connector. Now what I'm going to do (how much time do I have left, five minutes? Okay) is deploy an entire demo.
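For reference, the connector install she just ran is a single CLI call when the cluster is on AKS. A sketch follows, using the names from the demo; `install-connector` was a preview command, so the exact flags may differ by CLI version, and on a non-AKS cluster you would install the equivalent Helm chart by hand instead.

```shell
# Sketch: install the ACI connector (Virtual Kubelet) on an AKS cluster.
# Cluster, group, and connector names mirror the demo; substitute your own.
# 'install-connector' was a preview command; flags may vary by CLI version.
az aks install-connector \
  --resource-group aci-demo-rg \
  --name my-aks-cluster \
  --connector-name my-aci-connector \
  --os-type Both        # Linux, Windows, or Both

# The connector(s) then appear as virtual nodes alongside the real ones:
kubectl get nodes
```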
When I first started at Microsoft, I was like, I don't get how this all works together. So I decided the best thing to do was just get my hands into it and make something. That's what I did, and that's what I'm going to do now: install this demo. It's called aci-demos, so if you want to play around with it, just go to my GitHub, which I can show you guys after, too. I'm using Helm to install this, which is that package manager. So in one command, I'm going to install an entire front end, back end, and some worker pods. This application takes a bunch of pictures from a back end in my Azure Blob storage and sorts them out. Pretty simple. And it has a UI, but I only have one worker pod running in my physical cluster right now, so it's going to be super slow. Let's go and see what's happening. If I go here... oh, okay, so there it is. That's from before; that's it running. This is my UI. When it says zero, zero, that means the containers are running, they're just not processing much, because I have one container in my cluster and it's super slow. But let's see what's going on with the pods. [Audience question about OpenStack.] I have not talked to OpenStack. No, they haven't talked to me. [Audience]: So is it the providers themselves that add support for it? It's whoever adds support for it. If you are a provider and you want to add support, please do. For example, Amazon right now is trying to figure out how they're going to implement Fargate into this, which I think is going to be super hard. They're trying to figure it out, so they're joining our architecture meetings and so on. Anyone that wants to build a provider, come on in. Okay. So these are the pods that are running: the back end, the front end, and then my in-cluster worker pod.
So it's just taking the pictures and processing them, using OpenCV for the facial recognition. But let's go back to the UI and see if it's doing anything. No, it's still not processing; it's literally just that slow, and it's probably also the internet. So what we're going to do is deploy ten more worker pods, into ACI and not into my infrastructure, because I don't want to start another VM. Say I'm running a one-VM cluster: I don't want to start another VM and then have to spin it down later. I just want to wrap the work up and throw it somewhere. So that's what I'm going to do, and the way I do it is just through Kubernetes itself: `kubectl scale deploy` with, I think, `--replicas` for the replica count. This will scale the number of pods I have running in Azure Container Instances out to ten, hopefully. It does take some time, depending on the image size, because we have to pull it down from Docker Hub, and the internet is a little shoddy, so it'll take probably a minute for my ACI pods to start up. If you have an efficient image, it'll be a lot faster; we have Azure Container Registry, so you can have private images (which I should put in here, and have before, but haven't done yet), and if you put the registry in the same region, so they're co-located, the pull will be a lot faster too. We are also working on image caching. It's one of our biggest customer asks: basically, can you cache my images? And we're like, yes, we can, let's go engineer that. So we're creating that right now. Okay, let's watch it create, and while this is working, I have time for questions. Go. [Audience question about how the connector was first written.] So when we first wrote it, this was actually Brendan Burns' pet project. He created the ACI connector in TypeScript, for fun, because he wanted to learn a new language.
And then Jessie Frazelle became one of our developers on this project, which is pretty cool. We also got Erik St. Martin and some other cool people from the Go community to develop on it. For a week, right before KubeCon, we were stuck in a conference room together, and every day we were building this. Okay, so the pods are pending, but they'll eventually start. And then we rewrote it in Go, because we figured that's probably better for the community and for Kubernetes in general, given that we want to put this into Kubernetes itself in the future. We're thinking about it. [Audience question, roughly: do you have to write your provider in Go?] No. If you can write an interface from Go to whatever language it is, then you're fine; it doesn't matter. For example, someone wrote a Rust provider. It's really just an interface, and it translates the APIs into that language or whatever else. Basically, if you write the provider that does the translation, you're fine. This is just an interface, so people are trying to use it as a language translator. People are also trying to use it to translate what IoT means and what other orchestrators mean. It's super flexible, so yes, you can write in any language you want. Well, one day. Okay, so it's starting; some of the pods have actually started. My ACI pods, if I go and run this again, are probably all running. What's happening here is that my pods in ACI are handling my load right now. So, can I take questions? Question time, okay, yeah. Basically what's happening is they're sorting the pictures into the faces and the no-faces. Well, it missed one, but I'm going to blame OpenCV for that and not me. So yeah, that's basically what it does, and this is what we do: we handle load, we handle bursty workloads, we handle spikes in workloads. Whatever you want to throw at us, we'll handle it.
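Her point that the connector is "just an interface" is the heart of Virtual Kubelet. The real project is written in Go, so what follows is only a Python sketch of the shape of that contract, with every name hypothetical: a provider implements pod lifecycle operations, and the virtual node forwards each scheduled pod to whichever provider is plugged in, whether that backend is ACI, Hyper.sh, or something IoT-shaped.

```python
from dataclasses import dataclass

@dataclass
class Pod:
    namespace: str
    name: str
    image: str
    phase: str = "Pending"

class InMemoryProvider:
    """Hypothetical provider: 'creates' pods in a local dict, the way the
    ACI provider creates container groups in Azure. Method names only
    mirror the spirit of the real Go interface, not its actual API."""
    def __init__(self):
        self._pods = {}

    def create_pod(self, pod: Pod):
        pod.phase = "Running"  # a real backend would start containers here
        self._pods[(pod.namespace, pod.name)] = pod

    def get_pod(self, namespace: str, name: str):
        return self._pods.get((namespace, name))

    def delete_pod(self, namespace: str, name: str):
        self._pods.pop((namespace, name), None)

class VirtualNode:
    """Stand-in for the virtual-kubelet node: it owns no VMs, it just
    forwards every pod operation to whichever provider is plugged in."""
    def __init__(self, provider):
        self.provider = provider

    def schedule(self, pod: Pod):
        self.provider.create_pod(pod)

node = VirtualNode(InMemoryProvider())
node.schedule(Pod("default", "worker-1", "aci-demo-worker"))
print(node.provider.get_pod("default", "worker-1").phase)  # prints: Running
```

Swapping `InMemoryProvider` for one that calls a cloud API is, conceptually, all a real provider does; Kubernetes keeps scheduling pods to the node and never sees what's behind it.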
Yeah, thank you for listening. Okay, very good. She'll be around somewhere if you want to ask her more questions about this, I'm sure. So, can the next speaker come up? Who I think is Mike.