Okay, so we can get started. Hello, hi, everybody. Thanks for being here today. Scott and I are going to tell you about building cloud-native applications with containers, functions, and managed services. So first, a little bit about ourselves. Scott? Yeah, so I'm Scott Coulton. I'm based out of Sydney, Australia. I work in the developer advocacy team at Microsoft, specializing in Kubernetes, container runtimes, and all those sorts of things. I've been in the Docker Captains program for about four years. Probably Phil was the only one that was older than me, and unfortunately he went to Red Hat, so he got kicked out. So now I'm like an old OG Docker Captain. And next is Patrick. You know, Phil, we cannot joke about that. Oh, no. So hi, everybody. I'm Patrick Chanezon. I work with Scott in the cloud advocacy team at Microsoft. And before that, I spent four years at Docker with Tonis, who talked just before about BuildKit. So BuildKit, containerd, I'm a big fan here still. My focus is on containers on Azure. So today we're going to talk about one big topic: over the past few years, people have been using Docker and then Kubernetes to build these cloud-native applications. But nowadays these applications are becoming more complex, and typically you're building an application with three kinds of components: containers, serverless functions that respond to events, and managed cloud services. So in this talk, we're going to try to cover the developer experience for that. How do you package your application when it contains all these components, and how do you make it scale? One other thing I wanted to say is that one of the reasons I was super happy to join Microsoft is that Microsoft's mission is around productivity. It's about empowering every person and organization on the planet to achieve more. And it started with Bill Gates creating Visual Basic. So it started in developer tools.
And so Microsoft has always had a strong story around trying to make developers more productive. And I think with the history of cloud-native applications over the past five years, we're now getting to a stage where it's really important to take a look at how we can make developers more productive. So around these three abstractions I talked about, containers, functions triggered by events, and managed cloud services, a lot of portable serverless platforms have emerged on top of Kubernetes over the past three years. You have things like the Fn project by Oracle, whose characteristic is that it has a nice way of composing functions. Nuclio, which is super high performance and more suited for intensive data processing. OpenFaaS, which is like the Swiss Army knife of serverless platforms: it can do a lot of things, and it's super easy to use and get started with. GalacticFog and OpenWhisk. More recently, there have been two projects that are more like middleware on top of which you can build a function experience, which are Knative and KEDA. We'll talk a little bit about KEDA here. And the CNCF has a serverless working group. The diagrams that show what a portable serverless offering looks like come from their white paper, which I highly recommend if you're new to this space. On the Azure side, we have our own first-party service called Azure Functions that we manage at Microsoft, but the whole runtime for it is open source. So you can run Azure Functions in containers in your own Kubernetes cluster if you want. It's all on GitHub. So I'm going to talk about the dev experience, and I nuked my slides here, but I'm going to talk through it. It's the third time we've messed it up.
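To make that portability concrete, the Azure Functions Core Tools can scaffold a function project with a Dockerfile, so the same function host can run locally, in Azure, or in any Kubernetes cluster. A rough sketch of that flow (the project name, runtime, and ports here are just example placeholders):

```shell
# Scaffold a Node.js function project with a Dockerfile (--docker)
func init my-functions --worker-runtime node --docker
cd my-functions

# Add an HTTP-triggered function to the project
func new --name hello --template "HTTP trigger"

# Build the open-source Functions runtime plus your code as a plain container
docker build -t my-functions .
docker run -p 8080:80 my-functions
```

The same image could then be deployed to a Kubernetes cluster like any other container.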
The dev experience I wanted to talk to you about is called Azure Dev Spaces. It's a service that we operate in Azure, tied to a Visual Studio Code extension, so that you as a developer, when you're building an application, don't need to know anything about Kubernetes. You don't even need to have the runtime of the language you're developing in installed on your machine. You just need your source code and VS Code. I'll show you what it looks like. The goal of Azure Dev Spaces is to make it easier for developers to collaborate when they're using common managed cloud services for their application. The way the experience works is that you have a VS Code plugin, and a command line as well called azds, and that VS Code plugin interacts with a server-side controller that you install in a namespace in your Kubernetes cluster. Once the connection between the two is established, the VS Code extension is able to prime your project. It looks at your project and recognizes, oh, this is a Node.js project, I'm going to create a Dockerfile for it, I'm going to create a Helm chart for it. And then it connects to the server-side controller sitting in your Kubernetes cluster and passes all the code to it, and on the server side we do a docker build. And actually, I wonder if they're using the new BuildKit functionality or not. I should talk to the team about that, because there are lots of options there. Then it deploys it with the Helm chart, and then it opens a connection for debugging. So you're in VS Code, you initialize your project, and then you can start debugging in the cloud right away. And your code can leverage a series of services that are already set up in Azure. There's an ingress controller in there as well that can create URLs.
Yeah, in addition to that, it can create URLs that are public so that your colleagues can test your code publicly, or it can create a tunnel between your laptop and the code in the cloud. So let me show you what it looks like. So here, in order to play with Kubernetes, I always use Docker Desktop. I have my local Kubernetes cluster, which is still starting. And here in Docker Desktop I can switch context. I'm going to use my AKS cluster, a cluster in Azure managed by AKS that I have created. So I check that my context is right, and I'm just going to clear that out. And I'm just going to do a kubectl get nodes to show you that this is not the one on my laptop; it's the one in AKS that I'm connected to. I created it 65 days ago. I hope I won't have any problem with accounting, because I should delete my clusters more often. Then I do a kubectl get namespaces, and you will see that I have installed Azure Dev Spaces in my cluster. There's one instruction with the Azure command line to do that. Once I've done that, it creates this azds namespace. And when I do kubectl get all -n azds, looking at what's inside of that namespace, you'll see I have a DaemonSet to pre-pull the images that I'm going to use, a service for the controller, Tiller, which is used for Helm, and then Traefik, which is used as an ingress controller. So once I've done that, I can start working on my project. I'm just going to open VS Code on my azds node web frontend. That's a very simple Node.js application, just to show you what the experience looks like as a developer. In there I don't have much: just a Node project, package.json, some CSS and JavaScript, and an index.html page. So, a pretty simple application. And I'm a web developer.
I don't know anything about Kubernetes, but I want to get started, and maybe my application is using some cloud services that I have set up. So here's what I'm doing: I go to the command palette and use the Azure Dev Spaces extension, "Prepare configuration files for Azure Dev Spaces." When I click there, it asks me whether I want a public endpoint for my application. In this case I'm going to click yes, but if it were a really secret application I could say no. And you can see what's happening there: a Dockerfile appears. No, I haven't installed the extension yet. So a Dockerfile appears, some configuration for Azure Dev Spaces that I could modify, and then a Helm chart. And in VS Code I have a launch configuration, which means I can start debugging my application right away with that launch configuration. I don't need to set up anything. If I go into debug mode, you can see I have a launch server on AZDS. And if I click there, you can see it's launching something: it's executing azds up with some port mapping. What this azds up command does is connect to Azure, to my AKS cluster, to the azds controller on the other side. It pushes the code over there, or syncs my changes to the code, does a docker build (and this is where there could be some interesting optimization with BuildKit), then does a Helm deploy of the whole application, and then creates a tunnel between my local machine and the remote endpoint so that I can start debugging right away. And actually, if I go... Okay, so I think this is ready. If I go into the files, into server.js, I can put a breakpoint in there. And when I go to my terminal, it tells me that the app is available at a public endpoint.
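The same flow also works from the azds command line, without the VS Code extension. Roughly, it looks like this (the resource group and cluster names are placeholders for your own):

```shell
# One-time: enable Dev Spaces on an existing AKS cluster
az aks use-dev-spaces -g my-rg -n my-aks-cluster

# In the project folder: generate the Dockerfile, Helm chart, and azds.yaml
azds prep --public

# Build in the cluster, deploy with Helm, and stream back the endpoint and logs
azds up
```

The --public flag is what creates the shareable public URL mentioned above; omit it if you only want the private tunnel.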
So if I go to that public endpoint, which I could give to some of my colleagues, I'm hitting the Azure load balancer, which goes inside my Kubernetes cluster via the ingress controller to that URL. And inside of it I end up in my application in debug mode. So here I can... If I go into the debug view, I can see all my local variables, and I could modify this, for example. I could say, oh no, actually I need to modify it there. So I should have created a variable. But basically I can start debugging my app there. And when I hit continue, it says "hello from webfrontend" over there. So that's debugging with azds. In addition to that, azds has a few interesting features where, with the URLs it creates for you, you can create different branches, so that one developer can have their own version of a certain part of a multi-service app running at a certain URL, while the rest of the developers on the team use the same shared set of services. So it's really for teams to collaborate on applications that use lots of services in common. Now, there's another developer experience that's really cool that I wanted to show you, called Live Share. VS Code Live Share lets you collaborate with someone who has VS Code on their machine. Here, I'm able to give access to the code on my machine, a terminal, and a host on my machine to Scott, who has nothing from this application, without having him install anything. And the beauty of it is that Live Share works really well with azds, with Azure Dev Spaces, which means that he'll be able, from his machine, to debug the application we saw in my Kubernetes cluster without having access to it, just through Live Share. We're at 45 minutes, so maybe I'll just skip that demo because it's pretty long.
I'll add a link to the slide, because I gave that demo at DockerCon, so you can watch it there. It takes at least five minutes, we have just 20 minutes more, and there's more stuff I wanted to show you. But just know that the two work together. And then the third experience I wanted to show you is VS Code Remote, the extension for containers. That's something that came out very recently. It's an open source extension for VS Code. What it does is let you create some configuration for your VS Code project so that VS Code is split into two parts: the local part that has the themes, the UI, and the UI extensions, and then the server part of VS Code, with its own set of extensions, running inside a Docker container. And you can use that with your local Docker engine. What that means is that in your GitHub project, if you set up that VS Code configuration with a Dockerfile for your project, every developer who joins your team can do a git clone of the project and launch in debug mode in VS Code right away. And all of them will have the same experience. They don't need any runtime installed on their machine; they use Docker, and their code is mounted inside the Docker container. And you can customize all these containers. I'm just going to show you what that looks like. That VS Code Remote extension was announced, I think, a month and a half ago, so it's six weeks old. It came out in the regular version of VS Code two weeks ago. And in addition to Docker, it can also do SSH and WSL if you're on Windows. So let me close this up. I'm in VS Code, I go to the command palette, and I say "Remote-Containers: Open Folder in Container." It's going to show me all my folders, and I have some examples of container-enabled projects in there.
I'm going to take one where I have done the build already, the JavaScript Node 8 one. When I open it, you can see it's installing the dev container. Let's take a look at what it's doing behind the scenes. Behind the scenes, it takes the Dockerfile that I have in my project, builds a Docker image with it, and then does a docker run with my code base mounted inside that image. In that image, there's a VS Code server running. Plus it creates a launch.json with a debug configuration for that project, which means I can say I want to launch that program, and it's going to launch my... this is a Node application again. And then I can ask for a new terminal in there, and the new terminal is going to be inside the container. So here you can see I have the VS Code server running and my Node process. And if I curl localhost on port 3000, I hit the breakpoint that I had inside my code. So here I'm debugging a Node.js application inside a container, and I didn't have to do anything to set up all that logistics. I'm just going to hit continue. And yeah, that just works. So that's the Remote extension. In terms of development, there's just one more... So those were the two experiences I wanted to show you. Let me go to the right slide. Yeah, in addition to that, when you want to debug either functions or containers in a Kubernetes cluster, there are a bunch of other projects you can take a look at. Squash is one, from the team behind Gloo. There's Telepresence, which is a CNCF project. There's ksync, and Tilt, which is pretty recent. If you look at my slides from DockerCon, I have a bunch of explanations about how to use all of them. So that covers the developer experience. Now Scott is going to talk about application packaging: how do you package all these applications that have containers, functions, as well as managed cloud services? All right. Oh, yeah. Sorry.
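The Remote-Containers setup described above lives in a .devcontainer folder checked into the repo, so every git clone gets the same environment. A minimal devcontainer.json for a Node.js project might look like this (the extension ID, port, and command are illustrative examples, not from the demo project):

```json
{
  "name": "Node.js sample",
  "dockerFile": "Dockerfile",
  "appPort": [3000],
  "extensions": ["dbaeumer.vscode-eslint"],
  "postCreateCommand": "npm install"
}
```

VS Code builds the image from the referenced Dockerfile, starts the container with the workspace mounted, installs the listed extensions into the in-container VS Code server, and runs the post-create command once.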
Can you hear that? Oh, yeah. All right. So Patrick's done a really good job of showing the application developer persona. But me, myself, I'm not really much of an application developer. I'm more of an opsy type of person. So I'm going to show you CNAB, which is an open specification from both Microsoft and Docker. And also, I believe, HashiCorp and Bitnami are now working on it. What it actually does is provide an abstraction over tools that you already have. These tools could be Helm, they could be Ansible, they could be Puppet, it could be anything. Basically, it gives you a framework to deploy the applications and code that you already have in an abstracted way. This allows you to really slim down your deployment pipelines: instead of having multiple binaries and multiple things like that, you can have just a single way to deploy things. The best thing about CNAB is that the invocation image is a container, so you don't need any binaries. You just download the invocation image and away you go. Microsoft has an implementation of CNAB called Porter. What Porter gives you, and why you would want to use it, is that we have built things called mixins. Mixins are the smarts to talk to a cloud-native API. So if you want to deploy to Kubernetes, you don't need to know anything about Kubernetes except the deployment mechanism that you want; the mixin will look after the smarts of talking to the Kubernetes API for you. Same with Azure, same with Terraform, and also Helm. So if you've already got Helm charts and you want to deploy them and pass different variables through different YAMLs, because you can do that with Porter, then you don't need to know anything about Helm. There's a Helm mixin there for you. You basically just put the variables that you want into the code and away you go. So it's actually really, really simple.
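To give you an idea, a Porter bundle is described in a porter.yaml that declares the mixins it needs and maps install and uninstall actions onto them. A stripped-down sketch using the helm mixin (the bundle name, chart, and release names are placeholders, and some required fields are omitted for brevity):

```yaml
name: keda-demo
version: 0.1.0
mixins:
  - helm

install:
  - helm:
      description: "Install RabbitMQ"
      name: rabbitmq
      chart: stable/rabbitmq

uninstall:
  - helm:
      description: "Remove RabbitMQ"
      purge: true
      releases:
        - rabbitmq
```

Porter builds this into an invocation image, and you run it with something like porter install -c <credential-set>, where the credential set maps your kubeconfig into the invocation image.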
And the good thing about CNAB, whether you use our implementation or Docker App, which is Docker's implementation, is that it's not trying to replace any tools you've got. It's actually working with the tools you've got. And if you build a CNAB bundle correctly, you should be able to use Docker App or Porter; they should be totally interchangeable. The other good thing about CNAB bundles is there's work going on in the OCI to allow bundles to be pushed to an OCI registry. What this means is you can actually push your CNAB bundles up with your images, and the application scaffolding will live with the container image, which is really cool. Now, Patrick's application, although it looks like a really simple Node application, actually has lots and lots of load on it. And he wants performance, performance, performance. And if I don't give him that, then when we do our manager's review, I'll get in trouble. No, he doesn't. A strange humor, though. But basically, what we found with Kubernetes, and with cloud in general, is that provisioning a VM is slow. It doesn't matter if you're on Azure or Alibaba Cloud or any of the other clouds: if you've got a node autoscaler, it's going to be at least two minutes before the Kubernetes node is ready, if not more. So a project was started inside Microsoft that's now in the CNCF, called Virtual Kubelet. What that allows you to do is implement the interface of a container runtime and make it look like a node that's in the cloud. If you're on Azure, that will be ACI. If you're on Alibaba Cloud, I believe it's ECI. If you're on AWS, it would be Fargate. What this allows you to do is scale your pods out without having to wait for a node to spin up. It will hit the ACI API and spin up a pod straight away for you. And when you're doing a highly transactional sort of workload, this can save you time and also save you money.
Virtual Kubelet doesn't give you a full node, but it does give you enough that you can get away with a whole heap of stuff. You can see all the different features there: you get all the normal get-pod stuff. With Virtual Kubelet, the open source implementation, there are some limitations around networking back into the cluster and bits and pieces like that. If you use the implementation on Azure, which we call virtual node, we'll set up the CNI provider and a whole heap of networking to speak back to your cluster. So if there are any resources in your cluster that you need to speak back to, you'll be able to do that. Alibaba Cloud is taking it one step further with a project called Viking, which looks after DNS resolution across the cloud. At the moment we do IP across on the CNI; they do DNS. And they're also able to run a service mesh across virtual node, which is amazing. The first time I saw that was in Barcelona. But we need something to look after the scaling, and this is where KEDA comes in. When an HTTP request comes into Kubernetes, what we want to do is scale, and we want to scale to virtual node. We need something to look at the metrics coming into the cluster and then scale across to the appropriate place, which for us is virtual node, or Virtual Kubelet. KEDA looks after that. KEDA can work with a whole heap of technologies as metric sources: RabbitMQ, Prometheus, there's a whole heap of things it can work with. What it does is talk to the Horizontal Pod Autoscaler, and then to virtual node, and scale out that way. So KEDA allows you to do a lot of cool things. And it runs either standalone or in another mode that installs another component, from Microsoft's Deis Labs, sorry, it's a habit. It's called Osiris.
What Osiris does is allow a scale-to-zero effect on a pod. Because on Kubernetes, if you scale a pod to zero and you have a service endpoint, once traffic goes there it gets lost, and there's a whole issue there. Osiris is another component that sits there and allows the service to exist with no pods behind it, and then once it needs the pods, KEDA will scale them up. So this is what KEDA looks like: you can see the external trigger sources scaling pods. You've got the Kubernetes controller and all the parts of Kubernetes, and the Horizontal Pod Autoscaler, so you can see it's looking after it. KEDA doesn't actually replace the Horizontal Pod Autoscaler; it works with it. So what we're going to do now is just run a demo, because it takes a minute. Basically, we're going to use Porter, and we're going to install RabbitMQ via Helm. Then we're going to do a deployment, and I'll go through the deployments. We're going to install a publisher and a consumer, and we're going to scale it all on virtual node right now with a single command. So we're just going to run porter, if I can spell, install -c. The -c is passing the credentials file for where my kubeconfig is, so I can actually speak to Kubernetes. As you can see there, it's installing the KEDA demo. Now, I'm going to come back to this because it takes a minute for RabbitMQ to start up. But as you can see, it's already invoking Helm. So with just a single command I've invoked Helm, and I'm going to do some Kubernetes deployments. Let's have a look at what those deployments are, and we'll come back and look at the Horizontal Pod Autoscaler. Cool. So as you can see, with that CNAB bundle, what we've done is install RabbitMQ. And then what we're going to do is a deployment, and it's just a sample deployment.
But you'll notice down the bottom here that this deployment has a node toleration to only run on Virtual Kubelet. So it's not going to run on a normal node; it's only going to scale onto Virtual Kubelet. And you can see down the bottom there, the Virtual Kubelet is ACI. If that was Amazon, it would be Fargate or suchlike. Then we're going to use a custom resource definition that KEDA uses, a scaled object. And this will look after the polling intervals and everything that needs to happen within the RabbitMQ stack. And then we're just going to run the batch job, which is the publisher, so we can see things start to scale out. As you can see there, that was a whole lot of stuff I just deployed with a single command. So CNAB is pretty awesome. So let's go back. RabbitMQ hasn't started, so what I'll do is just go to my other terminals. As you can see here, we've just got an Azure AKS cluster, but you'll see that there's an extra node there that looks a little bit different. You can see that I've got a virtual node running. My cluster is running 1.14, which is the latest we can run on Azure, and the Virtual Kubelet is running there. The Virtual Kubelet is acting like a single node, but what it's actually doing is talking to the ACI API, and what it gets back it returns as the calls that Kubernetes natively expects a node to have. So let's go back. So, Virtual Kubelet. We've just deployed, as you can see there, RabbitMQ. Now we're deploying the consumer. So if we go over... I killed that. Let's have a look. That's going to get the horizontal pod autoscaler. And if you look here, what we can see is a whole heap of pods terminating. That's okay, because they're actually running serverless functions. But you can see there, on the node, that it's only deploying on Virtual Kubelet.
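Putting those two pieces together, the consumer deployment carries a node selector and toleration for the virtual node, and a KEDA ScaledObject points at the RabbitMQ queue. A rough sketch of the relevant fragments, using the KEDA v1-era API (names, queue settings, and replica counts are placeholders):

```yaml
# Deployment fragment: only schedule onto the Virtual Kubelet node
spec:
  template:
    spec:
      nodeSelector:
        type: virtual-kubelet
      tolerations:
        - key: virtual-kubelet.io/provider
          operator: Exists
---
# KEDA ScaledObject watching a RabbitMQ queue
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-consumer
spec:
  scaleTargetRef:
    deploymentName: rabbitmq-consumer
  pollingInterval: 5       # seconds between queue checks
  minReplicaCount: 0       # scale to zero when the queue is empty
  maxReplicaCount: 30
  triggers:
    - type: rabbitmq
      metadata:
        queueName: hello
        queueLength: "5"   # target messages per replica
```

With minReplicaCount at zero, the consumer disappears entirely when there's no traffic, which is exactly the scale-to-zero behavior shown in the demo.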
So none of the scaling event that Patrick needed is actually happening on my AKS cluster. The performance impact on all the other applications I've got running there is nothing, because I'm offloading all this extra load that's just happened onto Virtual Kubelet. Which is awesome. And then, if I go back to my scaling, you can see now that I'm starting to get scaling events happening. You can see I had zero replicas of the consumer, and that's because I had no traffic. And as we start to deploy pods, you can see the replicas going up depending on the traffic. So before, I had HTTP scaled to zero. Now I've got more traffic: I've got one, I've got four coming up now. So as you can see, with that single command through CNAB, I've deployed a whole infrastructure looking after a whole scaling event and scaling mechanism, which is quite complex. So let's go back to the slides. The good thing about this particular demo as well is that I've open sourced it. If you want the code and you want to run this yourself, there it is. It's all there. All you need is an Azure account, because it does need virtual node. If you know Virtual Kubelet well enough and you can get Virtual Kubelet working on another cloud provider, just take the CNAB bundles and you'll be able to deploy them there. And then there are all these other things that Patrick spoke about; that's where you can get all the different pieces. And of course, down the bottom, if you want to work for Microsoft, especially in China, we're looking for cloud developer advocates to join mine and Patrick's team. So if you're looking for a new job, you love containers, and you're in China, come and see us after the talk. Any questions? Okay. Okay. If there are no questions, we'll be around outside for questions after this talk. Just a summary of what we presented here: the goal is, when you're developing an application that has containers, functions, and cloud services.
There are several extensions in VS Code and in Azure that let you develop that multi-technology application. CNAB is a specification, plus some implementations, that lets you deploy these applications. And then KEDA is a pretty good way of scaling these applications, and an open source project that runs on any Kubernetes cluster. So CNAB, KEDA, and Virtual Kubelet are not Azure specific; Virtual Kubelet is actually in the CNCF. But yeah, you can run them all on Azure as well. And just for bonus, those demos were actually run on WSL 2, running a custom kernel with some extra support and other stuff that I built on Friday, and I'm glad it held up for my demos. But thank you very much for coming out to the talk. Thank you. Thank you.