All right, I'm gonna go ahead and get started. Thank you for coming out. My name is Neil Peterson. I work with Microsoft in Azure. This is not a talk about Azure. This is a talk about deployment in the age of cloud-distributed applications. I've got just 35 minutes and an absolute ton of information to go through, so a lot of it's gonna be at the surface level. I do have a bunch of demos as well. Now, I did pre-record all these demos, but everything that I'm gonna walk through, I've got available at this GitHub repo here, if you want to take a quick snap, and all of these slides have been uploaded to the schedule builder as well. So here's the things that we're gonna take a look at: software deployment through the years, particularly throughout my years. My career spans about 20 years, and I've primarily been doing software deployment for most of those years. We'll then look at what's changed: why are we talking about this in a different context today? And then we're gonna look at three options for deploying software. But these aren't the only three; these are just the three that I decided to show off here. Really, the goal is to talk about the challenges, what's changed, and some different approaches to deploying software and everything that software needs to run. It just so happens that these three hit different pros and cons, but there are others out there as well. So that's the quick agenda. Again, there's a link to all the demonstrations that we're gonna see here. So, talking about software deployment. I took this definition off of Wikipedia, which defines software deployment as all of the activities that make a software system available for use. Like I said, I've been deploying software or doing deployment-related activities for about 20 years, and when I started, honestly, bullet point three here is where I started.
I basically built my whole career on software updates. Early on, I worked primarily in Windows-based data centers, so Patch Tuesday was kind of what I did for a living. But then I expanded from there and did a lot of operating system deployment, and then applications on top of that, whether client-based applications or applications to data centers. However, as I grew into the cloud age, things changed, and I found my activities not only encompassed applications, operating systems, and software updates; now it was up to me as the deployment engineer to think about things like deploying infrastructure, making sure at deployment time that the right access controls were in place to secure that infrastructure and those applications, and then finally deploying policies. And I'm sure there are some other things in here as well, but defining policies to make sure that that deployment stays in the state it needs to stay in. So that's a real quick look at deployment through the ages and how things have changed. But now let's take a look at a modern application. This is my canonical example, the one I use when I want to describe a modern application that runs in Kubernetes and runs on cloud services. This is an actual application; in fact, in one of the demos, we're going to deploy it. If the application interests you, again, there's a link to a GitHub repo where this application lives. Really quickly running through what it does: on this side right here, we've got a Kubernetes cluster. On that side over there, we have not a Kubernetes cluster; we've got a cloud provider, or it could really be anything. What I'm trying to get across here is that this is not just containers or pods inside of a Kubernetes cluster. There's a bunch of things going on here. So with this application, I've got a pod.
It's basically pulling the body of tweets off of Twitter, so the text, and storing that in a message queue, a message queue living inside of my cloud provider. I then have another pod that pulls those tweets off of that queue, takes the body, and evaluates it against a text sentiment analysis API. Again, this is a service that you can subscribe to in any one of your cloud providers, and it basically returns a sentiment score: was this tweet positive in nature, or was it negative in nature? It then takes those results and stores them in a database; it doesn't really matter what that database is. And then finally, I've got a pod that visualizes those results. So a very different type of application than the applications I was deploying 20 years ago. Back then, I had an executable or an MSI or some sort of installer, one thing, and I would install that on a server and boom, there's my application. Or I would deploy it using a deployment solution, but still I had that one installer and boom, there's my application. Things are very different now. In fact, when I first built this application, I went and manually created this message queue and manually created this data store, and then I had to do things like take the connection strings, bring those over into Kubernetes, and start my pods with those connection strings as environment variables. And that was very clunky. I mean, at one point it took me a couple hours. And then I thought I was sophisticated and I wrote a bash script to do it. But what I want to show here is that we've got other options that are quite a bit more sophisticated. Now you might be asking yourself, but wait, what about CI/CD? Why do I need something special if I've got deployment pipelines in my environment? And that's a very valid question.
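To make the shape of that pipeline concrete, here's a tiny self-contained sketch in Python. The queue, the sentiment API, and the database are all stand-ins (a deque, a toy word-list scorer, and a plain list); in the real application those are a cloud message queue, a hosted text analytics API, and a managed database, which is exactly why the deployment spans more than just pods.

```python
# Minimal sketch of the tweet-sentiment pipeline described in the talk.
# All external services are replaced with in-memory stand-ins.
from collections import deque

def ingest(tweets, queue):
    """First pod: push raw tweet text onto the message queue."""
    for text in tweets:
        queue.append(text)

def score(text):
    """Stand-in for the hosted sentiment API: fraction of 'positive' words."""
    positive = {"love", "great", "happy"}
    words = text.lower().split()
    return sum(w in positive for w in words) / max(len(words), 1)

def process(queue, database):
    """Second pod: drain the queue, score each tweet, store the result."""
    while queue:
        text = queue.popleft()
        database.append({"text": text, "sentiment": score(text)})

queue, database = deque(), []
ingest(["I love KubeCon", "traffic was bad"], queue)
process(queue, database)
```

A third "pod" would then read `database` and render the visualization; the point is that each stage talks to a service that has to be provisioned and wired up with credentials before any of this runs.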
If I've got one instance of this application and I want to deploy it to QA and test and then finally to production, then maybe this talk is not relevant. However, if I've got a thousand instances of this, or if this is something where somebody can open up their phone and maybe they don't even know they're deploying an instance of this (maybe they're just signing up for a service, and every time they sign up, an instance of this application is deployed), that's a very different situation than something I might use a CI/CD pipeline for in the "I only have one instance of this application" scenario. So this is the application; like I said, we'll look at a deployment of it a little later on. But let's examine how this changes things. We've already seen that a single application can be built and deployed across a diverse technical stack. Obviously, we've got Kubernetes, we've got cloud services, maybe we've got on-prem stuff. And this is pretty cool: as an author or developer of that application, things get a lot easier. Take that sentiment analysis piece; I didn't have to go build sentiment analysis, I could just subscribe to a service and get it out of the box. As an application developer, this modern world is awesome. But as a deployment engineer, an application like that brings some new challenges. Multiple different deployment routines: I already talked about it; when I first built that application, I was manually deploying stuff in my cloud provider and then manually deploying stuff in my Kubernetes environment, two totally different things. And then multiple different management tools.
So with that first instance, if I wanted to go look at this thing, I had to poke through my cloud provider to see half of the application and poke through my Kubernetes cluster to see the other half. So kind of a disjointed experience there. Those are the simple ones. It's these next two that particularly excite me. The first one is secrets management. Think about it: as I deployed those cloud services, I've got connection strings and keys that I need to get over to the other pieces in Kubernetes. Actually, let me back up. For instance, I've got a connection string and a key to access this data store. Well, this pod needs that information to put stuff there, and this pod needs that information to get stuff out of it. That's what I mean by secrets management. And then finally, instance management. We've talked about deploying applications, but what about deprovisioning those same applications? Let's say I have a thousand instances of that Twitter sentiment application and I need to go deprovision one. Let's say it's a service that I sell, and somebody no longer wants to pay me for that service, so I need to deprovision one instance of that application. I need to make sure I get the right database and the right sentiment analysis service and the right pod. So how do I logically tie all of these things together so that I can deprovision them with a single click, or whatever it may be? That's what I mean when I reference instance management: managing a large pool of instances of a single application. So these are some of the challenges. Next we're going to look at three solutions that may or may not be the right solutions for everyone, but three solutions that will help us approach all these challenges with application deployments. Like I said, this is not the whole list. And also, just to be very transparent, this is pretty cutting-edge, new stuff.
A lot of what we're going to see is not exactly what I would consider production ready. This is more about getting our heads around this new world of distributed applications and looking at some emerging technology that's going to help us there. So the first one is Terraform. Is anybody using Terraform? A couple hands. Unfortunately I don't have enough time to go deep into Terraform, or even really just scratch the surface of it, so I'll talk about it from the perspective of what it has to offer here. And honestly, I didn't even think about Terraform when I first started down this path. I actually looked first at the remaining two that we're going to see, and then it occurred to me that Terraform is actually relatively mature. I just made a statement about things not being production ready, but of everything that we're going to see, this is the most mature solution. That said, it also has its own challenges. Basically, a lot of people think of Terraform as an infrastructure-as-code solution. It's an open source project that codifies APIs, allowing us to deploy infrastructure into cloud providers or whatever it may be. It's very declarative in nature: I need a VM, I need a container instance with these configurations. You can deploy it, and then you can destroy it, using the Terraform open source project. Terraform 0.12.0 was released just a couple weeks ago, and it's actually a pretty big upgrade. Terraform works on providers: you've got the Terraform runtime and then a provider, and there's a bunch of providers. There are providers for all your major cloud providers, there's a provider for Kubernetes, and you can extend Terraform by building your own providers. So one of the brilliant pieces of Terraform that allows us to do this instance management, deploying a bunch of instances of the same thing, is something called Terraform state and Terraform workspaces.
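To make the declarative idea concrete, here is a minimal sketch of what a Terraform configuration looks like. The resource names, image, and sizes are illustrative, not from the talk, and the exact arguments vary by provider version:

```hcl
# Declare the provider (plugin) Terraform uses to reach the cloud API.
provider "azurerm" {}

# Declare the desired state: a resource group plus a container instance.
resource "azurerm_resource_group" "example" {
  name     = "demo-rg"      # placeholder name
  location = "eastus"
}

resource "azurerm_container_group" "example" {
  name                = "demo-aci"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  os_type             = "Linux"

  container {
    name   = "app"
    image  = "nginx:latest"
    cpu    = "0.5"
    memory = "1.0"
  }
}
```

`terraform apply` converges real infrastructure toward this description, and `terraform destroy` tears it back down; you describe what you need rather than scripting how to create it.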
So every time I deploy infrastructure, and potentially my applications, with Terraform, something called a state file is created. That state file is basically the record of what was deployed. I'll call out right now: it's actually a clear-text representation of that configuration. I call out the clear-text piece because you've got to be careful with state files; you can end up with secrets and sensitive stuff inside of your state file. We'll see a way to secure that somewhat. So the state file is the record of what's been deployed. A workspace, the second piece here: think of it as a virtualized namespace in which I can deploy a Terraform configuration. It works very similarly to namespaces in Kubernetes. So I can have a single Terraform configuration, deploy it to workspace 1, deploy it to workspace 2, deploy it to workspace 3, and then I can manage those workspaces, meaning I can delete one of them and it's going to delete everything in that configuration for that instance. And that is the instance management piece. For all three of these, I'm going to give my impressions, having spent about the last six months really digging in on this. I'm not trying to make these compete against each other, or tell you which one is good or bad, but they've all got pros and cons, and this is where I landed with Terraform. Instance management: we talked about the state file and workspaces. Secrets management: it's really kind of built into Terraform. Terraform has this thing called interpolation expressions, which allows us to deploy the database and then take the connection string and use it later on in the configuration file. But it's these last two that I really want to focus on, the two that really helped me decide what I want to use: maturity and community involvement.
And this is where Terraform just destroys everything else we're going to look at, in a good way. Terraform is a relatively mature product. It's only been around for about five years, but it's going gangbusters; a lot of people are using it. And the extensibility, I would give it a medium. You can write your own providers, or in some cases, like the Azure provider, if it doesn't have the capability you need, you can write that capability. It's an open source project, so you can get that in if that's something you're willing to do. So with that said, let's take a look at a demo. Like I said, these are all pre-recorded, and I do apologize, I know this is hard to see; I pre-recorded it and can't really zoom in. But I'll talk through what's going on. This is a Terraform configuration. It doesn't really matter what it's going to deploy; there's just a bunch of stuff in here that we're going to deploy. But one thing I want to call out is right here inside of the configuration, and I'll pause on it in just a moment: you can see that I'm using an interpolation expression, terraform.workspace. What that's going to do is take the workspace name, and we'll see how to create named workspaces, and use it as a value in my deployment. So in this case, I'm going to create a resource group in Azure, which is just a logical container for my resources, and I'm going to give it an identical name to the workspace that I'm working in. That's how I can logically keep this stuff sorted. And then lastly, this configuration up here is a backend storage configuration, which takes that state file and stores it in Azure. I made the call-out that you've got clear text in there; this is one way to kind of secure it. All right, so now I've run the command terraform workspace list, which showed I had two workspaces.
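The relevant parts of a configuration like the one in the demo look roughly like this (the storage account and container names are placeholders, not the ones from the recording):

```hcl
# Store the state file in an Azure storage account instead of on disk,
# which keeps the clear-text state out of the local filesystem.
terraform {
  backend "azurerm" {
    storage_account_name = "mystatestore"   # placeholder name
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}

# Name the resource group after the current workspace, so each
# workspace (instance) gets its own logical container in Azure.
resource "azurerm_resource_group" "instance" {
  name     = terraform.workspace
  location = "eastus"
}
```

With a remote backend like this, each workspace gets its own state blob derived from the `key` and the workspace name, which is why the demo shows a separate state file per environment.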
I had default, and then I created a new one called test environment. I just ran terraform plan and then terraform apply, which is going to deploy an instance of this configuration. Again, it's just a simple application here. So if I flip over to Azure and refresh, we'll see that I've got a resource group that matches the name of the test workspace. And if I go into the storage account, we can see I've got a state file that also matches the name: terraform.tfstate, test environment, and that's the name of my workspace. So I've created an instance of this configuration, I can see that instance, and the name matches the workspace that I've created. The next thing I'm going to do, and this goes pretty quickly because I didn't want to waste time watching everything, is create a new workspace called production environment. I just ran terraform workspace list, so we can see I've got a default workspace, which we always have; test environment, which is the first workspace I created; and production environment, which is the workspace I'm working in now. I'll go ahead and terraform plan, terraform apply, which deploys an instance of the application. We can now see that I've got two state files in my state store. If I look at my resource groups, we can see that I have two resource groups, test and production. So now I've got two instances of the application, and now I can start to destroy them. What I've done here is run terraform workspace select against the test environment; I want to delete this test one, so I've selected the workspace. I then did terraform workspace list, and we can now see via that asterisk that I'm working in the context of the test workspace. I'm going to run terraform destroy, which is then going to go and destroy just that instance of the application. So a pretty simple configuration there, using Terraform for this stuff. I found the workspace stuff a little clunky.
You can't see the workspace inside of the code; you've got to use the Terraform commands to do it. But it's definitely an option. So the next one I want to talk about is Kubernetes Service Catalog. Has anybody seen or worked with Kubernetes Service Catalog? A couple more hands, so cool. This is the one where I'll spend some time and go a little deeper, since this is KubeCon. Before we talk about Kubernetes Service Catalog, I want to talk about Open Service Broker, because it's a part of this, and if I do it in the other order, it doesn't make a whole lot of sense. There's going to be a lot of information here; I've got some diagrams to pull it all together. Open Service Broker is an API specification for a standard cloud provider interface, meaning if you write an API that looks like this, you can put it in front of AWS or Azure, and then you can write scripts and code that interface with that API to provision things in that cloud provider. It basically specifies five operations: provision, bind, unbind, deprovision, and update. So that's Open Service Broker; just keep that in mind, and we'll get back to it. Kubernetes Service Catalog is an API extension for Kubernetes that enlightens Kubernetes so that it knows how to talk to an Open Service Broker compliant API. Out of the box, Kubernetes cannot interface with an Open Service Broker API; Service Catalog allows it to do so. Basically, Service Catalog adds five types to Kubernetes. Just like a pod or a deployment is a type, when you install Service Catalog we get these five new types: cluster service broker, service class, service plan, service instance, and service binding. We'll see a bunch of these in the demonstration. I actually usually do about an hour talk on Service Catalog.
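For reference, those five operations map onto REST endpoints in the Open Service Broker API specification, roughly like this (paths abbreviated; the spec defines the request and response bodies):

```
GET    /v2/catalog                                  list available services and plans
PUT    /v2/service_instances/{id}                   provision
PATCH  /v2/service_instances/{id}                   update
DELETE /v2/service_instances/{id}                   deprovision
PUT    /v2/service_instances/{id}/service_bindings/{binding_id}    bind
DELETE /v2/service_instances/{id}/service_bindings/{binding_id}    unbind
```

Any platform that can speak this handful of endpoints, whether Kubernetes, Cloud Foundry, or a plain script, can drive any compliant broker.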
I don't have enough time to fit all that in here, so I'm not going to define all of those, but we'll see them once we get into the demonstration. So let's see how this looks. Right here, Kubernetes with Service Catalog installed. We then have a service broker; in the case of the demonstration we're going to see today, it's called Open Service Broker for Azure. It's just an Open Service Broker compliant API for interfacing with Azure, and that service broker can interface with Azure. Now the cool thing here is that I can have multiple service brokers fronting a single Kubernetes cluster; we can see here I now have a service broker for AWS. And just to prove out how valuable these open service brokers are: I don't actually need Kubernetes here. Once I have a service broker that can interface with these cloud providers, I can exchange Kubernetes for something like Cloud Foundry. So that's how Service Catalog and the open service brokers line up with the cloud providers. I think that's it on that. So let's look even deeper, at a Kubernetes API level, at how this all works. Installing Service Catalog, you can install it right into the cluster using a Helm chart, and installing a service broker, you can also install it right into the cluster using a Helm chart. This dotted line right here represents a Kubernetes cluster. So I've got Kubernetes, I've installed Service Catalog, and here are these five new types inside of my cluster. I've installed my service broker; again, these are just Helm charts that I can install inside the cluster. Now I'll actually describe some of these types. The service class represents a service inside of my cloud provider, maybe a storage account. And the plan is kind of like the SKU; a lot of these cloud providers will have something like P0 for a small instance and P5 for a large instance. That's what the plan is.
The service instance is a Kubernetes type that represents an instance of one of these cloud services. And the binding: what the binding does is reach out to the service that we've created, get those secrets, those connection strings and keys, and store them in Kubernetes as a Kubernetes secret so that my pods can access those secrets and then access the service instances. So this is how it works. Once I have everything installed, I create a service instance inside of Kubernetes, which provisions some piece of infrastructure inside of my cloud provider. I then create a service binding, which gets the keys and whatever I need to connect to that service instance and stores them inside of Kubernetes as a Kubernetes secret. I can then start my pods, which consume those secrets, and now my pods know how to interface with those services I've created in the cloud provider and can do whatever they need to do. So that is Service Catalog and open service brokers in a nutshell. Here are my impressions. Instance management: one thing I didn't call out is that I just threw a bunch of stuff out there, service instances, service classes. Well, when you write this stuff, you're just writing a Kubernetes manifest file. We're extending Kubernetes with these five new types, and we can access them with all the same calls we use for things like pods. So I can do kubectl get serviceinstance, whatever. And when I write the manifest file to deploy this stuff, it looks just like a Kubernetes manifest file. It is a Kubernetes manifest file; we just have new types to work with. Because of that, I can do things like write Helm charts. Let's go back to the first couple of slides: I have that Twitter application, and I actually have a Helm chart that deploys all of that, not just the pods, but also the cloud services.
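The instance-then-binding flow above, written as manifests, looks roughly like this. The class and plan names match the ones mentioned later in the demo, but the metadata names are illustrative, and exact fields depend on the Service Catalog version and broker:

```yaml
# Provision a service in the cloud provider via the broker.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-storage
spec:
  clusterServiceClassExternalName: azure-storage   # class from the broker's catalog
  clusterServicePlanExternalName: all-in-one       # plan, i.e. the SKU
  parameters:
    location: eastus
---
# Fetch the credentials for that instance and store them as a secret.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-storage-binding
spec:
  instanceRef:
    name: example-storage
  secretName: example-storage-secret   # the Kubernetes secret to create
```

Because these are ordinary Kubernetes resources, they can be templated in a Helm chart right alongside the pods that consume them.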
And in that process, all the secrets from those cloud services are stored in the Kubernetes cluster, and my pods can consume them. Because of that, I can do a helm install, helm install, helm install, and now I've got three instances of it. Then I can do a helm delete and delete one specific instance. Secrets management: we're using Kubernetes secrets for that. Maturity and community: I would call it kind of medium. A lot of organizations are building open service brokers and putting some emphasis on using these brokers from Kubernetes, but I would still call it a relatively immature project. And then the flexibility and extensibility: it's all open source, so you can definitely write your own service brokers or contribute to them, but you still have to go through that pull request process. All right, how am I doing on time? Not great. So let's see an example here, and we're going to look at quite a few things, because this is KubeCon; we'll go deep. The first thing I want to do is look at some of the infrastructure stuff. I said that we install all of this with Helm charts. So I'm just doing kubectl get pods across all namespaces, and you can see I've got two pods that are the catalog and two pods that are the service broker, in this case Open Service Broker for Azure. I've just installed this stuff using Helm charts. I said that it was an API extension, so I've just run kubectl get apiservice, and we can see in here that I've extended the Kubernetes API to include Service Catalog. The next thing I'm going to do is run kubectl get clusterservicebroker. This is just going to return a list of service brokers installed in my cluster; this is Open Service Broker for Azure. And I can describe that service broker, and we can actually see an event here that it successfully fetched catalog entries for the broker.
That's the infrastructure piece, the set-it-up-once type of stuff. But now let's look at classes, plans, and how to deploy stuff with Service Catalog. I'm going to run kubectl get clusterserviceclasses, and here we've actually got a catalog, now inside of Kubernetes, of services that I can deploy in Azure. We can see things like Redis Cache, App Insights, SQL. So now I can see this list of things I can deploy directly from Kubernetes. kubectl get clusterserviceplans: if you recall, the plan was kind of the SKU or the size of that service. And notice here, it's a little hard to read, because I can see things like basic and free, but I don't have a relationship between this plan and the class itself. So there are some rough edges here, definitely. We've got this tool called svcat that's attempting to address some of those rough edges, but it's definitely not a necessary tool. Just to show it off: when we run svcat get plans, you can actually see that Azure SQL is the class and then I've got a plan of basic, or Azure SQL premium. So I can browse that stuff using the svcat tool. But again, we're just looking at the stuff that's loaded here for us. Let's actually deploy something. The first thing I'm going to do is create a service instance; I want to deploy that thing into my cloud provider. So I'm running kubectl create and specifying a YAML file. I'm starting that right now because it does take a couple seconds to run. Now let's take a look at the YAML file. What you can see here is I've got kind: ServiceInstance, some metadata like the namespace I want to deploy it into, the class name, azure-storage, the plan name, all-in-one, and then some parameters that are needed, like what Azure location I want to deploy to and what resource group I want to deploy into.
But if you'll notice, it's just a Kubernetes manifest file. I can run kubectl get serviceinstance to see the state of that, and we can actually see it provisioning. So this thing is being provisioned, but I'm seeing that from Kubernetes. If I jump over to Azure, we can actually see my resource group, and there's my storage account. Now I'm going to create that service binding. The instance is great, but I can't access it from my pods unless I've got those connection strings. So again, we've got the manifest file, kind: ServiceBinding. I'm giving it a name, what instance I want to bind to, and then the name of the secret that I want to create. I can return information about the service binding, and we can see that it's ready. I can then run kubectl get secrets and see that my secret has been created. And then finally, I'm going to start a pod that's going to consume that secret, set the values of that secret as environment variables, and then do something. Now, I'm not doing the Twitter sentiment app here; I've just got a very simple little application that writes a file to the storage account that was created. But you can see it's just a standard Kubernetes manifest file: I'm creating a container and then setting these environment variables, and I'm getting the values for those environment variables from the secret that was created by the service binding. I did this in three files here, but I could have one single manifest file, or, like we've already discussed, I could create a Helm chart to do it. I've got some demonstrations of Helm here, but I'm actually going to skip those for the sake of time. But you can see that we've actually created this file in my storage account, and it was the pod that did that.
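A pod manifest along those lines, consuming the binding's secret as environment variables, might look like this. The image, secret name, and key names are illustrative; the actual keys depend on what the broker writes into the secret:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storage-writer
spec:
  containers:
    - name: app
      image: myregistry/storage-writer:latest   # placeholder image
      env:
        # Pull connection details out of the secret the binding created,
        # so the pod never has credentials baked into its manifest.
        - name: STORAGE_ACCOUNT_NAME
          valueFrom:
            secretKeyRef:
              name: example-storage-secret
              key: storageAccountName
        - name: STORAGE_ACCOUNT_KEY
          valueFrom:
            secretKeyRef:
              name: example-storage-secret
              key: accessKey
```

This is the piece that replaces the manual copy-the-connection-string step from the original hand-built deployment.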
So the service instance was created, the binding was created, the pod started up, consumed the secrets, and created that file. I'm going to skip the Helm demo. All the Helm demo is, is that I used Helm to deploy a couple more instances of the same thing, and then ran helm delete to delete an instance, which actually deleted the storage account and everything related to it. So we've talked about Terraform, we've talked about Service Catalog; the next thing I want to talk about is Cloud Native Application Bundles, and this thing is brand spanking new. Has anybody used Cloud Native Application Bundles or read about them? We've got three minutes, so I'm not going to be able to get to the demo, but I'll talk through this. What Cloud Native Application Bundles are is a package format specification for bundling, installing, and managing cloud-distributed applications. Really think about this as like a makefile for cloud-distributed applications, but it's even beyond that. Using CNAB, we can package up everything we need to install our application, maybe a Helm chart, maybe some bash scripts, in an OCI-compliant format that can then be stored in a container registry and retrieved at deployment time using command line tooling. We can also sign our Cloud Native Application Bundles. There are a couple different components of a CNAB, or Cloud Native Application Bundle. We've got the bundle metadata, which is just a file that declares all the things that need to happen: run helm install, maybe after I've deployed my application run some scripts, and whatever else may need to happen to install your application. We then have an invocation image, which is just a container image that contains all the tools needed to run that bundle. And then we've got a bundle runtime, which is just the entry point to run that application.
Now CNAB is just the spec for the format, kind of like a container image spec; it's just the spec for the CNAB bundle itself. We then have several different implementations that are the actual tools we would use to create and deploy our CNAB bundles. The first one is called Duffle, and these are the three that I know of; I'm sure there are others being built. With Duffle, the bundle runtime is basically hand authored. We've got this thing called Porter, and I am short on time here, but I would recommend, if you're interested in looking at CNAB, start with Porter. Porter handles a lot of stuff for you, like creating the bundle, writing the spec, and then managing the deployment of those bundles to container registries. And then docker-app is probably the most mature implementation of CNAB. I've not used it myself, but this is a Docker-based implementation of CNAB. I do apologize that I wasn't able to get into the demo; it would have clarified a lot of the rough edges around CNAB, but definitely check it out. It handles the challenges we've talked about here. And one of the great things about CNAB is that the flexibility is through the roof. It's super duper flexible, where a lot of the other solutions we talked about had limitations; if they couldn't do something I needed, I had to make a pull request. CNAB helps us there by just being super duper flexible. That's all the time I have. Thanks a lot for coming out. I hope it was helpful.
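To make the "bundle metadata" idea slightly more concrete, here is a rough sketch of what a Porter manifest can look like. Every name and path here is hypothetical, not from the talk, and Porter's manifest syntax has changed across releases, so treat this as an illustration of the shape rather than a working file:

```yaml
name: twitter-sentiment          # hypothetical bundle name
version: 0.1.0

mixins:
  - exec
  - helm

install:
  - helm:
      description: "Install the application chart (pods plus service catalog resources)"
      name: twitter-sentiment
      chart: ./charts/twitter-sentiment   # placeholder path
  - exec:
      description: "Run any post-install wiring scripts"
      command: bash
      arguments:
        - ./scripts/post-install.sh       # placeholder path

uninstall:
  - helm:
      description: "Delete this instance of the application"
      releases:
        - twitter-sentiment
```

The point of the format is that the Helm chart, the scripts, and the tools to run them all travel together in one signed, registry-hosted artifact, so installing or deprovisioning an instance is a single bundle operation.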