Okay, we're going to get started. Hello everyone, happy Friday, and welcome to today's CNCF webinar, Developer-Friendly Platforms with Kubernetes and Infrastructure as Code. My name is Jerry Fallon and I will be moderating today's webinar. We would like to welcome our presenter today, Lee Briggs, staff software engineer at Pulumi. Just a few housekeeping items before we get started. During the webinar you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that is in violation of the code of conduct, and please be respectful of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. And with that, I will hand it over to Lee for today's presentation.

Thank you very much. As mentioned, I'm going to be talking today about developer-friendly platforms, and I'm going to be introducing Pulumi. But before we do that, let's quickly do an introduction of myself. As of last week, since this webinar was set up, my new title at Pulumi is actually Community Engineer, and my responsibility is to engage with people like yourself from the CNCF community to talk about Pulumi and build content and use cases for how you might use Pulumi in the world. I occasionally write on my blog, and I have a GitHub account where you can find many of the examples that I will show you today. If you would like to ask me any questions afterwards, please feel free to follow me on Twitter; the handle is @briggsl. And if you want to hear more about Pulumi, I'm always happy to talk to people about where they see Pulumi fitting into their use cases. So what are we going to cover today?
So there are three main topics I'm going to talk about, and some of these may be familiar to you and some may not. The first thing I'm going to talk about is what a platform is, why you might like to build one, and why Kubernetes fits into that equation. Secondly, I'm going to introduce the ideas of infrastructure as code, which you may already be familiar with, but I'm going to talk a little bit about what problems it solves and some of the issues with the existing infrastructure as code tooling that is currently available in the market. And then finally, I'm going to talk about what it really means to be developer friendly. The context that I'm going to use here is my attempts, in previous roles as an infrastructure-focused individual, to help developers adopt an infrastructure as code mentality, some of the challenges that I faced during those attempts, and where I believe Pulumi helps, which was one of the driving, motivating factors for me joining Pulumi as a company six months ago. As mentioned, I was previously working as a staff software engineer on the actual Pulumi core product, and now I'm really happy to evangelize what I believe is a game-changing and industry-revolutionizing piece of technology.

So let's talk about infrastructure platforms. We're going to talk a little bit about infrastructure as a service and platform as a service, and I apologize for my low-quality image here; I really wanted to give anybody who wasn't familiar with the terms an idea of what the trade-offs are. If you're using a cloud provider like Amazon Web Services, Google Cloud, or Microsoft Azure, and you're using the core constructs from those cloud providers, you are generally using an infrastructure as a service platform.
Those infrastructure as a service platforms can often provide elements of a platform as a service offering, but they also provide you the building blocks that allow you to build your own platform as a service. And the question that I think you need to ask yourself as you go into a cloud native mindset is which of these things is right for you. If you want to go down the infrastructure as a service path, which in my experience has been really useful for larger and middle-tier organizations, the engineering cost is generally very high, but so is the control you have over those infrastructure as a service offerings: you can make the decisions that really make sense for your organization or your business, and it allows you to be flexible. The caveat is that you need to invest time and engineering resources, both from an infrastructure engineer perspective and a software developer perspective; there needs to be time in your organization's sprints and planning to actually invest in building out those services to make it easy for you to ship software. If you go down the path of a platform as a service and some of the managed offerings that are available, this reduces the engineering cost, but it also reduces the amount of control that you have. These platforms are often very opinionated, and they have a specific way in which they would like you to do things. The lock-in and the monetary cost can often be high; you have a trade-off between monetary cost and engineering cost. And what we mean by lock-in is the idea that moving from one of those platform as a service offerings to another can often increase the amount of engineering cost.
And so, you know, an early-stage consideration when going down a cloud native path is that you have to think about which of these is right for you and make a trade-off and a decision about what you would like to do. But what is often happening in the industry right now is that organizations are actually trying to get the best of both worlds. As I mentioned before, the public platforms that are available are often very opinionated, and you have to follow the ideas and the paths that are set out for you. But if you take some of the cloud native services and put them into your infrastructure as a service offering, you can actually build something that is relatively close to a platform as a service, which follows your business logic and suits the needs that you have as an organization. Again, this does involve some engineering cost from your perspective; you do need to actually invest time into building this. But eventually, as that platform begins to materialize and start to work in the way that your business needs it to, you get all of the benefits of a platform as a service, one of which is increased velocity: an increased ability for you to ship features, and increased productivity for your development teams and the people who are using these platforms. And that's obviously a really business-critical thing in this kind of modern world. I've brought in a quote here from Joe Beda, one of the founders of Kubernetes: Kubernetes has almost positioned itself as a platform for building platforms, a tool that you use to build a PaaS. And there's a really great tweet thread from Kelsey Hightower yesterday, which I didn't have a chance to put into the slides, where he talked about the idea that Kubernetes has intentionally chosen not to make opinionated decisions, but to give you the APIs and the accessibility for you to make those decisions easily.
And if you're familiar with Kubernetes, you may be familiar with the idea of custom resources, and of Kubernetes controllers and operators, which can really help with this. That said, as you may be aware, adopting Kubernetes is an investment at the organization level, and it requires you to invest time and engineering effort that you otherwise might not have to if you simply choose to use a standard platform as a service offering.

In that section we talked a little bit about platforms in general, but where does infrastructure as code fit into the equation? It's going to seem like I'm talking about two completely separate ideas here as I introduce this infrastructure as code terminology, but as we'll see later on in the slides, there's going to be a very obvious moment where these two things start to intersect. So I've taken a definition of infrastructure as code that I saw, tweaked it a little bit, and I'll read it out: infrastructure as code is the practice of managing and provisioning computer systems and services through machine-readable definitions rather than interactive configuration tools. To expand on that just a little bit: if you are using your cloud provider's web UI, the console that you get via the web interface, you are almost certainly not following infrastructure as code practices. That is a perfectly valid way for you to get off the ground with the cloud providers, but as the number of people operating in your cloud environment increases, the viability of those consoles, those GUI-driven mechanisms, reduces massively; you really do have to find a better way of managing this stuff. And what is the reason for that? One of the things that often comes up is auditing.
If anybody on here has never used a cloud provider, you may be wondering what auditing means. The general idea, and this is something that I've seen in my career all the time, is that something will just randomly turn up in your cloud environment when somebody starts to create stuff in the console, and you have no idea where it came from, no idea what it does, and no idea what its responsibility is. There are obviously mechanisms in the console to stop you from doing that, and mechanisms for you to go back and see the history of when something was created and by whom. But if you have a centralized infrastructure as code repository, or a workflow that follows infrastructure as code methodologies, that auditing becomes very easy: you just look in the Git history, or the version control history, of who made these changes, and it becomes immediately obvious who did it. The second thing I'm going to bring up is the idea of peer review. I think everybody who's worked in some kind of software engineering or infrastructure engineering capacity has probably heard horror stories about somebody deleting the wrong thing, or something getting removed by accident and things going horribly wrong. Peer review is designed to stop these incidents, but it's not always possible for somebody to be leaning over your shoulder, especially in this modern world, to verify what you're about to do, and it also doesn't scale very well. It's simply not possible for someone to peer review things that happened in the console.
With an infrastructure as code methodology, you are able to follow, with ease, a lot of the software engineering best practices that we've all become familiar with: things like merge requests and all that kind of stuff. The final two things are linked together: consistency and repeatability, and repeatability and efficiency. I've talked already about the idea of software engineering methodologies allowing you to scale and allowing you to do things in a more repeatable fashion. As you adopt more cloud native technologies, what you'll find is that the number of those technologies, and the usage of those technologies, will increase, and you need to find ways of doing things over and over again in a fast and iterative way, in order to provide value to not only your internal users but your external users as well.

So let's talk a little bit about some of the currently available infrastructure as code tooling. Almost all of the infrastructure as code tools that I'm familiar with use configuration languages to actually get the job done, and if you think about tools like CloudFormation, or even the Kubernetes API and the YAML documents that you may be familiar with, those configuration languages are often very, very difficult for users to initially become familiar with, and they're overwhelming. You know, a standard Kubernetes deployment in YAML would include a service, a deployment, possibly an ingress, possibly a service account if you don't want to use the default, probably a namespace, and, if you want good role-based access control, Kubernetes RBAC definitions like a role binding and a role.
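To make that concrete, here is a minimal sketch of just three of the manifests that list describes, for a hypothetical application (all names, images, and ports here are illustrative, not from the webinar demo), and even this trimmed-down version runs to dozens of lines before you add an ingress, service account, or RBAC:

```yaml
# namespace.yaml -- isolates the app's resources
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
---
# deployment.yaml -- runs the application pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0  # hypothetical image
          ports:
            - containerPort: 8080
---
# service.yaml -- exposes the pods inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```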
Those manifests can often be hundreds and hundreds of lines of YAML, and anybody who just wants to get their application into production, or even test it, can often become overwhelmed by the sheer amount of understanding they have to have of these large documents. As somebody who worked in an infrastructure engineering capacity for a long time, this was a challenge that I ran into on a consistent basis. I would, perhaps naively, believe that if I just sat down with members of the development teams and helped them understand what I was doing, they would enjoy it and love it. As time went on, I came to realize that putting a document of hundreds of lines of Kubernetes YAML in front of somebody was often not the right approach. What I really should have been doing was spending time reducing the amount of actual code they had to write, whether that be JSON, YAML, or some other DSL, in order to make it easy for those users to do their jobs, which is ultimately to ship software. Another thing that comes up with these configuration languages is that it's very difficult to make one manifest to rule them all that is generic enough to fit all of your use cases. And if we think back to the infrastructure as a service and platform as a service considerations, this is something that platforms as a service do very, very well: on the server side, when you say "run my application," the platform figures out everything it needs to know to make those tweaks. For example, you might want to run your application in multiple geographic regions. This is really hard to do in configuration languages, whereas with a platform as a service it's often done on the remote side, managed by the actual platform itself.
If you are using something like a YAML document, you often have to involve templating languages, and one of the most popular ways of doing this is to use a tool like Helm, which allows you to insert variables into the YAML documents, conditional on what you want to do. The problem with that is, first of all, people from a software development background often don't want to figure out how these templating languages work, because it's not a language that they feel familiar or comfortable with. They just want to do the things that they're used to in their day-to-day software development: control flow, if statements, conditionals, loops. You can achieve this with these templating languages, but it's often clunky, an afterthought added on at the end, and it makes it really, really difficult to keep these things idempotent.

So at this point I'm going to talk a little bit about Pulumi. Pulumi is a modern infrastructure as code platform which allows you to use programming languages, Turing-complete languages, to define your infrastructure across a wide variety of cloud providers, including Kubernetes itself. We have an open source offering, and we have a SaaS-based offering for teams and enterprises, for those people who want more familiarity around their infrastructure as code, which is often really important if you are working in a production environment. We've built a tool that allows you to do declarative infrastructure as code, where you define the desired state of your infrastructure, but you can do it in an imperative programming language, and the magic happens inside Pulumi. We have multiple supported languages at the moment.
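To illustrate the clunkiness being described, here is a sketch of what a conditional plus a loop look like in a Helm template (chart name, helper, and values are hypothetical): Go template directives interleaved with YAML, where indentation and whitespace-chomping markers like `{{-` all matter:

```yaml
# templates/ingress.yaml -- rendered only when ingress is enabled in values.yaml
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ include "my-app.fullname" . }}
spec:
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ include "my-app.fullname" $ }}
                port:
                  number: 80
    {{- end }}
{{- end }}
```

Compare that with an if statement and a for loop in a general-purpose language, where the logic and the editor tooling are the ones developers already use every day.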
We support two of the languages from the JavaScript ecosystem, including TypeScript, then Python, Go, which I'm going to show you a little bit of later, and .NET support in F# and C#, with a very, very alpha version of PowerShell available as well. It's not something that we talk about in general, but it is there if you need it. I've talked a little bit about infrastructure as code, and I've talked a little bit about what a platform looks like. So what does a developer-friendly platform look like? When you bring together these two different things, a platform and infrastructure as code, how might that look, and how would that look in terms of Pulumi as well? Before I talk about that, let's talk a little bit about what the path to production looks like in a typical organization. If I'm a software developer and I've created this super new, exciting application that I want to get to my users, in a normal environment I might approach my infrastructure team, or the team that is responsible for the production environment. Even in this modern day of DevOps, those teams generally tend to exist at medium to large enterprises, for a variety of reasons. I would probably approach them, and the answer that I might get back from the infrastructure team that manages my production environment is: here's an example CloudFormation document for you to deploy to our AWS account; just modify it to suit your needs. To a software engineer that could be extremely daunting, especially if you're not familiar with CloudFormation or some of the other available infrastructure as code tools. It might be a DSL that you've never seen before, and now you have to learn a completely new language in order to actually get this thing to production, and the best practices that have been defined by your infrastructure team require you to ramp up on this new tool.
And I've certainly worked in organizations myself where that was the expectation that I set for my software development teams. Ultimately, what I found is that they just found ways around using these tools where possible. That could often mean going straight into AWS's console or Google Cloud's console and just spinning up the infrastructure manually, and then all of a sudden this is now a production thing, and what ended up happening is that our infrastructure team would go and try to retrofit it into the best practices. The second thing that might happen is that they just get frustrated and start to say: well, you do it instead; you do this work for me. And I often think to myself that that is a perfectly reasonable thing to say, when really, as a software developer, what you just want to do is ship your code. I want to clarify that this is not all software developers; there are plenty of software developers out there who do love learning new tools and learning new languages. But in a time-pressured environment, where you really have to ship things because your customers want them, it might not always be practical or pragmatic to do that. And these organizational silos turn up because of this kind of organic pattern. This is an experience that I've certainly been through, and as I've become a community engineer at Pulumi, it's something that I've heard on a regular basis from many of our customers: we just want to get these barriers out of the way; we love these infrastructure as code tools, we love these infrastructure as code practices, but we don't expect every single person that wants to ship applications to have to learn all of this.
Some people will, and some people will like to use the bare-bones stuff at the infrastructure as a service level, but what people really want is a platform as a service: something that's going to take away a lot of the pain for them. So I want to talk a little bit about what that means in terms of communication between organizations. If you are a software developer, one of the things that you may agree with me on is the idea that a developer appreciates an API. It's a machine-readable interface that can be consumed from the programming language that they feel comfortable with. So if you talk to your average JavaScript developer, it's often easier for them to learn new concepts and new engineering practices if it's done in the context of the language that they already know; the number of variables that they have to learn is smaller, because they can learn about cloud engineering concepts while doing it from within JavaScript. And that means presenting an API that a JavaScript SDK can actually consume. By presenting these ideas in languages that are familiar to the user, it becomes much easier to engage in new practices like infrastructure as code best practices. And this brings me to a tooling consideration. One of the things that I've run across on a regular basis is that if you work in a programming language on a regular basis, if you're a Python engineer or a .NET software engineer, you feel very comfortable with the package management that you get with those languages. You may not love it, you may think it's great, but you at least feel familiar with it, and it follows the patterns that you are used to.
You also want to define infrastructure in a familiar environment: especially if you're writing code, you're probably very used to your IDE taking a lot of the work out of it for you. So what I mean is, Pulumi's stated goal and aim is to really meet these software developers where they are: in a familiar IDE, using package management that they're familiar with, in a language that they know. And you can switch between languages with Pulumi; we'll see in a few moments during the demo what that actually means. You can't have multiple languages in the same Pulumi program, but you can pass references between multiple different, what we call, Pulumi projects. So if you have multiple languages within your organization, perhaps your backend team is writing things in Go and your frontend team is writing things in JavaScript, those two teams can use the languages that they feel familiar with. It's not the case with Pulumi that you just have to pick one language and go with it; pick your poison, almost, and go down the path that you really want to.

So I'm going to do a demo now, with the caveat that I went to bed last night and this was working flawlessly, and unfortunately I'm having some issues this morning while trying to recreate the demo, mainly due to some DNS resolution issues. So if this isn't going quite to plan, I'm probably going to bail out a little bit and go for a question approach. Apologies for that, but we'll give it a try and see how it goes. The first thing I want to introduce, and this is obviously not from a Kubernetes perspective, this is infrastructure as code based, is this idea of reusable packages.
And here is an example definition of a VPC in AWS, defined using our flagship, what we call Crosswalk for AWS, awsx package. What these packages are is examples of abstracting away a lot of really difficult things, perhaps difficult even for infrastructure engineers to do. We've abstracted a lot of that away into a best-practice mechanism, so that you can now define a VPC in nine lines of code, including the subnets and the tags. So what I've done here is spin up a VPC using our AWS package, and what I'm going to do is show you what that would look like and how many resources it's going to create. I'm going to initialize what we call a stack, and I'm going to set my AWS region to something that is close to me. Let me just... So you can see here, with nine lines of code, I am able to create a best-practice VPC in AWS with NAT gateways and route tables and all of the subnets that I need. It's calculated the correct subnets, both my private and my public subnets; it's done all the calculation for which CIDRs it needs at each level. It's creating all of these resources for me and abstracting away a lot of the pain of actually doing this. If you imagine doing this in CloudFormation, it could be hundreds and hundreds of lines of JSON or YAML, and similarly if you imagine doing it in some of our other competitors. The idea of me showing this VPC is to introduce what this looks like and how much of the pain you can abstract away from people who just want to get their job done. This is an example that we have put together, which we call Crosswalk, but you can build these packages yourself relatively easily without having to do all that much work.
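The VPC program being described is roughly this shape (a sketch, not the exact demo code: the resource name, availability-zone count, and tags here are hypothetical, and running it requires the Pulumi CLI plus AWS credentials):

```typescript
import * as awsx from "@pulumi/awsx";

// One resource declaration; awsx expands it into a best-practice VPC:
// public and private subnets across AZs, NAT gateways, route tables,
// and subnet CIDR calculations are all handled for you.
const vpc = new awsx.ec2.Vpc("webinar-vpc", {
    numberOfAvailabilityZones: 2,
    tags: { Team: "platform" }, // hypothetical tag
});

// Export the values that downstream consumers will need.
export const vpcId = vpc.id;
export const privateSubnetIds = vpc.privateSubnetIds;
export const publicSubnetIds = vpc.publicSubnetIds;
```

The exports at the bottom are what make this consumable by other projects, as the stack-reference discussion later in the demo shows.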
I'm going to go ahead and apply this... and this is reassuring, because I was having DNS errors earlier. Unfortunately my AWS account has reached its limit of elastic IPs, but I do have other stacks that I created earlier, so I'm going to select my personal stack, and you can see here I've already created a personal stack with a bunch of those resources. So we're going to move on to what it might look like to create a Kubernetes cluster with the same methodology. Very quickly, to introduce what the idea of a stack is: a stack allows me to create multiples of these things, repeatably, by just modifying some values. You can see here that I'm getting what the current stack is, I was able to select and switch between them, and I can introduce those values into my code very, very easily. Then I can export the things that I actually need. Again, if we come back to the idea of a developer API, I can export the things that my software developers, the people consuming this, would want to be able to use. For example, if my software developers wanted to deploy applications to a VPC, they probably want to know what the VPC ID is and what the private and public subnets are. And so what this ends up looking like is this: here I'm defining an EKS cluster in AWS, and I'm able to grab these values between Pulumi projects right here, grab the VPC ID and the private and public subnets, and just pass them into the EKS cluster. Again, this is all done in TypeScript. So imagine a frontend developer who's familiar with TypeScript being able to do this; if they had to do it in CloudFormation, they might have to learn all of what CloudFormation is, or they might have to learn all of what Terraform is and the Terraform DSL, and it could be really, really overwhelming for them to do that.
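Passing values between projects like this is done with stack references. A sketch of the cluster project consuming the VPC project's exports might look like the following (the organization, project, and stack names are hypothetical, and running it requires the Pulumi CLI and AWS credentials):

```typescript
import * as eks from "@pulumi/eks";
import * as pulumi from "@pulumi/pulumi";

// Reference the VPC project's stack to read the outputs it exported.
const vpcStack = new pulumi.StackReference("my-org/vpc/personal");

// Feed those outputs straight into the EKS cluster definition.
const cluster = new eks.Cluster("webinar-cluster", {
    vpcId: vpcStack.getOutput("vpcId"),
    privateSubnetIds: vpcStack.getOutput("privateSubnetIds"),
    publicSubnetIds: vpcStack.getOutput("publicSubnetIds"),
});

// Export the kubeconfig so application teams can target this cluster.
export const kubeconfig = cluster.kubeconfig;
```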
At the end of the day, what we really want to achieve here, if we're making something that's PaaS-like, is to make this as easy as possible, and introducing modern languages to do it is one mechanism. So I'm going to go into my EKS project real quick. It's actually called cluster, not EKS. I'm going to select it and see the stacks that I have; I am on the personal stack. Again, the EKS package that I'm looking at here is another instance of an installable TypeScript package that will define a full EKS cluster in, you know, ten lines of code, and it creates all of these different resources. I know for a fact this is going to take quite a while, so I'm going to run it and leave it running in the background; it may not finish before the webinar is complete. But it's going to go ahead and create all these resources for me, with a relatively low barrier to entry. It may be that in your organization you want individual teams to have their own Kubernetes cluster, and if that's the methodology you want, this kind of thing will allow them to do it in code that they feel familiar with, in a relatively easy manner. But what if you don't want to have that level of control? What if your organization wants a true PaaS, where you have multiple clusters that already exist and you just want to deploy code to those clusters? Well, you can also define those things in Pulumi, at the Kubernetes layer as well. I'm going to very quickly switch languages here and move on to Go, because we have a Go SDK. This is an example that I've put together of something very similar to what we just looked at with our Crosswalk package for AWS and the EKS package. In this particular case, I've written my own in Go, and I have made it available here as what we call a component resource.
And a component resource is just a mechanism for grouping different resources together and making them reusable and reinstallable. So you can see here: if I was an application developer at this hypothetical company where I'm working as an infrastructure engineer, and they came to me and said, hi, I'm a backend engineer and I have this new service that I really want to deploy, the traditional way I would have asked them to do that is to learn a bunch of CloudFormation and then learn a bunch of Kubernetes YAML, and they might be completely overwhelmed by this. But in this kind of environment, with Pulumi, I can say: hey, I've got this Go module that I've written for you; just import it, then do new app deployment and give it a name. And as you can see here, this is 20 lines of Go for the end user; the end user would only need to know 20 lines of Go. What it actually looks like under the hood is here. Let me just quickly sync my dependencies. It's reusable, and if you wanted, you could put this on GitHub or in your internal Git repository and make it available as an external package. I've defined this new app deployment here that allows them to just do their job in the language they feel familiar with, without having to learn a whole bunch of stuff. Unfortunately, this is the thing that I was having problems with earlier, but let's see if it works now. You can also see that this actually takes the path to a Docker image as well. What I've done is define a very simple web server, written in Go, as a hypothetical backend that the software developer might want to deploy. And this is all within the same Pulumi project: I have an app directory with a Dockerfile that builds my Docker image.
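The demo's component is written in Go, but the same pattern works in every Pulumi SDK. As an illustration only, a TypeScript sketch of a hypothetical component of this kind (the type token, class name, and arguments are all made up for this example) could bundle a namespace, deployment, and load-balanced service behind one small API:

```typescript
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";

interface AppDeploymentArgs {
    image: pulumi.Input<string>; // path or reference to the app's image
    port: number;                // port the container listens on
    replicas?: number;
}

// A component resource groups related resources behind one reusable type,
// so consumers only ever see "new AppDeployment(...)".
export class AppDeployment extends pulumi.ComponentResource {
    constructor(name: string, args: AppDeploymentArgs, opts?: pulumi.ComponentResourceOptions) {
        super("myorg:platform:AppDeployment", name, {}, opts);

        const ns = new k8s.core.v1.Namespace(name, {}, { parent: this });
        const labels = { app: name };

        new k8s.apps.v1.Deployment(name, {
            metadata: { namespace: ns.metadata.name },
            spec: {
                replicas: args.replicas ?? 1,
                selector: { matchLabels: labels },
                template: {
                    metadata: { labels },
                    spec: {
                        containers: [{
                            name,
                            image: args.image,
                            ports: [{ containerPort: args.port }],
                        }],
                    },
                },
            },
        }, { parent: this });

        new k8s.core.v1.Service(name, {
            metadata: { namespace: ns.metadata.name },
            spec: {
                type: "LoadBalancer",
                selector: labels,
                ports: [{ port: 80, targetPort: args.port }],
            },
        }, { parent: this });
    }
}
```

A consumer then writes just `new AppDeployment("backend", { image: "...", port: 8080 });`, which matches the "20 lines for the end user" experience described above.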
Here's my hypothetical web server that, you know, returns some JSON. And then the deployment: this could be an external package that's imported, but in this particular case I've included it in the same package. And this is my actual deployment, the part my hypothetical back-end engineer has to write. So I'm going to cross my fingers and run a pulumi up. What's happening here is that it's compiling all of this into a Go binary as part of the pulumi up. And I should reiterate: you can do this in multiple languages. This isn't just available in Go; it's available in all of our SDKs. You can do this in JavaScript, in TypeScript, in Python, and in all of the .NET languages that we support. I'm just showing you an example in a different language from the TypeScript we've already seen. This step can be sped up by doing the go build ahead of time; it's just that in this particular case it's running the go build now. You noticed earlier that the TypeScript execution was quicker, because JavaScript just has to run the node binary. It's not that Go programs always take this long to build; I just haven't done the work ahead of time. So we'll give that a couple of minutes, and while it's compiling in the background, I'm going to come back to the actual Docker image. This is just a standard Go web application with a Dockerfile inside the application directory. And now that it's compiled, you can see that as part of my deployment there are multiple resources here, and they span multiple cloud providers: we're creating a repository in ECR to store our Docker image.
We're building the Docker image locally, again, all in the language you feel familiar with, and then we're creating a Kubernetes namespace, a deployment, and a service of type LoadBalancer, which covers all the things you would need to do as a user trying to deploy to Kubernetes. And you can see it's showing me everything I'm creating. If you imagine the experience here for a back-end engineer who has no familiarity with Kubernetes, and perhaps no familiarity with AWS, this is very, very easy for them; it takes away a lot of the pain you might experience in these kinds of environments. So it's currently building my Docker image; you can see here that it's building it locally. Once it's completed, it should push to the ECR repository, which was created here. And what we call the await logic in Pulumi is going to wait until the image is actually created and pushed. Unfortunately this failed earlier, so maybe we'll get lucky and it will be successful this time. This allows you to really abstract a lot of this pain away. While it's running in the background, I just want to introduce very quickly the final concept I wanted to talk about. In all of these examples, I've talked about the idea that the interface for your software engineers, your developers, could sit at the infrastructure-as-a-service level, defining a VPC or a Kubernetes cluster. Or you may want to create something like a PaaS, where you build software packages that they can consume in a language they feel familiar with. One of our recent capabilities is something we call the Automation API, an abstraction even further than creating reusable packages.
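The flow described above (ECR repository, local Docker build and push, then namespace, deployment, and LoadBalancer service) can be sketched in TypeScript with the Crosswalk (awsx) package mentioned earlier. This is a hedged sketch, not the speaker's actual Go code: the `./app` path and names are hypothetical, and it assumes the classic awsx `Repository.buildAndPushImage` helper.

```typescript
import * as awsx from "@pulumi/awsx";
import * as k8s from "@pulumi/kubernetes";

// ECR repository plus a local Docker build of ./app, pushed on `pulumi up`.
// Pulumi's await logic waits until the image is pushed before creating the
// Kubernetes resources that reference it.
const repo = new awsx.ecr.Repository("app");
const image = repo.buildAndPushImage("./app");

const ns = new k8s.core.v1.Namespace("app");
const labels = { app: "app" };

new k8s.apps.v1.Deployment("app", {
    metadata: { namespace: ns.metadata.name },
    spec: {
        replicas: 1,
        selector: { matchLabels: labels },
        template: {
            metadata: { labels },
            spec: { containers: [{ name: "app", image, ports: [{ containerPort: 8080 }] }] },
        },
    },
});

// A Service of type LoadBalancer exposes the deployment outside the cluster.
const svc = new k8s.core.v1.Service("app", {
    metadata: { namespace: ns.metadata.name },
    spec: { type: "LoadBalancer", selector: labels, ports: [{ port: 80, targetPort: 8080 }] },
});
export const address = svc.status.loadBalancer.ingress[0].hostname;
```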
What the Automation API does is let you build a binary that you give to your users, which deploys their application without them ever having to run Pulumi directly. Everything is built into a binary, which I have locally on my machine. I'm going to go into the examples directory here, and you can see this is a very, very simple Dockerfile that's just running nginx, but this ploy command is doing all the same things you saw on the other screen to deploy the application, without the user having to run Pulumi at all. You can see it's even generated a random name for me; it's creating an ECR repository and a bunch of Kubernetes resources, and it's building the Docker image locally. This is using Pulumi behind the scenes. It's essentially the exact same application running here, but it's all built into this ploy binary, which does all of the heavy lifting for you. I hope you've been able to see, as I've gone through the process, the different levels of engagement you can expect from your software developers. It allows you to decide at the organization level what's appropriate and what the interface for them should be. And this is only possible because of the ability to use modern programming languages to do it. Let me quickly show you what this looks like. If you're familiar with Go, this is a standard Viper and Cobra application. We've implemented a get command that lets you retrieve what's deployed, and this up command, which is what you just saw. Again, it's a standard Cobra binary that runs this Pulumi program. This is available on GitHub and I'll send the links in the notes afterwards, so it's all there for you to look at.
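The core of a tool like ploy is an inline Pulumi program driven programmatically. Here is a minimal TypeScript sketch of that Automation API pattern; the project and stack names are hypothetical, and the import path shown is the one the SDK uses (the Pulumi CLI still needs to be installed on the machine, but the end user never types `pulumi up`).

```typescript
import { LocalWorkspace } from "@pulumi/pulumi/automation";
import * as k8s from "@pulumi/kubernetes";

// The deployment logic is an ordinary Pulumi program, defined inline.
const program = async () => {
    const ns = new k8s.core.v1.Namespace("app");
    return { namespace: ns.metadata.name };
};

async function deploy() {
    // createOrSelectStack drives Pulumi from inside this process, so this
    // logic can be compiled into a CLI or GUI handed to developers.
    const stack = await LocalWorkspace.createOrSelectStack({
        projectName: "ploy",
        stackName: "dev",
        program,
    });
    const result = await stack.up({ onOutput: console.log });
    console.log(`namespace: ${result.outputs.namespace.value}`);
}

deploy();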
If your organization has a low understanding of cloud native technologies but you as an individual have a high understanding, you can abstract all of this away into a single binary that works for your whole organization. And as I suspected, unfortunately, I'm still having the DNS issues I was having earlier. I naively set the timeout a little higher and hoped more retries would fix it, but that doesn't seem to be the case. So I'm going to head back to the slides now, and I'm happy to answer any questions afterwards. Before we wrap up, I want to quickly recap the concepts I've introduced. You heard me talk about packageable components; in Pulumi we call these component resources. They are reusable components that you can grab directly with your package manager: if you're building a component resource in Node.js, you put it on npm; in Python, you use PyPI; in Go, you just have a Go module; and in .NET, you put it on NuGet. You can also mix and match the kinds of resources inside them. A great example I always use: if you're familiar with Helm and you're installing a Helm chart that requires cloud resources, for example an ingress controller that needs an IAM permission, then in a lot of infrastructure-as-code tooling that usually involves two different tools: CloudFormation, and then Helm pointing at CloudFormation. With Pulumi you don't have to do that. You can see here that I'm creating my ECR repository, and then creating a Docker image that references it right there. I can send examples, if people are interested, of things like the ALB ingress controller, which defines an IAM role in the same component resource as the actual Kubernetes resources.
We also support Helm charts natively. So if you already have a Helm chart and you just want to point it at a cloud resource, you can do that directly from Pulumi: pass the values from your Helm values YAML straight to Pulumi and it will do all the interpretation for you. So that's the idea I introduced as component resources. And again, this is the SDK, the API I talked about earlier, that lets software engineers who really want to stick with their own language do so in a way that integrates with your platform, where the interface is just their package manager. That's really, really powerful, and we've seen so many happy users take advantage of it. The penultimate thing I wanted to talk about: you saw earlier that I created a VPC, and then an EKS cluster that used values from that VPC. That's what we call in Pulumi a stack reference. A stack reference exports a value from one stack so that another stack can consume it. In the particular example I showed you, I used our TypeScript SDK, but there's no need to stick to a single language. If you're familiar with Python, but your application developers are familiar with Go, and your database team is familiar with C#, you can pass stack references between those Pulumi projects and consume them in the language you feel comfortable with. That, again, is really powerful in organizations that use different languages for front end and back end: they still get all the benefits of infrastructure as code without having to learn a new language; they can do it in the language they feel familiar with. One thing I didn't show you: if you're familiar with Kubernetes YAML and you really just want to get some Pulumi out of it, we have kube2pulumi.
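The stack-reference pattern described above looks like this in TypeScript; the `acme/networking/production` stack path and the `vpcId` output name are hypothetical, and the referenced stack could just as easily be written in Python, Go, or C#.

```typescript
import * as pulumi from "@pulumi/pulumi";

// Reference another stack by its org/project/stack path. That stack may be
// written in a completely different language; only its outputs matter here.
const network = new pulumi.StackReference("acme/networking/production");

// Consume the networking stack's exported values in this project.
const vpcId = network.getOutput("vpcId");
export const privateSubnetIds = network.getOutput("privateSubnetIds");
```

The producing stack simply does `export const vpcId = vpc.id;` in whatever language it uses, and Pulumi handles the handoff between the two projects.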
kube2pulumi will take the YAML from your helm template output, or your existing YAML, and convert it into Pulumi in the language of your choice. This really does lower the barrier to entry. Here's an example of a very complex Role, and you might think, oh, it's so daunting to have to turn this into code. Well, with kube2pulumi you don't have to; it takes care of all the hassle for you. We also have a command-line tool that does this. For example, one of the things I do on a regular basis is run helm template and pipe the output to kube2pulumi, which lets me remove Helm from the equation if I need to. Again, this is about using the languages that are familiar. If you as an individual are more comfortable with YAML than with a programming language, but you see the value of what I'm describing for your software engineers, kube2pulumi can help you very quickly understand what Pulumi is doing. Speaking personally, as somebody who is very familiar with the Kubernetes YAML spec, this dramatically increased my ability to learn Pulumi, and I actually feel like a better software engineer because of what kube2pulumi showed me. I knew the Kubernetes YAML, and I didn't always feel like the strongest software engineer, but learning these concepts in the context of something I already knew really helped me understand them. And then finally, I talked about the Automation API. This is kind of the advanced version of a Pulumi program that lets you bundle your whole application into a single binary. We had a hack day at Pulumi a few weeks ago, and our Automation API is available in multiple languages: we have it in TypeScript and we have it in Go. I showed you an example of building a Go binary with the Cobra libraries.
But my wonderful colleague, Christian, used our TypeScript Automation API to build a GUI in Electron that does something very similar to my ploy command-line application, and you can see a screenshot of it here. If you really wanted to PaaS-ify your environment, and you feel comfortable with TypeScript and you've built Electron applications before, you could give something like this to your developers: just tell us what image you want to run, hit the update button, and everything will be okay. This is such a powerful possibility for anybody who wants to build a PaaS while lowering the barrier to entry for their software engineers. So that was all I had in terms of slides. I really appreciate everybody patiently listening to me. We have around five minutes left, and I'd love to take questions if anybody has any. Let me quickly bring up the Q&A. Unfortunately, it doesn't seem to be showing me the questions, so I apologize for that. Is anybody else able to see the Q&A questions? I am. Thank you very much. Would you be able to read them out? Yes, absolutely. Thank you for a wonderful presentation. We have a few questions here, so we'll get to the first one: can this code be similar for all cloud providers, like AKS, GKE, and EKS, or is there just an adapter to plug into each cloud provider? Yeah, I've focused on the AWS side of things because it's what I'm most familiar with, but we have providers that let you create EKS, AKS, and GKE Kubernetes clusters, available in all of those languages. We also have DigitalOcean, Hetzner Cloud, Packet, and Scaleway.
Most of the cloud providers you're familiar with are available for spinning up Kubernetes clusters. I personally really like the DigitalOcean offering and have used it fairly extensively, so that's available as well. It's possible to do everything I talked about on the Kubernetes side with any of the managed Kubernetes offerings, or even with a cluster you build yourself. The second part of the question: so far in this example we need full EKS permissions; can an IAM user with limited permissions do this too? Well, the IAM permission scope is the IAM permission scope, and there's no real way around it. If you have limited access to the resources you need to provision, Pulumi can't magic its way around that: if you don't have access to create EKS clusters, Pulumi isn't going to be able to create them. What we can do, and what we've seen a lot of our users do, is introduce an infrastructure-as-code workflow by pushing these things through a CI/CD pipeline and then using peer review, which lets you achieve the same thing. In this particular example I'm obviously doing everything locally from my laptop, but in a more robust production environment, what you would probably find is that I'd push this to a Git repo, the CI/CD pipeline would hold the permissions to create Kubernetes clusters, and the deployment would happen there. So if your quote-unquote limited IAM user has permissions to create the resources needed for the EKS cluster I showed earlier, then yes, you can do it locally; but in general the best practice is to follow the CI/CD pipeline approach.
It's very similar to CloudFormation and the other tools you might be familiar with: you need the permissions to do the things you need to do, but Pulumi lets you abstract that away and helps you with your CI/CD pipeline as well. Unfortunately I didn't have time to show that in this presentation. Next question: how up to date is Pulumi with newer releases of Kubernetes and cloud provider features? We actually build our Kubernetes provider from the upstream Kubernetes API specification, so the day the Kubernetes API spec is updated, we auto-generate our SDK from that upstream, and we generally have Kubernetes updates within a couple of hours. I think when Kubernetes 1.22 was released a few weeks ago, we had a matching version of our provider available very, very quickly, almost minutes after it became available. With regard to the other cloud providers, we do the same thing for the Azure provider: our Azure provider has full API coverage of Azure, and when they update their upstream API, all of the features are immediately available. AWS and Google Cloud aren't generated and presented in that way, so for those we generally have new features available within a couple of days, and we have an SLA of three days to make sure that any new features shipped in the AWS and Google Cloud providers are available in Pulumi. So it's generally very quick, and we have very robust coverage even in the providers where we're not generating the SDK programmatically; we're still relatively quick at getting those features to our users. For example, AWS Lambda EFS support was available on launch day, and I believe there are often times when we're probably first to market with some of these features, you know, with re:Invent coming up as well. Excellent. That just about does it for all the time we have for today's webinar.
We'd like to thank Lee for a wonderful presentation and a very informative Q&A session, and I'd like to thank everybody else for joining us today. I hope everyone has a wonderful weekend. Take care, stay safe, and we'll see you all at the next CNCF webinar. Thank you very much, everybody. Thank you.