Okay, let's start. Welcome to my session about infrastructure as TypeScript. I'm going to speak about how you can define and declare your cloud infrastructure with a language that you hopefully know well, namely TypeScript. So, a couple of words about me. I'm a software developer, so I'm going to speak from the perspective of a software developer, not IT or operations people: how can I define my cloud applications in TypeScript? And then, obviously, I'm interested in the cloud, and I come mostly from the perspective of serverless computing, if you know what that is, basically managed cloud services. I'm also a big fan of F#, so it's great to see many F# sessions here, though not mine, unfortunately. For my session, I'm going to use one sample application; I'll go through building this application in the next twenty-something minutes. It's going to be a URL shortener. You probably know what that is, like Bitly or aka.ms. It might look like this: you have a screen where you can put a short alias for a link and the full link, and save it into a database. Then, when somebody goes to your website with that short alias after the root, they get automatically redirected to the URL that you defined. So, pretty straightforward; hopefully you understand the problem we're trying to solve. I'm going to deploy this application to AWS, and I'm going to use serverless: Lambda functions as the main compute model, and then DynamoDB as the NoSQL store. So, the basic layout of my application: I have two lambdas. If you don't know what a lambda is, it's just a piece of code running in the cloud somewhere for you, listening for an event to happen, for example an HTTP request. One lambda listens for POST requests to add new URLs; it gets the name and URL from the POST body and puts them into DynamoDB, which is a NoSQL store in Amazon, managed by Amazon.
And then there is a second lambda, which listens on all the URLs that people might visit, gets the path, looks up whether this alias exists in the database, and if it does, returns a 301 redirect to that URL. Pretty basic stuff. So I start with code. This is all the code I need for one lambda, basically ten lines or so. I import some SDKs (it doesn't really matter which), I get an object with the event, and I get the path from that event. Next I make a query to the database, namely DynamoDB: I construct the request and get the response; if the entry exists, I return a 301, and if it doesn't, I return a 404. So this can be done in half an hour. Obviously I need the second function, which I don't show here, the one that adds URLs, but it looks almost exactly the same, just writing data to Dynamo instead of reading it. Pretty basic. Now, how do I deploy this to Amazon? Well, the actual setup is a bit more complicated than my first diagram, because a lambda cannot work directly with HTTP, so I need to put an API Gateway in front of it. I also need to host my site somewhere; I chose to store it in an S3 bucket, which is a file store in Amazon. So I need two more pieces in my puzzle. And of course I need to wire them together: I need to set up all the permissions for access from API Gateway to the lambdas and from the lambdas to Dynamo, I need to put my objects into the bucket, and API Gateway itself is quite a complicated beast: it needs stages, deployments, endpoints, whatnot. There are a lot of moving pieces; when I count them, it's about 20 resources that I need to provision just to run this small application, which is about 20 or 30 lines of code so far. So how do I approach this problem of provisioning these resources in AWS? Well, a lot of people have had to deal with this, so there are many, many options.
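To make the branching concrete, here is a minimal sketch of that redirect logic. The DynamoDB call is injected as a `lookup` function so the flow stands on its own; in the real handler it would be a DocumentClient `get` against the table, and all names here are my assumptions, not the speaker's code:

```typescript
// Sketch of the redirect lambda's logic, with the database lookup injected.
type Lookup = (name: string) => Promise<string | undefined>;

interface HttpResult {
  statusCode: number;
  headers?: Record<string, string>;
  body: string;
}

export async function redirect(path: string, lookup: Lookup): Promise<HttpResult> {
  const name = path.replace(/^\//, "");        // "/abc" -> "abc"
  const url = await lookup(name);
  if (url !== undefined) {
    // Entry found: send the browser to the stored full URL.
    return { statusCode: 301, headers: { Location: url }, body: "" };
  }
  return { statusCode: 404, body: "Short URL not found" };
}
```

The same shape works for the writing lambda: parse the POST body, `put` the item, return a 200.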
I'm going to run through some of them really quickly, just to map the landscape, and then I'm going to suggest another one. The first obvious one: you go to the web console made by Amazon, you click the buttons; the UI is more or less easy to follow along, you set up all the configurations, and it does some magic behind the scenes, like setting up permissions by default. So it's relatively simple, good for exploration, but it's very hard to reproduce: when you do it today and your colleague does it next month, you're going to get two different environments, and you don't want that for your production, staging, and other environments. You also need to be able to evolve this application over time, and by then you've already forgotten what you did last time and how you need to change it this time. So it's not really good for commercial, long-living applications, right? Basically, I want to script the provisioning. The obvious thing when you say "script" is the CLI. There is a tool from AWS, from Amazon, that you install, and you get a command-line interface where you can do everything you could do in the web console with commands. Now, it's a bit harder to discover all the options, but once you're done with the script, you can save it to your source control, and then at least you can share it with colleagues and reuse it in the future. The problem with scripts is that they are very imperative: you say what to do, step by step, in exact order, instead of describing your desired environment as you want it to be. So there are tools for that, and the concept is called desired state configuration.
You describe the desired state of your system, and then, whatever the current state of your system is, you say: please migrate the system from its current state to my desired state, and go figure out how to do that in an optimal way. That's the idea behind this kind of tool. The default one for AWS is CloudFormation. Basically, you write YAML files, which are essentially text files, with the definitions of all your resources. That's just a small example; there is a URL shortener example on the web which I found, and it takes about 300 lines of such a file to define that simple application. So it's quite verbose, and you also don't get things like IntelliSense or compile-time checks, because it's just a text file. So can we do better? It's also very specific to AWS, so if you want to move to Azure, you basically lose your skills there. Another option is Terraform, very popular in the community. It's open source, and it works for multiple clouds, so you can more or less transfer your skills between clouds. The exact definitions of resources are going to be different, but at least the way you work stays the same, and you can reuse some skills there. They use their own proprietary format, but it's again text-based, and another example that I found was about 500 lines of Terraform to produce the environment for an Amazon-based URL shortener. It might be more complicated than mine, but it's still a lot of code: an order of magnitude more markup than the actual code in the lambdas. Then there are more specialized tools that raise the level of abstraction, like the Serverless Framework. It's still textual files, still YAML in this case, and it works multi-cloud, but it's focused on serverless applications. So you can define lambdas with fewer lines, let's say, and then bind them to resources and event sources like API Gateway. The same setup is something like 100 lines, or even less, maybe 70. It's more specialized, so it's easier to learn, obviously, but it's not universal.
You cannot use it for all the resources in AWS, only for a subset of them. So what have we learned so far? What are the desired properties of a tool that can provision resources for us? It should be scriptable. It should be reproducible: multiple deployments should give the same result. It should speak the language of desired state, so I describe my environment and then ask the tool to provision it, and I don't care how. Ideally it should be universal, for any service in the cloud, and also multi-cloud, or maybe even applicable to hybrid scenarios like Kubernetes and whatnot. And then the question mark is the language. All the tools I've mentioned so far use some sort of textual markup, most probably YAML, especially in the Kubernetes space. Is that ideal? Well, I say that I want to use the same language that I already know, for example TypeScript or C# or F#, instead of markup. And I want to get all the benefits of the tools that I already have, like IDEs, the compiler, code completion, IntelliSense, to provision my environment. The library which does this is called Pulumi, so for the rest of the talk I'm going to talk about Pulumi. The promise is basically that it does all of the above, but you're also defining infrastructure in the language of your choice. At the moment, I think they support TypeScript, Python, and Go. C# or F# might be coming in the future, because those folks come mostly from a Microsoft background, but it's not there yet. So, I have a Pulumi t-shirt, but I don't work for Pulumi. It's a company and an open-source product, so you can use it for free as long as you don't need their enterprise-level features, as always: there's a free tier that you can use yourself, versus some paid services from the company, like support and integrations and so on. It's all open source; I've made a couple of contributions.
The engine is written in Go, and the libraries on top of it are mostly in TypeScript. At this point I want to switch to the first demo. It's probably big enough. Right, so this is Visual Studio Code with TypeScript, referencing an NPM package called @pulumi/aws. And I'm starting to build my URL shortener application. The first thing I do is define a DynamoDB table. It's a TypeScript class that I got from this library. I have IntelliSense here, so I can see which parameters there are: I need to define the name and attributes, say which attribute is the hash key, and so on. But I get all the nice features from the IDE while doing this. So if I make, let's say, a typo somewhere here, I get the red squiggly, and it suggests that maybe I made a typo and meant writeCapacity instead. That's a nice experience. Once I'm done with this, once the compiler is happy, I can go to the command line here and type pulumi up, which is short for update. At this point it compiles my code and runs it, and every new resource created in the script gets mapped to resources in AWS. In this case it's just one. It's giving me a preview of its plan. The internet is a little bit slow. It's saying that it's going to create a new stack, which is like a container for all my resources, like a project. And then it's going to create a DynamoDB table called urls. Then I say yes, please create it for me, and it actually goes to AWS, calls the AWS APIs, to provision all the resources that I need for my application. It shows the progress; it shows what exactly it's creating. It's smart enough to figure out the proper order of creating the resources if they depend on each other. And in the end I can see that it's all created and done. Now, this code might look imperative, as if I'm giving commands to create resources. It's not. It's just describing what I actually want to get.
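The table from that first demo looks roughly like this. This is a sketch under my own assumptions (attribute names and capacities are mine, not taken from the demo's screen):

```typescript
import * as aws from "@pulumi/aws";

// Roughly what the first demo declares: a DynamoDB table
// with a single string attribute used as the hash key.
const table = new aws.dynamodb.Table("urls", {
  attributes: [{ name: "name", type: "S" }],
  hashKey: "name",
  readCapacity: 1,
  writeCapacity: 1,
});

// Exporting makes the generated name a stack output, visible after `pulumi up`.
export const tableName = table.name;
```

Running `pulumi up` against this program previews and then creates exactly one resource (plus the stack itself).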
So, in case I want to change this: I already have resources provisioned, and I want to change the read capacity to two instead of one, right? I'm making a change to my environment. Then I do pulumi up again, and this time it knows the history of my previous operations. It compares the last recorded state of my environment with what I want now, and this time it says it's going to update the table urls. If I zoom out a bit... no, that doesn't work. If I go to details, for example, it says it will update the read capacity from one to two, and that's all it's going to do; the rest just remains the same. Okay, I'll hit no for now. So, back to the slides, just to show a little bit how this works. My file, index.ts, is here in the red rectangle. Then there are a number of language hosts, one per language, obviously, which compile the file and translate it into commands to the engine. Then there is the engine, which also keeps the last deployed state in a database. That can be on Pulumi's backend on their site, or it can be a local file just sitting on your file system, on a shared drive, or in source control. The engine is smart enough to figure out the exact commands it needs to execute: it translates the resource tree built from your program into create and update commands for the cloud provider. On the backend there are plugins for every provider you can think of, or the major ones at least; this is not the complete list. There are also plugins for Kubernetes and some other stuff. At that level it's quite dumb: it just says create this, update this, delete this. If the provider you need isn't there, creating a new provider is a feasible task. So, back to the demos; I'm switching to the next one. Now I've extended my script. It's the same DynamoDB table that I had before.
But I've also defined all the resources to create one AWS Lambda and put API Gateway in front of it. You can see that the code starts to grow. It's still very much typed, and I use instances defined at the top further down; for example, here I use the DynamoDB table to configure my AWS Lambda's environment variables. And again, if I do something stupid, say I forget .name here and try to assign something that doesn't fit type-wise, it tells me that I'm doing something wrong. It makes me fix it before I break my environment, which is very powerful: a very short feedback loop. You can see it's still a little bit low-level: I have to reference the folder with my file and then say which function I want to run from there. And if you look at the API Gateway part, it's a lot of very hard-to-read stuff just to configure a very simple thing. So it works, but it again becomes comparable in length to what we've seen before with textual templates, YAML, and so on. Can we do better? Can we make it actually shorter? Here, again, we can use the power of real programming languages: we can start inventing abstractions. So the next step I took: I keep my table as it is, that's fine, but then I define my own Lambda component and reference it from the main file. In that component I abstract away all the machinery needed to provision a new Lambda with permissions and so on, but I expose only the options that I care about, the ones that differ from one Lambda to another. And this time it's very short, very brief. I was able to do this because I'm using a real language, which allows classes and components and functions and so on. In this case the Lambda just defines the folder to look at, the entry point for the code, and some environment variables. The REST API is even simpler: it's literally three lines of code, and I just say that it should react on this path and call this Lambda.
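The typed wiring between the table and the Lambda's environment variables can be sketched like this. Everything here (resource names, runtime version, folder layout) is my assumption, not the demo's actual code:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const table = new aws.dynamodb.Table("urls", {
  attributes: [{ name: "name", type: "S" }],
  hashKey: "name",
});

// The execution role every Lambda needs.
const role = new aws.iam.Role("open-url-role", {
  assumeRolePolicy: JSON.stringify({
    Version: "2012-10-17",
    Statement: [{
      Effect: "Allow",
      Action: "sts:AssumeRole",
      Principal: { Service: "lambda.amazonaws.com" },
    }],
  }),
});

const fn = new aws.lambda.Function("open-url", {
  runtime: "nodejs18.x",
  handler: "index.handler",                        // which function to run
  code: new pulumi.asset.FileArchive("./lambda"),  // folder with the code
  role: role.arn,
  environment: {
    // table.name is a typed Output<string>: assigning something that
    // doesn't fit is a compile error, not a broken deployment.
    variables: { TABLE_NAME: table.name },
  },
});
```

The compile-time feedback loop the talk describes comes from exactly this: `table.name` and `role.arn` are typed values, so a typo is caught by the compiler before anything touches the cloud.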
If you look at the components themselves, that's where all the smart things are: the policies, permissions, everything is defined once here. That's my interface for the options I want to pass when constructing the Lambda. The component extends Pulumi's ComponentResource, and that's the key way to tell Pulumi that I'm creating a custom component which will create child resources inside of it. I also give the name of my component and then create everything inside. When I switch to the command line (I've already run this to save some time), you can see how complicated it is: it has 11 resources to create, and that's just half of the application, just one Lambda. But if you look really carefully, you can see that there is a tree structure here: my components, the Lambda and the endpoint, are on the second level, right after the stack, and everything they create goes one level deeper. So you can visualize your structure. There are also tools to draw a picture of your environment, visualize the structure of your deployment, and see what exactly gets provisioned and so on. With these tools you can, for example, create a library inside your company and share it between multiple teams, or even make an open-source contribution. There are lots of Pulumi components out there contributed by somebody, and they can be reused. Actually, the Pulumi team themselves came up with a library of components that are quite helpful in my scenario. That's my next example. This is now the full URL shortener application, with both lambdas and the UI, and it fits in 44 lines of code, without any components of mine. One line defines a DynamoDB table named urls with the key name. Then I start declaring the API here; again, it's just a component. With one line I can say that all the static files should be served at the root from the www folder I have on my disk; that's for serving the user interface, JavaScript and HTML.
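The skeleton of such a component can be sketched as follows. This is a hedged outline, not the speaker's code: the type token, option names, and runtime are my assumptions; only the `ComponentResource` pattern itself (register a type, parent the children) is Pulumi's documented mechanism:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// Only the options that differ from one Lambda to another are exposed.
interface LambdaOptions {
  folder: string;                                   // path to the code on disk
  handler: string;                                  // entry point, e.g. "index.handler"
  environment?: Record<string, pulumi.Input<string>>;
}

class Lambda extends pulumi.ComponentResource {
  public readonly lambda: aws.lambda.Function;

  constructor(name: string, options: LambdaOptions) {
    // Register a custom component type; children created with { parent: this }
    // show up one level deeper in the `pulumi up` resource tree.
    super("my:serverless:Lambda", name);

    const role = new aws.iam.Role(`${name}-role`, {
      assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [{
          Effect: "Allow",
          Action: "sts:AssumeRole",
          Principal: { Service: "lambda.amazonaws.com" },
        }],
      }),
    }, { parent: this });

    this.lambda = new aws.lambda.Function(name, {
      runtime: "nodejs18.x",
      handler: options.handler,
      code: new pulumi.asset.FileArchive(options.folder),
      role: role.arn,
      environment: options.environment
        ? { variables: options.environment }
        : undefined,
    }, { parent: this });
  }
}
```

With this in place, each new Lambda in the program is a two- or three-line instantiation, while the role and policy machinery lives in one spot.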
And then the interesting thing happens: we get a blend of resource provisioning and the code itself, because we say: for my API, when I get a request of this shape, a URL with a name template, I want to execute this code. This looks very much like an Express application that you would write in Node.js: you have your route, you put a callback on it, and in the callback you define the code. What Pulumi, or rather this component, will do is serialize this code at deployment time, zip it, and upload it to AWS with all the NPM modules it references, for example. So you can do anything here. There are no limitations, or maybe temporary limitations, I don't know; it's still a preview version, I think, still not 1.0 yet. But in theory it works, at least in simple cases like this. And in the body you can write your TypeScript and actually use resources defined outside the lambda. You see that I defined my table in the script above, and then I'm using it here without needing to know what my AWS DynamoDB table is called and so on. I just say get, I get a URL back, and depending on the URL I make a response from the lambda. You can also imagine, for example, having a queue: you say new queue here, then queue-on-new-message, and then you define the handler. So you blend together the definition of infrastructure and code; infrastructure plus code is 40 lines in this case. That's the second lambda, which reacts to POST. Actually, there is a third one, which just lists all the URLs for the UI. Now, if you look really closely at the script, the only mention of AWS is right here, where I reference @pulumi/cloud-aws. For the rest it's all table, API, and some callbacks. So they went one step further, and in the next example I replaced just one line in this program: the package is called @pulumi/cloud, and all the rest stays the same, while aws is renamed to cloud. Now I declare in my configuration file that I want to use the cloud provider AWS and the region.
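The shape of that blended program can be sketched like this. It is a rough reconstruction from memory of the preview-era @pulumi/cloud API, so method names and the record shape are assumptions:

```typescript
import * as cloud from "@pulumi/cloud-aws";

// A table keyed by "name"; the provider picks the store (DynamoDB on AWS).
const urls = new cloud.Table("urls", "name");

const api = new cloud.API("shortener");

// Serve the static UI (HTML/JS) from ./www at the root.
api.static("/", "www");

// The callback below is serialized, zipped, and deployed as a lambda.
// Note it uses `urls` from the enclosing scope directly.
api.get("/{name}", async (req, res) => {
  const entry = await urls.get({ name: req.params["name"] });
  if (entry) {
    res.setHeader("Location", entry.url);
    res.status(301);
    res.end();
  } else {
    res.status(404);
    res.end("Not found");
  }
});

export const url = api.publish().url;
```

Swapping the import to the cloud-neutral package is the one-line change the talk mentions; the rest of the program never names AWS.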
And this is all that is required to switch between cloud providers. If I want Azure, I put Azure there. They have an Azure provider; it will load the library, something like an Azure cloud package, and provision resources specific to Azure instead of AWS. A lot of people, when we talk about serverless or managed services, are scared of vendor lock-in: what happens if, I don't know, in two years' time I don't like AWS anymore and want to migrate away? So that's one possible solution. I'm not sure it's going to cover everything, because obviously you get just the common denominator, not all the features of all the clouds. So I'm not sure that's the way to go, but at least for simple cases like this you can try, and if you really care about being multi-cloud at the same time, you can abstract it away to some extent. So, that was my last demo; just a couple more slides. To reiterate, there are several options, or layers of abstraction, that you can work at. At the bottom are the cloud APIs themselves, without Pulumi, and then the resource providers, the Pulumi providers that work on top of them. From there, you can either create raw resources as the cloud describes them, and be able to set every feature the cloud has, or you can start creating components, or use components created by others, which are more abstract, more brief, but probably less configurable. And then you go all the way up to abstracting away your cloud with something like @pulumi/cloud. So, quickly, to conclude: if you are building cloud applications and managing infrastructure, you kind of have to start using infrastructure as code, as they call it. There are lots of tools provided by the cloud vendors themselves, like ARM templates and CloudFormation, and third-party tools like Terraform, a great tool if you want it. But if you want to use real code, not just configuration but infrastructure as real code, as TypeScript for example, you can also use Pulumi with one of those languages.
F# support is probably still very raw, but there is some work there. With that, I'm done, and there is some time for questions. The slides can be found on my GitHub, and Pulumi has a lot of examples that you can browse through to see other tasks, if you are not working on serverless applications but on something like Kubernetes, again, or whatever; you can find an example there. There are hundreds, in different languages. So that's it, thank you. Questions? Five minutes. Yes. [Audience question, roughly:] The actual cloud configuration can drift from what Pulumi recorded. Can it query the current configuration state from the cloud instead? Yes, you can. There is a command called pulumi refresh. It goes and fetches the current state of your resources, so if you changed or deleted something manually, you can say pulumi refresh, and it asks whether you want to override your recorded state with the current state, and it syncs it back.