It's time to get started here. My name's Jeremy Green, and I'm talking about going serverless. So what does serverless mean? That's kind of a weird term. The idea is that we want to deploy web applications without servers. So I'm going to go out on a limb here and do a little bit of mind reading. I bet some of you are thinking a very particular thing right now: serverless, really? Come on, if you're going to serve something, you need servers, right? So if it'll make you feel better, we can think of it like this: we'll call it "serverless," with big old air quotes. The goal here is that, yeah, there are servers involved, but we don't want to think about them. We want to think about our code as code and not worry about the particulars of the infrastructure it's being deployed to.

If you've deployed to something like Heroku, you already kind of know how this works. On Heroku, you're not really thinking about instances. You might be thinking about your dyno size, that kind of thing, but you're not thinking about the physical infrastructure it's running on. You're just writing your code, pushing it up, and letting Heroku worry about which machines and instances it runs on. That's where we want to get to.

In particular, in this talk, when we talk about serverless, we'll be talking about the Serverless Framework. The Serverless Framework is a framework for building web, mobile, and Internet of Things applications exclusively on AWS Lambda, API Gateway, and their other related services. So, a little bit more mind reading here. I bet a few of you noticed that word "exclusively" on the last slide, and you're probably thinking: vendor lock-in. The cons of vendor lock-in are pretty obvious, and I'm not really gonna get into them. So why would you sign up for this kind of vendor lock-in? The most compelling reason is ops. Amazon is better at ops than you will ever be.
If you disagree with me on that, please talk to me and we can put together a nice fat consulting proposal to help Amazon be better at ops, but that's unlikely. This is especially true if you're an application developer and not an operations engineer. Another reason is scale. They have data centers all over the world, and by deploying into their infrastructure, you can take advantage of their scale with very little work. Another compelling reason is money. Especially when you compare it to the salary of operations engineers, deploying into Amazon's infrastructure can be very cost effective.

So just a little bit about me. My name's Jeremy Green. I'm a consultant, an author, and I run a couple of SaaS businesses. You can find me on the tweets at @jagthedrummer. There's my email; send me an email if you'd like. The Independent Consulting Manual is a book that I recently co-authored, and Remark.io is one of my SaaS apps. I'm also into drumming, photography, and brewing, so if you like any of those things, we can talk about that. I wanna give a shout out to my client ClickFunnels. They've supported me in this talk; working with them is where I got into all the serverless stuff, and they helped me get to the conference. They've been very supportive, and I really appreciate them. I don't wanna get too deep into the weeds of what they do, but I've prepared this very technical diagram of their infrastructure.

All right, enough silliness. First, let's talk about the pieces that we're gonna be dealing with: some building blocks that we're gonna build up into bigger applications. The first piece is AWS Lambda. This is basically function execution on demand. I think of it like Heroku for single functions. You give them one little function, tell them how you want it to be run, and then you can call that function either via their API or via a service called API Gateway, which is kind of like routing as a service.
So this allows you to set up endpoints, and then when somebody hits an endpoint, you can route that request to be handled by Lambda. You might use DynamoDB to store some data, or you might use RDS or any of the other services that they offer. And then CloudFormation is infrastructure as code. It's a way to describe assets that you need in your infrastructure, like a DynamoDB table or an RDS instance. You can keep that description in your source code repository, push it to Amazon, and ask them to deploy that stuff for you. This makes it really easy to have, say, a staging environment that you can duplicate into production, and you know that you're using the same stuff across all of your stacks.

When you put all this together, you end up with something that looks kind of like this: you're using multiple Amazon services to route requests around. And they give you a lot of tools for doing this. You can get into their UI and see a list of all your Lambda functions. You can browse their repository of demo code that shows various things you might do. When you go to create a Lambda function, you can set a bunch of config variables, like how much memory it should have and what the timeout is. You can hook it up to an API endpoint. You can add an event source from some other place. But if you do all this for very long, you're gonna start thinking, seriously, am I coding in the browser? Is this what we've come to? And if you do it long enough, you're gonna go through a cycle something like this. First you're gonna feel a little bit unamused about it. Then you're gonna start to get worried about the number of things that can possibly go wrong when somebody's just making changes in a browser and not committing anything to source control. And if you keep down that path, you're gonna get very angry. So don't do this. This is where the Serverless Framework comes in.
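To give a flavor of what "infrastructure as code" means here, a CloudFormation resource description for a DynamoDB table looks something like this. This is a hypothetical sketch: the table's logical name and key attributes are made up for illustration.

```json
{
  "Resources": {
    "NotesTable": {
      "Type": "AWS::DynamoDB::Table",
      "Properties": {
        "AttributeDefinitions": [
          { "AttributeName": "id", "AttributeType": "S" }
        ],
        "KeySchema": [
          { "AttributeName": "id", "KeyType": "HASH" }
        ],
        "ProvisionedThroughput": {
          "ReadCapacityUnits": 1,
          "WriteCapacityUnits": 1
        }
      }
    }
  }
}
```

You check a description like this into source control, deploy it to your staging stack, and then deploy the exact same description to production, so the two stacks can't drift apart.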
The Serverless Framework allows you to manage Lambda, API Gateway, and other CloudFormation-managed resources via code instead of via a GUI. You can find their docs at docs.serverless.com, and they're all pretty good. I should go ahead and say that this is a very young project and it's moving very quickly, so everything that I'm gonna talk about and show you in detail could change by this afternoon. Probably not, but just be warned: it's very young.

When you get started with Serverless, the first step is to create a new AWS account. You don't wanna use the one that you already have up and running for all your production stuff. The main reason for this is that the Serverless documentation, at this point, advocates that for getting started you should create basically an admin superuser that can do anything and everything in your infrastructure. That's a security risk. You don't want that profile to get into the wrong hands, where somebody can start shutting down your production instances or deleting an S3 bucket or something like that. So really, seriously, just start with a new account, and then as you figure out what you actually want to deploy into production, take the time to understand the permission model and do it correctly.

To get started with Serverless: it's an npm module. My apologies to Searls and Tenderlove for having to talk about Node, but I'm fully on board with making Ruby great again, so we'll look at how we can do Ruby in a little while. Once you've installed it, you can do serverless project create. The CLI gives you a nice shorter version of serverless; you don't have to type out the whole thing, you can just say sls if you want to. The first thing you're going to see is some sweet ASCII art. So that's how you know it's good, right? Then it's going to start stepping you through the process of creating a new project. It's going to ask you to enter a name, and it's going to ask you for a stage.
A stage, in Serverless terminology, is a lot like an environment. If you're used to development, staging, and production environments, it's almost exactly the same thing; it's just called a stage. It's going to ask you what profile you want to use, whether you want to create a new one or use one that's already there. And then it's going to ask you what region you want to deploy your stuff to. After it does a little bit of work, it's finally going to tell you, okay, your project's ready, and some things have been deployed to CloudFormation. If you go into the directory that's been created for this new project, you're going to see a tree that looks about like this. In the Rails world, all of this stuff would be things like config.ru and config/application.rb: a bunch of bootstrap boilerplate that needs to be there to get the thing to run, but that you probably don't really care about until you need to make it do something it doesn't do out of the box. So we're not even going to really mess with looking at any of that.

So let's build something. But if we're going to build something, the question is, what do we build? Let's build something that actually does something, not just a stupid hello world application. So how about we do this? I mean, there's an npm module for it, right? So that means it must be useful enough to be a service. The first thing we're going to do is serverless function create, giving it the name that we want, leftpad. It's going to ask you to select a runtime; in this case, we're going to go ahead and use Node 4.3. And then it's going to ask if you want to create an API endpoint, or an event, or just a function on its own. In this case, we're going to actually create the endpoint so that it's easy to get deployed. If we look in the directory that's created for us, we have three files. event.json is basically where you set up sample data that your function is going to use.
handler.js is the function itself that you're going to write. And s-function.json is configuration for the Lambda, for the endpoint, and for any other resources that you need. At this point, it's already ready; we can ship it. This is part of the workflow that you're going to use with Serverless. You can't exactly run Lambda and API Gateway on your local development box, so you're going to be constantly shipping stuff to Amazon in the dev stage, testing it out there, and then, when you're ready, promoting it into your staging stage or production stage or whatever. To deploy, you can use serverless dash deploy; dash is short for dashboard. It's going to give you something like this: more sweet ASCII art, and then a list of all the things that you can deploy. Here I've selected that I want to deploy the function and the endpoint; then I hit deploy, and it goes and starts doing stuff. It tells you that it's deployed the function into the dev stage, and then that it's deployed the endpoint into the dev stage. And at the very bottom there, you'll see it gives you a URL you can hit to see your function running through the API endpoint. If you go hit that, you're going to see something that looks like this: it returns a JSON blob that has a message, and the message is "Go Serverless! Your Lambda function executed successfully!"

So let's look at the code that generated this. This is the default handler that Serverless generates for you; it's very, very simple. It exports a handler function, and the function has three arguments: an event, a context, and a callback. The default one just calls the callback with the payload that we want to deliver. The event is just a JSON object full of data. This is something that you as the developer, or whoever is calling this function, is gonna put together.
These are the inputs that go into the function, the things you want worked on to create your outputs. The context is an object provided by the Lambda infrastructure. It gives you some details about who's calling. One of the most important things is getRemainingTimeInMillis(). This allows you to do some things that are kind of long-running, as long as you can pause them: you could be iterating through a bunch of records, checking this to see how much longer you have until your Lambda is forcibly killed, and then write to the database, "here's the last one we did on this run," and queue another event. You can also get some information about the identity if you're using some of the AWS authorization methods. And if it's being called from a mobile device, you can get some client context about what the device is, what operating system it's running, that kind of stuff. Finally, you have the callback function. This is also handed in from the Lambda infrastructure. The signature is that you call the callback passing in an error and then the data; this is fairly standard Node callback structure. If you only need to return an error, you can just return the error, but if you need to return actual data, you should pass null for the error and then your data.

So now let's get this function actually doing something. If we go into the leftpad directory, we can do npm init. What's happening here is we're creating a local node_modules directory and a package.json, so that we can have additional libraries that we wanna ship with our function. Everything that's in your function directory is gonna be shipped to Amazon when you deploy. Then we can install left-pad and save it so that it gets added to our package.json. Now we can start updating the handler. The first thing we wanna do is require the left-pad library.
This is happening outside of the handler instead of inside it because everything outside the handler is run once, when Amazon first launches your Lambda and runs it for the first time. So anything that's gonna take some time and is not part of the actual processing of your inputs into your outputs, you wanna do outside; that's setup stuff. The next thing we do is declare a couple of variables where we pull data out of the event that's passed in. Then we create a padded string by calling the leftPad function that we get from that npm module. Then we construct a payload that just returns the padded string as a JSON object. And finally we call the callback, with null as the first argument, because we didn't run into an error, and the payload as the second.

We also need to make some adjustments to s-function.json. All of what you're seeing right now is automatically generated by Serverless. This is where you can tell Lambda how much memory you need for your function to run; you can also set what the timeout is, that kind of stuff. And then in the handler line, you're giving it the name of the handler that should be invoked. That's constructed from the name of the file it lives in, plus the name of the function being exported. So in this case, and by default, it's gonna be handler.handler, because Serverless generates a file called handler.js, and inside of that, it exports handler as a function. The endpoint config is also auto-generated when you create your function. I told it that I wanted it to create an endpoint, so it generated one and says there's a path called leftpad. That means that once you have the URL that API Gateway generates for you, you go to that URL plus /leftpad; that's how you invoke this function via the gateway. You're gonna be doing it via GET, but you can use any of the standard HTTP methods here.
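For reference, the endpoint portion of s-function.json looks something like this. This is a sketch based on the description above; exact key names varied across early Serverless versions, so treat it as approximate.

```json
{
  "name": "leftpad",
  "handler": "handler.handler",
  "memorySize": 128,
  "timeout": 6,
  "endpoints": [
    {
      "path": "leftpad",
      "method": "GET",
      "requestTemplates": {}
    }
  ]
}
```

The requestTemplates object starts out empty; the mapping we'll add in a moment uses API Gateway's mapping syntax, along the lines of `"string": "$input.params('string')"`.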
It could be GET, PUT, POST, DELETE, even OPTIONS. And then there's a request template, and by default the request template is blank. This is basically how you tell API Gateway how to generate the event that you want, based on the HTTP request that comes in. Your Lambda function doesn't know anything about HTTP; it's not a native HTTP function. It just gets an event object, and it doesn't care how that event object was created. So you need to tell API Gateway how to deconstruct an HTTP request and create an event that can be used by your function. To do that, you can add a couple of lines that look like this. What we're saying is that we want to create an event with two properties, one called string and one called padding. The string property should come from the input params of the HTTP request, specifically the param called string, and then the same thing for the padding.

Once you've made all those changes, you can dash deploy again, and again it's gonna give you a URL that you can hit. And if you call that URL with a couple of query params that look like this, string and padding, it's gonna return a padded string, padded out to the 10 characters we asked for.

So how do we test something like this? Like I mentioned, the function is just a function, so it's really pretty easy to test. You test it just like you would any other vanilla function; you don't need to worry about the Lambda stuff or the API Gateway stuff. You can just test the function itself. We can install Mocha to drive the tests, and then we can install something like Chai to give us some nice assertions. Then we make a leftpad/handler-test file, require Chai, and require the handler file that's under test. And then we create what is basically a mock event that we're gonna pass in to the function in order to test it.
As for the context, we're not using any of the context that Lambda supplies in this case, so we can just make it an empty object. And then the callback is where we can do assertions on the return value that we get out of Lambda. We make a function with the same signature as the callback Lambda provides, and once it's called, we can say that we expect the error to be null, because we didn't expect to get one, and that we expect response.paddedString to equal the value we expect to be returned, based on what's in the test event. Once you've got that test file in place, you can just run mocha against it, and it'll run the test and show you that the function is returning the right thing.

So what about Ruby? It's kind of a bummer that Ruby is not supported right out of the box, but we can do some things to get it to run. What I'm gonna show you here is all at the proof-of-concept stage; it's not production ready. If you want to do Ruby in Lambda, you're gonna wanna harden it a little bit, make it a little more resilient to errors, crashes, that kind of thing. What I did here was serverless function create, and called it mruby-hello-world. I added two additional files into that directory. One is a Ruby script that's gonna run some Ruby code, and the other is the mruby executable itself. I used mruby for this because I had found a proof of concept that Nick Quaranto put together previously using mruby, so I knew it compiled and would run on Lambda. AWS does publish what operating system and AMI they run the Lambdas on, so if you do need to compile stuff, you can fire up an EC2 instance, compile it there, and then add that binary into your package.
You're probably gonna wanna make sure that you statically link everything, because they don't provide a whole lot in the default image. Since Lambda doesn't support Ruby right out of the gate, we kinda have to get it to work in a backhanded way: we write a Node-based handler that then shells out to Ruby and lets it do what it needs to do. To make that work, we're gonna use the child_process library, and specifically its spawn function. To create our process, we call spawn, telling it that we wanna use the mruby executable that's in the project directory, that it should run handler.rb, and that it should pass in a JSON-stringified version of the event; that lets the Ruby script know what inputs we got. We hook up a couple of handlers so that any time the Ruby script puts anything on standard out or standard error, we capture it and push it into an array, so that we have all of our outputs. And then we say that when that child process closes, when the Ruby script is done running, we want to call the callback function that we got from Lambda, so that we can tell Lambda everything has executed and we're done.

For the purposes of this demo, this is a very simple handler.rb; all we're doing is a couple of puts to output some data. So if you call this mruby-hello-world endpoint, you'll see that we get back an object with two things: a message that came from our Node handler itself, and the Ruby output, which is an array of all the data that the Ruby script put out on the command line. Like I said before, this is not production ready. There are numerous ways you could improve on it, but it does work, and it's a proof of concept that you can use Ruby with the Serverless Framework on Lambda.

So how fast is all of this stuff? It's really reasonably fast. This is a chart of the API Gateway timing.
That baseline down there at the very bottom is about 30 to 40 milliseconds, and that's the entire time that a request coming into API Gateway spends inside the Amazon infrastructure: from the time it hits API Gateway, is routed to Lambda, Lambda executes and returns, to when the response comes back out of API Gateway. But there are some pretty big spikes there, going up to like 650 or 700 milliseconds. So what's going on there? That's what Lambda refers to as the cold start penalty. Basically, any time a request comes in and AWS doesn't already have a Lambda spun up and ready to accept it, it goes through the cold start process. That's basically them provisioning a container, loading your code onto the disk in that container, and then calling your handler to get it started. So that takes a little bit of extra time. The cold start scenario can happen either when you've just pushed new code and are calling it for the first time, or when you start to get a lot of concurrent requests. You'll have one request come in, a cold start happens, and it's handling that request; then, if another request comes in before the first one has completed, you're gonna get another cold start, and then you'll have two Lambdas running. By default, you can run up to 100 Lambdas concurrently, and if you need more than that, you can apply to AWS to have your cap raised.

I set up Runscope to time these two functions from outside the AWS environment, and I was consistently getting an average response time of about 70 to 75 milliseconds, with the mruby one being consistently about five to six milliseconds slower. That's going over the wire, getting into the AWS infrastructure, and getting back. If we look at the timing for the Lambdas themselves, they look about like this.
The orange line is the mruby one, and the blue one is just a vanilla Node-based hello world. We can see that the mruby one is consistently about four milliseconds slower, and the baseline on the Node one is about half a millisecond of runtime. This is only Lambda itself; it doesn't include API Gateway or any network effects, just the sheer execution time of the handler function. And I should mention that mruby is a very scaled-down, small version of Ruby. It is not everything. If you need everything that full Ruby provides, you're gonna have a larger executable. That's gonna take longer to load onto the container, so your cold start time is gonna be worse, and your execution time is probably gonna be a little worse too, just because it takes longer to load. In the Lambda timings, you can see cold starts as well: even though this doesn't include the time your code is being loaded onto the disk, it is the first time your code is being run, so it's the first time it's being loaded into memory, and that takes a little extra time in the Lambda execution itself.

So, to wrap this up: AWS provides the building blocks. They give you Lambda, API Gateway, and all of their database services. Serverless provides structure and process on top of everything, so that you're not coding in the browser or trying to manipulate their APIs directly yourself. And then you provide the magic. So thanks.