Let me know if you can hear me now. I presume you can. Can you hear me? Yeah, I can hear you now, Renato. OK, I'll switch to a better microphone. Please let me know if you can hear me next. OK, I don't hear you. OK, now I hear something. OK, you're back. All right, I'm having a problem with my main microphone, I don't know what's going on, so I switched to the webcam microphone. Is it good enough for you? Can you hear me well? I can hear you pretty clearly, yeah. OK, all right, great. All right, thanks, Eoin. I'll switch on my webcam just to test it. OK, I can see you. Yeah? Yeah, all right. OK, great. That's good. So let me share my screen with the presentation. OK, can you confirm you see my screen, the presentation? Yeah, I can see the presentation perfectly, Renato. That looks good, yeah. All right, great. So everything seems to be working. Let me stop sharing my screen. Maybe you can share yours just to make sure it's working as well. Yeah, sure, let's do that. OK, so let me know when you can see the presentation. Yeah, I can see it. Looks good. OK, good. OK, perfect. And when I stop sharing, then you can see my code? Yes, correct. You can see my browser? Yes, I do. OK, great. Perfect, everything is working. OK, what about my webcam? Yeah, let me turn it on. It says "you cannot start your video because the host has stopped it" — you need to enable it. OK, I don't see anything here about blocking your webcam. What's that about? I have made you the host, so you should be able to. Yeah, OK, that worked. OK, perfect. So everything is working. So we have 20 minutes to go. I'll leave everything connected here and just switch off my webcam. We have a chat on the Zoom platform as well, so if you need to send me anything in private, there's a chat icon somewhere on your screen, probably at the bottom, and you can select which person you want to send the message to. So you can just select my name and only... yeah, correct. So I will be able to.
Yeah, so during the presentation we'll be receiving questions in the chat or in the Q&A section of Zoom. So I'll monitor both and call on you during the presentation when we have questions from the audience that would be interesting for us to address. Good. Yeah, I hope we do. Or would you prefer to address all of them at the end? What do you think? No, I think it's good to address them as they come, if we can make it more interactive. All right, OK. I'll just call on you when someone sends one in. Yeah, feel free to interrupt me, that's all good. OK. Excellent. There was one thing I forgot to mention, actually. I don't know if it's really important, but let me just show you. So we talked about this signing library, the aws-signed-axios library. I'll also cover it — obviously, we'll talk about it during the presentation. There's another tool related to it — actually, sorry, not a library but a command-line utility — called aws4curl. Actually, sorry, it doesn't use aws-signed-axios itself, but it uses the same aws4 module to sign requests with your credentials. It lets you make signed curl requests instead of plain curl requests. OK, nice. To an API endpoint, yeah. So for example, you can run a command like this. This is an API Gateway endpoint: if you were to do a normal curl, you'd get a 403. And if you do aws4curl, it will use your AWS environment variables — your profile or your access key — to sign the request, and then you get a 200 with the actual response. So it's useful for developer testing of the API. Yeah, I just wanted to mention it so you were aware that I might bring it up. OK, yeah, it's a handy tool. Nice. I will add it to the repo as well. It's aws4curl — it's on GitHub, right? OK. Yeah, it's an npm module, so yeah. It's an npm module. OK, great. All right, perfect. OK.
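The difference being described can be sketched roughly like this — the endpoint URL is a made-up placeholder, and the exact CLI flags of the tool may differ, so treat this as an illustration rather than exact syntax:

```shell
# Plain curl against an IAM-protected API Gateway endpoint: the request is
# unsigned, so API Gateway rejects it with a 403.
curl https://abcd1234.execute-api.eu-west-1.amazonaws.com/dev/hello
# {"message":"Missing Authentication Token"}

# aws4curl signs the same request with AWS Signature Version 4, using the
# credentials from your environment variables or AWS profile, so the call
# is authorized and returns the real response.
aws4curl https://abcd1234.execute-api.eu-west-1.amazonaws.com/dev/hello
```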
OK, so we'll start 15 minutes from now. Good, great. OK, awesome. Cheers. Yeah. Hello, everyone. Good morning, good afternoon, or maybe good night to some of you, depending on your time zone. Hi, Eoin, can you hear me well? Hi, Renato. Yeah, I can hear you perfectly. OK, awesome. I presume you can also see my screen with the presentation. Yeah. OK, great. I cannot start my webcam now, but it's fine — Eoin is going to be leading the main presentation, so let's go with his webcam. So everyone, thank you very much for joining us for this tech talk. We'll be talking about securing APIs on AWS Lambda. My name is Renato, and I'm a developer advocate at Dashbird. My main experience is in building serverless back-end systems for various large-scale services, from data mining to machine learning. And so I will now defer to Eoin, who's going to introduce himself. So, Eoin. Hi, everybody. Yeah, so my name is Eoin Shanaghy, and I'm the CTO at a company called fourTheorem. I can explain a little later what we're about, but I'm a software developer and architect who's been working for about 20 years across a lot of different industries and different architectures, and my focus in the last couple of years has very much been on cloud-native serverless architectures. Awesome. So Dashbird, the company I work at, is a serverless monitoring and debugging service. It provides automated fault-detection algorithms for serverless applications, so regardless of which runtime you use — Node.js, Python, Java, whatever — we can automatically detect errors in your applications and send alerts. You can customize your performance policies so that you get alerted, for example, when an API starts performing in some way that's not expected by you and your team. We provide real-time alerting, and we integrate with AWS X-Ray. There are tons of features, so please go ahead and visit dashbird.io after this tech talk and check it out.
And fourTheorem is a cloud-native consulting partner. Eoin, would you like to say more about fourTheorem? Sure, yeah. So fourTheorem, we've been around for about two years, so we're quite a young, small company. But we're a consulting partner, and we're focused on, I suppose, two primary things. We deal with a lot of different types of systems, but our strategic focus is really on serverless and machine learning — on using the latest generation of cloud technologies, like managed services, serverless, and AI managed services, to help modernize applications. We don't have any products; we're a consulting partner. So we help customers either build products from scratch or, actually more often than not, take existing systems that might have a rich history and help bridge the gap between traditional technologies and the next generation in serverless, DevOps automation, and machine learning, to really help people go faster. Awesome. So everyone, after the tech talk, please go ahead and visit the Dashbird and fourTheorem websites and Twitter accounts, and please thank them for supporting us in promoting this tech talk today. So we are going to be covering proven and scalable architectural patterns — how you can structure your APIs, using API Gateway or an Application Load Balancer, for example, in order to implement scalable and secure APIs on top of AWS Lambda, whether for your cloud back ends, to serve front-end applications, or maybe mobile applications. Whatever you want to serve from a back end, these are some of the best patterns you can follow in order to have secure and scalable API back ends. We have a bunch of resources connected to this presentation. After the tech talk, we're going to save the recording and also share it in a repo — you can go to github.com/dashbird/tech-talks, and you'll find everything we mention here, including the recording and the presentation slides, all right?
So I'm going to stop sharing my screen now, Eoin, so that you can start sharing yours, and we'll move on to what's actually really interesting for everyone. OK, great. Renato, just let me know if everyone can see my screen. I can see it. OK, great. So this we've already covered about fourTheorem. Just to move on from that: I mentioned that machine learning and serverless are two of the things we're focused on, so it's worth mentioning that I have written a book together with the CEO of fourTheorem, Peter Elger, about these two topics — building serverless applications with AWS. We'd love for people to check it out. And if people are interested, Manning has given us a discount code for all of their titles, which is on the screen right now. If you're interested in the book itself, I also have some free codes, so if people want to reach out after the tech talk, I'd be happy to see if I can get them a code for the book. There's my email address and Twitter handle, so you can find me. And yeah, even if you're just interested in serverless applications, it's a hands-on engineering guide to building serverless on AWS, but it also covers a lot of the AI managed services you can find there. So then, just to set some context for today's talk: one of the other things we've been busy with, apart from the book and all of our customer projects, is some open source work. What we've created over the last year or so is an open source serverless starter project. This is something we've put quite a lot of time into and are continuing to develop all the time. It's designed as a resource for people who are getting started with serverless architectures for the first time and either want a template to start their own commercial, production-grade project, or even just want a learning resource. It's on GitHub, and I'll refer back to it a few times in the talk, so you'll have a chance to explore it in a little more detail.
It will set a bit of context for our discussion today around APIs. The project is called SLIC Starter, so you'll find it there on GitHub, and there's also a short link, slic.app. So why did we go and create this project? Well, I want to talk a little bit about what I suppose I'd call the problem with serverless — and it's not as bad as it sounds, really. As with any new technology hype, there's a huge promise with serverless, but at the same time it's still quite early days in its adoption. And as with any new technology wave, that means there are a couple of associated problems with it, and while we build many serverless applications, you really have to be aware of those challenges before you start adopting serverless at scale. Otherwise you might end up with a lot of frustration, disappointment, and delay as you get started. What the problem really boils down to is that there's a huge amount of choice when it comes to building serverless applications — between clouds, and between all the different ways you can configure services in any given cloud provider in order to get your application into production. It's actually a minefield of many different decisions you have to make: what's the best way to structure your project, which services you should use, how you should configure them, and what the best practices are in terms of architectural patterns. This can really slow you down if you don't consider them all upfront. And it's not necessarily about making the ultimate best decision upfront; it's about making a decision so that you can just get to production quickly and then start learning from your experiences. So we went through that kind of cycle a few times on serverless projects, and after a while we decided to put together a template project to describe and really illustrate all those best practices, to help us accelerate the start of our new projects.
So we put all of those best practices into a template repository. We wanted, I suppose, to make 80% of the decisions you have to make when you start a new project. We wanted it to be production-grade, so it really had to cover all of the necessary bases. And then we also decided to make it open source, because making it open source doesn't just showcase our work, and it's not just about providing a learning resource — it also allows us to get feedback on it from the wider community and everyone else doing serverless development. These are some of the things we tried to cover off in the SLIC Starter project, even things like how you structure your project. You know, growing serverless projects are typically like microservices projects, in that they're composed of many fragmented, discrete components — different microservices, and different cloud resources, each of which is a configuration of a managed service in AWS, for example. So you need to think about how you're going to structure your project: whether you'll use one repo or multiple repos. Then there's how you do continuous deployment in a cloud-native way, how you do centralized logging, what the security considerations are, observability and monitoring, local development, and how to do integration testing. SLIC Starter has integration tests and UI tests as part of the project, so you can see how all of that works end to end in a continuous deployment pipeline. Then we also have things like domains and certificates. For many projects — imagine you're launching a SaaS product — you want to publish it to a domain. You might have a domain for the front end and a domain for the API. Then you'll want to secure those with an HTTPS certificate and manage the deployment of all those things.
That can actually get quite complex, and you can get really bogged down in the weeds trying to figure out the best way to configure it. So we have an answer for that, and that's something we can cover today. Events: how do you do point-to-point events? How do you do pub/sub events? There are a lot of options there, but when you're getting started, what's the best way to begin? Then you can start thinking about evolving that. And then, coming back to today's topic: when you've got APIs in between services, how do you discover them? If you've got two modules within your application, how do you separate them so you don't have tight coupling that might cause you problems down the line, while also deploying them in such a way that services can discover each other easily, can be confident about the availability of other services, and are obviously secured effectively as well? You've got things like user accounts, the front end, and data access — all of that we've included in this project. The platform itself is actually deployed from the open source repo to sliclists.com, and it's quite a simple SaaS application for managing checklists. You can sign up — this is the React-based front end — and you can go in, create checklists, create items in those checklists, and mark them as done. One of the features that illustrates some of the architectural patterns is a very basic one: when you create a new list in the system, it triggers an event that ultimately results in you getting an email, which just welcomes you and congratulates you on creating a checklist. We can use this simple feature today to describe how some of our separation of concerns is architected within the system, and how we use a secure internal API gateway to achieve that. So this is, I suppose, the full diagram for the SLIC Starter application right now. We're not going to go through everything in detail.
We don't have to, but we are going to focus on one particular aspect of it, which is the internal API gateway. At the top, you see the front-end web application, which is our React application. We're using the Amplify SDK for two things: for authentication and authorization, and then for API invocation. The Amplify JavaScript SDK is a wrapper around Axios, the Axios HTTP client. And we're using Cognito user pool authorizers for the public APIs. People may already be familiar with that from many of the canonical serverless tutorials: you have a DynamoDB database with a Lambda function and an API gateway on top of it, and on the front end you secure that API gateway with Cognito user pools. It gives you a very nice, feature-complete, quick way to deploy an API. If you're doing a single API, that's fine; things get a little more complicated as you grow a real system and end up with multiple API endpoints, but we can talk a little bit about that today. So, going back to the welcome message feature I mentioned: when the front end uses the checklist API to create a new checklist, that record is persisted in the DynamoDB database, but the checklist service will also send a lifecycle event to the event bus. We're using EventBridge for the event bus in SLIC Starter. It's a very nice way to do pub/sub events, especially arbitrary events where you just want to publish an event and you don't necessarily have a consumer for that event yet. In our case, we have a welcome service, which is a separately deployed service within the application, and it listens to those checklist lifecycle events. And when does that occur? It listens for the message saying a checklist has been created, using a CloudWatch event rule. And when it gets one, it needs to send that user an email — but the checklist event that comes in doesn't have the user's email address.
It just has a user ID, because that's all a checklist has. So the question then is: how do we go and get that information? Well, we've got a separate service that's dedicated to functionality around user accounts, so it can go off and retrieve the email address for a user. In this case, what we're using is an API: the user service has an API which is only designed for internal use. We only want it to be consumed by services that need to access user information — and obviously, user information is something you want to protect, so it's a good use case for this. Once the welcome service gets the email address, it can construct the email message, put it on the queue of the email service, and that results in the email being sent. So this is the story in SLIC Starter, and people can go in, have a look at the repo, and see what this looks like. But today I've got a simplified version of that internal API and an internal API caller, which we can use to showcase what this is all about. Just before I do that, I wanted to set another little bit of context. It is worth asking: why do you need an internal API gateway? Because you don't have to do it this way — this is just one option. I think it's quite a nice pattern to follow, but there will be arguments against it. Some people, rather than having an internal Lambda function calling an internal API that invokes another Lambda function, will prefer to do it more directly, with a Lambda that invokes a Lambda. It really depends on your preference and other requirements. One good reason to put an API gateway in front of it is that you might like some level of decoupling: you don't necessarily want your welcome service to know that the user service is implemented in AWS Lambda. I mean, it doesn't have to be — it could just be an HTTP endpoint.
So you might just decide that, for most of the services in your application, HTTP is the boundary for synchronous services where you're doing a lookup like this. You might also want to take advantage of features in API Gateway like caching or validation. You can also use custom authorizers, which you can put into your API gateway. And you could have something else behind that API gateway that isn't Lambda, or you might change it in the future without having to change that public contract. For example, instead of having a Lambda function call DynamoDB, you can connect API Gateway to DynamoDB directly — and if the API gateway is the entry point into that service, you don't have to change the invokers; it's all just done within the service. Or you could change it so that it actually proxies to an external HTTP origin server. So you've got quite a lot of flexibility with API Gateway. Those are the kinds of reasons I would put forward for using an internal API gateway. Eoin, I just wanted to let everyone know: if anyone has any questions or comments about the presentation, you can send them in the chat box on the Zoom platform or in the Q&A section as well. So feel free to send your questions and we will address them as soon as possible. Yeah, absolutely, sounds good, Renato. I mean, any questions — I'm very happy to be interrupted, so please, please do. So this is how we're going to construct our internal API, and we'll go through it step by step. There is an associated article, and Renato has already included it in the GitHub repository we mentioned. So this is our internal API. In this case, we're just going to create a very simple internal API: a Lambda function with an API gateway in front of it. And then we've got a separate serverless application which wants to invoke this internal API. They'll both be deployed into the same AWS account.
But we just need to figure out how the caller will discover the internal API and how it will then invoke it in a secure way. So we're going to go ahead and build it. Again, the code for this particular simplified example — the URL is here. I'm going to go through the code now, but it'll also be available for everybody in the GitHub repo. So let me just show you the structure here. This is our repository — you can see this is the repo I just referred to. If we go into the internal API itself, let's first take a look at the Lambda function. This is as simple as it gets: the Lambda function is just going to return HTTP status code 200 with a JSON message, "This is a response from an internal API". So let's go ahead and look at the configuration of the serverless service — we're using the Serverless Framework. OK, so this is our internal API, and there are a couple of things to point out here. This is the endpoint type: we're configuring it as a regional API Gateway endpoint. There's no good reason to have an edge endpoint in this case, but that is one of the other options. If we wanted to be globally distributed, we could use edge, and API Gateway would seamlessly put a CloudFront distribution in front of it, which would allow our API gateway to be distributed. But even for public API Gateway endpoints, that can actually incur some delay, because if a request needs to actually talk to your API, it has to go to your region anyway. So in general, there's not really a good reason to use edge unless you're taking really full advantage of caching at the edge. This, then, is the configuration for the Lambda function itself, and the important bit is down here, where we configure the HTTP proxy event. The Serverless Framework is going to take this configuration and create an API gateway for us.
And this simple line here, the authorizer, is configuring how the API is protected. There are a few options here. We could create a custom authorizer, which is another Lambda function that will check whether the incoming HTTP request is in some way authorized to use the API. We can also set up a Cognito user pool authorizer this way. In this case, we're just saying AWS IAM, and that means the API is protected by IAM, and any caller needs to be explicitly authorized using an IAM policy. The last section of the serverless configuration is the service discovery piece. This is typically how we do service discovery for an internal API: by deploying a Systems Manager Parameter Store parameter for the URL. It takes the generated ID of the REST API from API Gateway and constructs the URL of that endpoint. We don't create any domains or aliases for internal API endpoints. If we were doing a public API, then my preferred way of doing service discovery is obviously using a domain and a naming convention, like api.sliclists.com/checklist — that's generally much cleaner for a front end. But for an internal API, we take the generated URL and put it in Parameter Store, and then the Lambda function that needs to look it up can take it from Parameter Store, cache it for as long as it wants, and retrieve it whenever it needs to. We'll see that when we go through the caller. So let's take that API and make sure it's deployed. I'm just running serverless deploy here and deploying it into the account. It's going to create the Lambda function, configure the API Gateway endpoint, and create the Systems Manager Parameter Store parameter as well. We've actually already deployed it, so I'm going to go into the AWS console and just show you: here's our API, and we can check that we've got one method configured here, and it uses IAM authorization. So that's taking a few seconds to deploy.
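Pieced together, the parts of the serverless configuration just described might look roughly like this — service name, paths, and the parameter key are illustrative, not the exact ones from the repo:

```yaml
service: internal-api

provider:
  name: aws
  runtime: nodejs12.x
  endpointType: REGIONAL       # regional endpoint; no edge/CloudFront distribution

functions:
  internalEndpoint:
    handler: handler.handleEvent
    events:
      - http:
          path: /
          method: get
          authorizer: aws_iam  # callers must be granted execute-api:Invoke via IAM

resources:
  Resources:
    # Service discovery: publish the generated endpoint URL to SSM Parameter Store
    InternalApiUrlParameter:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /internal-api/url
        Type: String
        Value:
          Fn::Join:
            - ''
            - - 'https://'
              - Ref: ApiGatewayRestApi    # logical ID the Serverless Framework gives the REST API
              - '.execute-api.${self:provider.region}.amazonaws.com/${opt:stage, "dev"}'
```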
I know it's already been deployed, because I checked this beforehand. This is the endpoint that's generated for the API — you can see the Serverless Framework has printed the endpoint URL out to the console for us. So let's just try to curl that and see what happens, by way of illustration. That gives us a 403 Forbidden, and the message coming back says it's missing an authentication token. So then let's look at how we provide that authentication token — but let's do it in the context of a real service, not an external service, but a service within the same stack that needs to access that internal API. Within the same repo, we've got another module here, which is the internal API caller module, and it's got one function, the caller. So let's have a look at the code for that caller. This isn't the final version of the caller — I've taken out all of the authentication piece for now. What we're doing here is, on each request, looking up the URL of the internal API and then invoking the HTTP client to make a GET request. By the way, we wouldn't get this parameter on every request in a real production Lambda function. We would retrieve it outside the Lambda handler and cache it in some way, and we'd probably use the Middy framework to do that in a neat way. But just to show you what that's like and what kind of results we get, let's deploy that caller as well. OK, while that's deploying, let's have another look at it. We're using the Axios HTTP client, and we're just calling GET with the URL that came from Parameter Store, which is the same URL I just used for the curl request. OK, so that's been deployed. So let's go into the Lambda console, to where that caller function is defined, and run it. I'm just going to run it here with an empty event — it doesn't need any inputs at all.
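The "retrieve it outside the handler and cache it" idea can be sketched generically. Here the SSM call is replaced by an injected loader function so the pattern is visible without the AWS SDK — in a real function you'd pass something like a wrapper around `ssm.getParameter(...)`, or use Middy's middleware instead:

```javascript
// Module-scope cache for a looked-up value (e.g. an internal API URL from
// SSM Parameter Store). Because Lambda containers are reused across
// invocations, anything cached at module scope survives between requests,
// so the lookup only happens once per container, not once per invocation.
function makeCachedLookup(loader) {
  let cached; // shared promise for the value, reused across invocations
  return function lookup() {
    if (!cached) {
      cached = loader(); // only the first call triggers the real lookup
    }
    return cached;
  };
}

module.exports = { makeCachedLookup };
```

A handler would create the cached lookup once at module scope and `await lookup()` inside each invocation.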
And it's going to try to call that URL. OK, so we get a failed message back. If we look at the details and scroll down through the logs, eventually we see "missing authentication token" — the same message. So I'm going to... yeah? Eoin, perhaps it would be interesting to address one question we received from Manuel: what are the benefits of using AWS IAM instead of an API key in this particular case? Yeah, this is a really good question — thanks, Manuel. I've actually been through the experience of trying all the different possibilities for internal API authentication (not authorization). An API key seems like a good option, actually. Basically, the way API keys work is that you can specify that an endpoint mandates an API key, and API Gateway will generate those API keys for you. The difficulty I find for an internal API is effectively sharing those keys with the services that need to use them — there's no real neat way to do that. I have tried using custom CloudFormation resources to publish those keys, and it's a little bit messy. I've talked to a lot of people at AWS about it, and they kind of recommended against using API keys for internal APIs. Really, I think API keys are better suited to when you need to make a public API available to tenants of your system, and for that they work really well. I think IAM is just cleaner for this purpose, for internal APIs. So I hope that answers the question, but if it doesn't, please feel free to persist and ask me again. OK. So, since I had taken the security out of this, what I'm going to do now is go to the full version of the function, which has the correct HTTP request signing. Our request failed because we were missing the authentication token — that's because we didn't sign the request. If you're using IAM authorization on an API, the API request has to be signed.
In the same way that every request you make to any AWS service needs to be signed with an AWS version 4 signature, that's what you need to do for your HTTP requests that require IAM authorization. There are a couple of libraries available with which you can construct those signatures and then add them to your HTTP headers. I wrapped up all that functionality and created a library called aws-signed-axios. It's basically a wrapper around the Axios HTTP client. So if you're using Node.js as your Lambda implementation language, as the majority of people are, you can use this library. It's available on GitHub, under my own account — aws-signed-axios on GitHub. What it does is use the AWS SDK to take the credentials from the role your Lambda function is running in, sign the request, and add the signature to the HTTP headers for you, seamlessly, under the hood. And there are other options for different languages. Renato, you have one for Python, right? Yes, it's something I needed some time ago. For Python developers, the Boto3 SDK from AWS has one issue: it implements blocking code when you fire a request to invoke an AWS Lambda or an API Gateway endpoint, for example. And when you have to invoke lots of them in parallel, concurrently, it's a problem, because your code fires one request, Boto3 blocks the entire execution, and then it becomes an issue. So I implemented a very simple way to work around this with the aiohttp library — a great open source library for Python. It signs the requests and sends them using aiohttp, and you can send hundreds of requests in parallel. It's in the repo that I shared at the beginning of the presentation as well. Excellent, good stuff. OK, so I just deployed the updated version of the function, which uses aws-signed-axios to sign the API requests.
So this is basically the only change: we're swapping Axios for the aws-signed-axios library, and that's what we're using to make the request. Let's give it another try in the console. I'm going to run the test again — and it fails again. So let's have a look at the failure. We get a 403 Forbidden, but if we look, the message is a little bit different. We could see this better in the CloudWatch logs, but we can actually see here that the role — the Lambda function's role — is not authorized to perform the IAM action: the action is execute-api:Invoke, and then we're given the resource. This is expected when you think about it, because the caller needs to have explicit permissions. Even though it signed the request, this role hasn't yet been given permission to explicitly invoke this API. So that's the last thing we have to do. Let's have a quick look at our IAM policy. These are IAM role statements which get included in the role the Serverless Framework uses when it deploys the Lambda function. Right now, it only has access to the URL parameter in the Parameter Store. What we need to do is give it explicit permission to access our API, and this is what it looks like. We grant the execute-api:Invoke action, and the resource includes the HTTP verb of the API method you want to invoke. The syntax of the resource is like this, with our region and account ID. Just for simplicity, I'm using a wildcard on the generated API ID, and then we've got the stage, then the verb, and we can have the path after that — so you can be explicit about it or use a wildcard. I'm going to save that updated version and deploy, so we get an updated version of the IAM role. Since Manuel asked about API keys, it's probably also worth mentioning the other options for securing the API. So, we mentioned custom authorizers.
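The permission being granted here would look something like this in the caller's serverless.yml — the region, account ID, stage, and parameter name are placeholders, not the real values from the demo:

```yaml
provider:
  iamRoleStatements:
    # Existing permission: read the internal API's URL from SSM Parameter Store
    - Effect: Allow
      Action:
        - ssm:GetParameter
      Resource: arn:aws:ssm:eu-west-1:123456789012:parameter/internal-api/url
    # New permission: invoke the IAM-protected API. The resource ARN encodes
    # api-id/stage/HTTP-verb/path; a wildcard api-id is used for simplicity.
    - Effect: Allow
      Action:
        - execute-api:Invoke
      Resource: arn:aws:execute-api:eu-west-1:123456789012:*/dev/GET/*
```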
So you can also implement whatever level of authorization you want using a custom authorizer, and then you can use JWTs. API Gateway has performance benefits around that: it can cache the authorizer results so it doesn't have to invoke your authorizer Lambda every time. You could also use VPCs, and this is covered in the article that is linked from the slides and also from Renato's GitHub repo for this tech talk. One of the disadvantages at the time the article was written was that VPCs were really ruled out because of the massive cold start delay. That's been resolved recently, so it's more viable now to put Lambdas in VPCs. I wouldn't necessarily reach for it unless you need VPCs for other reasons in your application. It is a network level of security, so you do get that comfort from it, and for compliance reasons it might be necessary to have that network level of security. But it does come with additional complexity; there's no way to avoid that. When you have a Lambda in a VPC and an API Gateway in a VPC, and then you need to call that from either the same VPC or from another VPC, there are many additional resources you need to create: VPC endpoints, et cetera. So if you need that level of network routing and security, it's fine. But in general, if you're building really serverless-first applications, then the usual way to secure these things is to have the API available publicly but secured using IAM. And by the way, in the playground repo that we're using for the internal API, there is also a VPC example. So if you're interested, you can go in there and take a look at it. It's more verbose, right? Here we create all of the VPC resources, but we have to create the VPC endpoint too. It's not as neat, because with service discovery you need to actually discover two URLs: you need to use the URL of the endpoint.
And you also need to know the domain of the API endpoint, because the Host header has to be different from the hostname in the URL. If you're interested, you can look into it in more detail in those code examples, or feel free to ask me afterwards. It's interesting to look at, but I generally try not to recommend it if you can avoid it. Let's go back and invoke this now, given that we have the request signed with the credentials of the role which has explicit permission to invoke that API. Then we see the API invocation is successful and the output shows the message that was retrieved from the internal service: "This is a response from an internal API." So that's the simple example. And if we go back to the diagram here, this is really an illustration of what we've got. It's quite simple: you've got a Lambda with an API Gateway in front of it, configured to use IAM. Requests into that should be signed with an AWS Signature Version 4 signature, and the Lambda function that invokes it has a role with permission to invoke that API. The great thing about it really is that it uses IAM, and good IAM discipline is unavoidable with AWS. It's something you spend a lot of time on, but it gives you very powerful, fine-grained access control, so it actually makes sense to use it for internal APIs as well. So there's one other topic. Renato, is it okay to move on beyond internal APIs and talk about some other API Gateway patterns here? Yeah, I think that would be great. I think most people will be interested in both implementations. Most systems have some sort of internal API, certainly, and probably, I don't know, 99% of systems will have some sort of public API as well. So yeah, the internal API one is one where you could spend a lot of time figuring out what the best approach is, and I think Manuel's question was a good example of that.
You know, there are a lot of different ways to try to approach it, and once you pick IAM, you can see the code is quite simple. There's not much complexity to it once you know what you need to put in place. This story is a little bit different. I also wrote another blog post on this, and the URL is here; Renato's also going to share it. But this is about public APIs, and it can be a little bit tricky to figure out. The use case here is that you've got two services in your application which both expose a public API, and rather than having those ugly generated API Gateway URLs, you want to publish them at api.yourdomain.com. Figuring out how to do that is quite important, right? Because the normal case is that you start off with one API, and you can put a certificate, a domain, and DNS entries in front of it. But then you have a problem when you go to the second service, because you actually have to construct the API a little bit differently in order to facilitate multiple paths on the same top-level domain. So I just wanted to share how we typically do this, so that people can adopt this practice upfront and really put it in place from the start. SLIC Lists, the SLIC Starter application, uses this. In order to see how you do it, you need to look at four services: we've got the cert service, the API service, and then the two services that publish an API. So let's have a look at those in code. If we go into certs first, we've got a serverless configuration in here, and we're using Route 53 for DNS and AWS Certificate Manager for generating the certificates. It's quite possible that people are using other services for DNS, but you still need to think about the same principles when it comes to setting up API Gateway. The public hosted zone is set here; that's the hosted zone for your top-level domain.
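As a rough picture of what that standalone cert service provisions, a sketch might look like the following. The domain, hosted zone ID, and resource names are placeholders, not the actual SLIC Starter configuration.

```yaml
# Sketch of a standalone certificate stack (placeholders throughout).
# Note: deployed to us-east-1 - see the CloudFront/edge restriction below.
resources:
  Resources:
    ApiCertificate:
      Type: AWS::CertificateManager::Certificate
      Properties:
        DomainName: api.example.com
        ValidationMethod: DNS
        DomainValidationOptions:
          - DomainName: api.example.com
            HostedZoneId: Z1234567890ABC   # your Route 53 public hosted zone
```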
And then we provision a certificate for the front-end website and also a certificate for the API domain. That's deployed as a standalone service. It's worth mentioning as well that this is always deployed into us-east-1, because we're using a CloudFront distribution for the front end and an edge-optimized API Gateway endpoint for the API. If you want to use a certificate with CloudFront or with API Gateway edge-optimized APIs, you must create your certificates in us-east-1. That's a hard restriction from AWS. So then we open up the API service. The API service is where we define the domain name. This is a specific resource within the API Gateway service that says we want to configure a domain name, so api.slicklists.com or whatever it is for your application. We then give it the certificate: API Gateway always has to use HTTPS, so if you want to use a custom domain, you must configure a certificate. And then we can create some DNS entries for that API Gateway domain name. This CloudFormation resource will output a distribution domain name, which is a generated name like something.cloudfront.net, and we create DNS records which alias our domain name to that API Gateway endpoint. So that basically sets up the top-level domain endpoint for the API, but we haven't actually created any APIs yet. Then you go into one of the specific services, like the checklist service, for example. This is the resource configuration for the checklist service, and the Serverless Framework is going to generate an API Gateway endpoint for you. What you then have to do is create a base path mapping that will take requests on your domain with /checklist in the path, and you need to tell it which domain name to use. You don't need to refer to the resource using an ARN or anything like that.
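The domain name plus DNS alias setup described above might be sketched like this in CloudFormation resources (again, domain names, certificate ARN, and hosted zone are placeholders; the CloudFront hosted zone ID is the fixed one AWS documents for alias records):

```yaml
# Sketch: custom domain for API Gateway plus an alias record pointing at it.
resources:
  Resources:
    ApiDomainName:
      Type: AWS::ApiGateway::DomainName
      Properties:
        DomainName: api.example.com
        # CertificateArn (us-east-1 cert) is used for edge-optimized endpoints
        CertificateArn: arn:aws:acm:us-east-1:123456789012:certificate/abc-123
    ApiDnsRecord:
      Type: AWS::Route53::RecordSet
      Properties:
        HostedZoneName: example.com.
        Name: api.example.com
        Type: A
        AliasTarget:
          # The generated *.cloudfront.net name output by the DomainName resource
          DNSName:
            Fn::GetAtt: [ApiDomainName, DistributionDomainName]
          HostedZoneId: Z2FDTNDATAQYW2   # fixed hosted zone ID for CloudFront aliases
```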
You just need to tell it which domain name is used in the path mapping. Once you do that, all requests to api.slicklists.com/checklist will come into that front endpoint in API Gateway, and API Gateway will know from the base path mapping which API it needs to route the request to. And similarly, in the other service, the sharing service, we've got exactly the same thing: if I go down to the end, there's also a base path mapping here with /share. So they're both using the same domain name. If you set it up this way, it's probably good to isolate these into separate stacks like this. Typically you might put everything in the one stack, but then it gets difficult to re-architect it. If you put everything into separate deployable units like this, then you can say, okay, this is my configuration for APIs in general, this is our endpoint, these are the certificates, and then every time you add a service that needs a public endpoint, it adds its base path mapping. With the Serverless Framework there is one gotcha. If you look in SLIC Starter, there is an issue, which I think is still open in the Serverless Framework, where you need to make sure that there aren't any unresolved dependencies by putting in a dummy resource, essentially. So that's just something to watch out for if you're interested in this. And that's it on the public API Gateway endpoints. There's a lot of functionality in SLIC Starter, but you've probably got some insight into it now: the API service here, the checklist service, and the sharing service are your public API endpoints, you've got your certs configuration, and then between the user service and the welcome service you've got the internal API Gateway story that we just discussed.
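A base path mapping like the one described is a small resource on its own. A sketch for the checklist service might look like this, where the domain name and stage are placeholders and `ApiGatewayRestApi` is the logical ID the Serverless Framework generates for the service's REST API:

```yaml
# Sketch: map /checklist on the shared domain to this service's generated API.
resources:
  Resources:
    ChecklistBasePathMapping:
      Type: AWS::ApiGateway::BasePathMapping
      Properties:
        DomainName: api.example.com    # the shared custom domain
        BasePath: checklist            # requests to /checklist/* route here
        RestApiId:
          Ref: ApiGatewayRestApi       # logical ID generated by the Serverless Framework
        Stage: dev
```

The sharing service would have an identical resource with `BasePath: share`, which is how two independently deployed stacks share one domain.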
So it's the same as the simplified internal API Gateway here. I think that's it on the two stories we wanted to cover today. That's everything. Have I missed anything, Renato? I don't think so. Very good stuff. I think that's pretty much everything we had in mind. There's one thing I just remembered, though. The API Gateway service from AWS is an awesome service, very good, because it makes it easier for you to implement an API, obviously. But in some cases it's not feasible for you to use API Gateway. For example, it has a limitation on the duration of a request: it has to complete within 29 seconds. So for internal API calls that you know might take more than 29 seconds, API Gateway cannot serve your needs. In those cases, maybe an Application Load Balancer could serve that purpose. It's also quite easy to implement and it integrates very well with AWS Lambda. You can use IAM permissions as well with the Application Load Balancer, so all of the architectural patterns we covered for API Gateway can be implemented with an Application Load Balancer too, if that suits your needs best. That's the one thing I remembered, in addition to what you covered. It's a good point to mention, because I think it's quite recent that ALB support was added for Lambda, relatively recent at least. The timeout thing is interesting: if you've got long-running API requests, API Gateway's limit is 29 seconds. Also, if you've got some existing legacy infrastructure, say existing EC2 instances, it might make sense to use an Application Load Balancer anyway: some of the paths will route to Lambda, some will go to EC2 or to Fargate or Kubernetes or something else. It's good to be aware of what's missing, though. You don't have things like request validation, which API Gateway will give you built in. You may or may not need that.
But also, if you really want to use plain HTTP and don't want to deal with HTTPS, domains, and certificates, ALB allows you to do that, whereas API Gateway does not. So it's up to you, really. Depending on your use case, you can also get better cost. So from a business perspective, if API Gateway costs are an issue, it may be worth looking at ALB for that reason too. Awesome. So one thing, Owen. Joshua, thanks for bringing that up: he tried to use your coupon code to buy your book, but he said the code expired on August 19th. So you probably should renew the code. Yeah, if that's the 40% code, then I'll publish another one. Okay. I'll tweet it out, actually, after the call. Or were you going to suggest something else there? Yeah, so go ahead and follow Owen on Twitter so that you can get that coupon code. Yeah, this is my Twitter handle; I'll share the 40% code there. But Joshua, if you want to drop me an email, I'll get you a free code, because the code I showed on screen is a 40% discount code, so I can get you a free one, I'm sure. All right, awesome. So, last chance: does anyone have any other questions or comments for Owen about the topics of the talk? Yeah, thank you, Joshua. Joshua is sending thanks. Albert, thank you very much. We appreciate you guys joining us. If you have any questions or comments about the implementation or the SLIC Starter project, Owen would certainly be happy to help you out by email, Twitter, or any other medium, and I would be glad to assist with anything I can as well. We will be sharing the recording of this talk and the presentation slides on that GitHub repo, so you can follow the repo on GitHub to receive alerts when we publish there. All right. Okay, thank you very much, Owen.
Thank you so much for joining us. It was very, very good stuff. I really believe it's important for us to get the word out about best practices and really good architectural patterns for AWS Lambda, API Gateway, all this stuff. It's growing a lot, and some people could be lost on how to implement these things in a secure and scalable way. So this was very informative, very useful for everyone. Yeah, listen, thanks very much, Renato. I really enjoyed the presentation and I hope people enjoyed it. Thanks very much for joining and listening, and I hope to hear from you afterwards. If anybody wants to follow up, I'm happy to chat about any of the topics or point you in a particular direction in SLIC Starter, and also to hear any tips if people have further suggestions. This is always a moving target, the serverless world, so we all need to keep learning. Yeah, it's evolving very fast. All right. Thank you guys, have a great day. Bye-bye, everyone. Bye.