Good afternoon, everyone. Welcome to building event-driven serverless applications. We are very happy to have Gunnar Grosch with us. Gunnar, thanks a lot for joining us and doing this for us. And without further ado, over to you, Gunnar. Take it over. All right. Thank you very much, Adip. Hi, everyone. Very happy to be here, even though I'm right now in Sweden instead of India. All right. So welcome to building event-driven serverless applications. My name is Gunnar Grosch, and I am a senior developer advocate with AWS. So serverless is one of the differentiating capabilities of the cloud. It basically means that there are still servers in serverless, but AWS manages those servers for us. Customers using serverless technologies have found that they can be more productive and agile when bringing new workloads to market. So today we're going to talk about how to use events as the driver for building our serverless applications. Let's jump into it and look at what we're going to talk about. So this is the agenda. I'm just going to cover briefly what serverless is. Perhaps most of you already know what it is, but just a few slides on that. Then we'll move over to the event-driven part, the core part of this presentation. We'll look at what events are, the different types of events we can see, and how we actually use those in our architectures. We'll look at one of the core services from AWS, Amazon EventBridge, that we use when building our event-driven applications, and at some common use cases and patterns that we can use. All right, so just briefly about me first. As I said, I'm a senior developer advocate at AWS. My background is, well, it's hard to believe perhaps, but I've been in the industry for 20 years now. I started out as a developer way back in 1999 and have worked my way through development, operations, and different management roles within IT.
And I'm really happy to be here now as a developer advocate for AWS. I am a community builder in that I help organize and drive events in the Nordics, mainly focusing on serverless. And also, I have three kids. They are definitely keeping me on my toes all the time. All right, so let's get to the first part. What is serverless? These are the main tenets that define serverless as an operational model. First off, there is no infrastructure provisioning and no management for us. There are no servers to provision. We don't have to operate them. We don't have to patch them. So it's basically about not having to do the heavy lifting that's involved with running servers. Next up, we have automatic scaling. Our serverless workloads scale by the unit of consumption. By that, I mean a unit of work rather than the actual number of servers in the background. So we're able to scale as we need. We also talk about paying for value. The model is that we're paying for value. So if we want consistent throughput or we have a specific execution duration, we pay for that unit rather than, once again, the number of actual servers in the background. And perhaps most important is that we have built-in capabilities for availability and fault tolerance. We don't have to think about the availability and fault tolerance of the underlying services. It's more about how we put these services together to have reliable serverless applications. So it's about removing the heavy lifting of server management and operations, and that's an important distinction in what building and running these applications means. So when we build our serverless applications, we usually talk about them under these three headers. First, we have our event sources. That might be different types of changes in data state, or requests to endpoints where a client or user calls specific endpoints.
Or it might be changes in resource state, the state of the different resources we're using within our application. And then we usually build our applications around a function, using AWS Lambda as our service for functions. The function contains our business logic, the code that actually does what we want it to do. We usually say that our functions should be processing data rather than transporting data. So we build our business logic to do some sort of processing of our data. The functions we build can use different types of runtimes, as seen on screen now: Node.js, Python, and so on. So use the runtime and language that you are familiar with. And you're able to mix and match, since you are dividing your application into the different functions that are involved. You can have certain functions using Node, certain functions using Python, and so on. And then we connect our functions to other services. That can be other AWS services, for instance data stores such as DynamoDB or the object storage Amazon S3, or it might be third-party services that we're using downstream. So that's the main way that we build our serverless applications: the event source, the function, and the services. That was a really brief description of what serverless means and how we build those applications. So let's get to the good part. Let's get to event-driven architectures. What we want to do is build applications that are scalable, resilient, and so on. And event-driven is a way that we use to build our applications. Lambda as the function service is what we say is an event-driven service, in that every time a Lambda function is invoked, it is invoked with a payload containing an event. So even the basic web service that we use, which is more or less a request-reply type of service, contains a sort of event.
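As a minimal sketch of what such a function's business logic might look like (the event shape and the field names here are hypothetical, just for illustration), a Python Lambda handler simply receives the event payload, processes it, and returns a result:

```python
import json

def handler(event, context):
    """Minimal Lambda-style handler: every invocation receives an
    event payload; the function processes the data and returns."""
    # The event is plain JSON-compatible data; its exact shape depends
    # on the invoking service (API Gateway, S3, EventBridge, ...).
    order_id = event.get("orderId", "unknown")
    # ... business logic would go here: process, not just transport ...
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"processed order {order_id}"}),
    }
```

Invoked locally as `handler({"orderId": "1234"}, None)`, it returns a 200 response referencing the processed order.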
And the event in that case is the API request that's sent to it. So we talk about our applications in different ways. In many cases, we build our applications around APIs. And the APIs are basically the front door of our microservices. They are the primary mechanism through which our services communicate with each other. So let's think about how we usually build applications today. Using microservices is a common practice, where we split our application into smaller pieces. Each of these pieces contains a service or a microservice, and they communicate with each other using these APIs. So we set up API contracts between these services, defining how they are supposed to communicate with each other. The other way that we integrate our services is using events. Events, as it says, are the connective tissue of modern applications. We use these events instead to get our different services to talk to each other. Events can be changes in state, updates, and so on. So think about an event like something being placed in a shopping cart on an e-commerce site, or a change in state of a support ticket in a support system. Our event-driven architectures can help us improve scalability, fault tolerance, and so on, by reducing the dependencies between different processes and teams. We'll look a bit more at how that can look. When building our applications, we want them to be decoupled, as we say. A decoupled application means that the different parts, the different services of our application, should have as little knowledge about each other as possible, so that they don't rely on each other to be there. So think about, once again, an e-commerce site where you have an order service. For the order service to be decoupled means that it shouldn't have to rely on other services being there to work as intended. There are different ways of decoupling our applications. The first is by using APIs.
And that's what we call contract decoupling. We have this API contract between service A and service B, and that is more or less all that those services know about each other: how should this API call look? Using contract decoupling means that we can make changes in implementation, in how service A works and how service B works, as long as the API calls still conform to the contract. Another way of doing it is what we call runtime decoupling. Runtime decoupling is done by switching from synchronous calls to what we call asynchronous calls, asynchronous invocations of the services. We'll look at the details of these invocations in a bit. But what that means is that we are able to decouple service A from service B even further, so we have less risk of what we call cascading failures. If there is an issue with service B, it shouldn't affect service A, and vice versa. So the basic part of an event-driven application is the event. In the simplest of terms, an event is a signal that a system's state has changed in some way. The definition is a thing that happens, particularly one of importance. The events are the key in our event-driven applications, obviously. We can compare the event to what we perhaps previously, or in many cases normally, build our applications with, and that is the command. Comparing those, we can see that a command has an intent. It is directed specifically to a target, and it is one-to-one communication. An event instead is more of a fact. It is something that happens that others can observe. And by having it as a fact, we're able to broadcast it. So it's one-to-many communication instead of one-to-one. Examples of commands can be, for instance, create product, add product, or create account. Looking at it from an event perspective instead, it would be account created and product added.
So those are things that happen in our system that others can observe. This is the key to how we want to build our event-driven applications. Having an event-driven architecture means that we're able to drive reliability and scalability. Since these events are asynchronous, you don't have to wait for a response to move on to the next step, compared to the command-driven approach, where we're actually telling someone to add a product, for instance. Instead, when it's asynchronous, we don't have to wait for the response to add the product. So it improves resilience and reduces dependencies. We talk about different parts of our event-driven architecture, and one tool is what we call the router. That is something that abstracts the producers of events and the consumers of events from each other. The router is where the events are actually routed. We'll look at how we can use different services to act as this router with Amazon EventBridge later on. So we can publish events to the router, and then others can consume those events without actually knowing anything about other producers or other consumers. Then we have the event stores. The event stores act as buffers, so they can hold on to events until the consumer is ready to use them. Examples of event stores are our different queue services, for instance Amazon Simple Queue Service, SQS, where we're able to store our events, or queue them, and then consume them when we're ready. All right. And in the middle of this, we have the asynchronous event. As the example shows, a client is pushing an event that then ends up at service A. The client doesn't have to wait for a reply from service B. Instead, it just gets a notification or response back that service A has received the event. Then service B is able to fetch the event and use it when it's ready to consume that event.
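The router idea can be sketched in plain Python (a toy in-memory stand-in for a service like EventBridge; the class and event names are made up for illustration): producers publish facts without knowing who, if anyone, is listening, and the router broadcasts to every subscriber.

```python
from collections import defaultdict

class EventRouter:
    """Toy in-memory event router: decouples producers from consumers
    by broadcasting each published event to all subscribed handlers."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        # Consumers register interest in an event type.
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, detail):
        # One-to-many broadcast: the producer neither knows about
        # nor waits for the consumers.
        for handler in self.subscribers[event_type]:
            handler(detail)

# Usage: fulfillment and invoicing both observe the same fact.
router = EventRouter()
received = []
router.subscribe("OrderReceived", lambda d: received.append(("fulfillment", d)))
router.subscribe("OrderReceived", lambda d: received.append(("invoicing", d)))
router.publish("OrderReceived", {"orderId": "o-1001"})
```

Adding a new consumer is just another `subscribe` call; the producer's code never changes.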
So these are the three different invocation methods that we talk about with AWS Lambda. The synchronous one, the one that we see in our most common APIs, for instance, is the one where the client needs to wait for a response. The Lambda function in this case needs to perform its processing of the data, perhaps even wait for downstream services like data stores and so on to do what it's intended to do with the data, before it's able to send a response back to the client. Synchronous invocations mean that, since the client is waiting for a response, we might have different types of latencies or be more dependent on what's happening downstream in the service. But when we have asynchronous invocations instead, as I said, the client gets a response back just saying that the event has been received, and doesn't have to wait for the actual processing of the event or the event data. Asynchronous invocations use services like Amazon SNS, the Simple Notification Service, or, for instance, Amazon S3, the object storage. So think about storing an image in an S3 bucket. Your client uploads an image to the S3 bucket. That then invokes a Lambda function that is supposed to process that image. The client gets a response back that the image has been uploaded to S3 before Lambda starts processing that file. So that is an asynchronous invocation. Then we have what we call the poll-based invocations as well, which are based on streams, for instance Amazon DynamoDB streams and Kinesis streams, where the events are grouped into batches and then sent for invocation. So to simply start using events within our applications, we can use a feature of AWS Lambda called Lambda destinations, which is a really neat feature in that we can really easily get started using the actual events in our applications.
So when we have an AWS Lambda function that is processing data of some sort, we can use the event that is created when it is invoked or executed, and then act upon that. The event that is emitted when our Lambda function is executed contains details about both the request and the response, stored in JSON format. That is data that we then can send onwards as an outcome, either on success or on failure. Let's look at it in another picture. We have different types of services that can invoke our Lambda function; I've mentioned that. When the Lambda function then processes that data, it emits an event that we can use to do the next step within our application. On success, for instance: we have fetched the image from our S3 bucket, we have processed the image, and it was successful. The event that is created when we have a successful execution of our Lambda function can then be sent somewhere to do something else: to another Lambda function, to a queue, to a notification, and so on. On the other hand, if it was unsuccessful, we had a failure to process that image. Well, we can do something with that message as well. The basic pattern there is that we take the event on failure, store it in a queue, and then we're able to inspect that event, either automatically or manually of course, and use that event to reprocess the image to make sure that it actually is processed later on. Since we're using an asynchronous invocation and the client isn't waiting for a response, we are able to process it again later on. So those are basic steps to get started building with event-driven architectures: use the actual events that are emitted from our Lambda functions with the feature called Lambda destinations. Then we have other services, some of which I've mentioned already, that we use within our event-driven applications. First off, we have Amazon SQS. I mentioned SQS before as a queue service, a simple queue service.
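To make the on-failure path concrete, here is a sketch of a consumer reading one of those failure records from the queue. The field names follow the documented shape of Lambda destination records, but treat the exact structure, and the image-processing scenario, as illustrative assumptions:

```python
# Shape of the JSON record Lambda sends to an on-failure destination:
# it carries both the original request payload and the error response.
failure_record = {
    "version": "1.0",
    "timestamp": "2021-01-01T12:00:00.000Z",
    "requestContext": {
        "condition": "RetriesExhausted",
        "approximateInvokeCount": 3,
    },
    "requestPayload": {"imageKey": "uploads/photo.jpg"},   # hypothetical event
    "responsePayload": {"errorMessage": "image too large"},
}

def extract_for_reprocessing(record):
    """Pull out the original event and the failure reason, so the
    failed item can be inspected and processed again later."""
    return {
        "original_event": record["requestPayload"],
        "reason": record["responsePayload"].get("errorMessage"),
        "attempts": record["requestContext"]["approximateInvokeCount"],
    }
```

Because the record carries the full original payload, a reprocessing job needs nothing else to retry the work.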
It is a fully managed message queuing service that enables us to decouple and scale our applications, in that we're able to store events and use them later on. So we can have other services pick up these events from the event store that we're using. Then Amazon SNS, which is a fully managed notification or messaging service. It can be system-to-system, so different parts of our application, different services for instance, can talk to each other using this as a classic pub/sub service, or it can be application-to-person as well. So we can use SNS as a way of sending messages to clients through SMS, email, and so on. Using this pub/sub pattern enables us to use messaging patterns between our decoupled microservices. Then we have EventBridge, which we'll dive a bit deeper into later on. That's the way that we route our events inside our application. It is basically what we call choreography, where we're able to filter and route events within our application. And then the last service, which we won't cover now, is Step Functions, which is also an event-driven service that's used for orchestration. You can use it to build workflows and then process events and objects within those workflows using the state machines that are involved. So let's look at an example of how we can build our application in an event-driven way. Let's take the example of us launching an e-commerce site. We'll try to make this scalable, but still as simple as possible from an architecture standpoint. Looking at the example on screen right now, this is our standard synchronous API-based service. When a client makes a request to the order service, the order service has to make a downstream request to the fulfillment service.
When the order service gets a reply back from the fulfillment service, it can then respond back to the client with a success message and give the client order number, confirmation number, and then we have a happy customer, hopefully. And this is a great pattern to use, this simple web service, as long as we have this simple small system as on screen right now. But that's usually not how our applications look. They usually might start out this way, but then they grow. We need to add more features to it. So how do we support these growing number of services that our store operations or e-commerce service needs? What happens when we add the next service? Now we have an invoicing service as well, and we have a forecasting service. And there is actually nothing wrong with building it this way and just adding more downstream services. But what's problematic with this API-oriented design is that the order service needs to know about all of these other boxes. So the client submits or creates an order, and the order service needs to talk to the fulfillment service. It needs to talk to the invoice service. It needs to talk to the forecasting service and so on. And it means that our application that started out in a simple, easy way has now become more complex because these microservices that we have, they aren't really decoupled anymore. We are starting to get these hard couplings between services so that the order service needs to rely on the invoicing service to be there. So we need to build in retry logic. We need to have error handling and so on for it to work as intended. So next up, perhaps, we need to have even more services. We need to have other services within our application, and it becomes more and more complex to maintain and have this working successfully. 
So that's when event-driven architecture is a great way to actually maintain this decoupled architecture, where the different services don't really have to know that much about each other, as little as possible. Using an event router like Amazon EventBridge means that instead of having these services talk directly to each other, they can talk through this event router. So we can move to an asynchronous model for our application. The order service receives an order from the client and replies back: thank you, I've received your order. And that event, order received, is then sent to Amazon EventBridge. Note that it's not a command, it's an event. Then we can have different consumer services use that event to do their processing, to do the things they need to do: the fulfillment service and so on. And basically, this is what the event would look like. It's JSON containing the different details of our order, which we're then able to route through EventBridge so other services can consume it. Using EventBridge as a router, we can then of course also filter. Not every service needs to use this order data, so only the downstream services that need the data will receive it. EventBridge is first ingesting, then filtering and delivering these events to our downstream services. And we don't even have to build any complex custom routing code for it, because that's part of the managed service instead. And then when we want to add new features to our service, well, we can simply do that. We can just add data within our event and add new rules to the event router. So, for instance, in this case we have a loyalty service added. As long as we have that data in the event, we can have that new service act as a consumer using that event. So let's look a bit more at how EventBridge works. This is the basic overview of it. We have on the left our emitters.
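As a sketch of what publishing that order event could look like (the source name, bus name, and detail fields are hypothetical, not from the talk), the helper below builds the entry you would pass to EventBridge's PutEvents API:

```python
import json

def build_order_event(order_id, customer_id, items):
    """Build an EventBridge event entry for a custom bus: a fact
    ('OrderReceived'), not a command, so any number of downstream
    consumers can subscribe to it without the order service knowing."""
    return {
        "Source": "com.example.orders",      # hypothetical source name
        "DetailType": "OrderReceived",
        "EventBusName": "orders-bus",        # hypothetical custom bus
        "Detail": json.dumps({
            "orderId": order_id,
            "customerId": customer_id,
            "items": items,
        }),
    }

# With the AWS SDK for Python (boto3) this would be published as:
#   boto3.client("events").put_events(Entries=[build_order_event(...)])
```

Adding the loyalty service later means adding a rule on the bus, not changing this producer code.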
Those are different services and so on that can create events that are sent into Amazon EventBridge. Then we have the actual event bus, the part of our application that handles the events that are pushed into it. I'll dive into those shortly. And then, to the right, we have our receivers. Based on the rules we set up, we can have different receivers, and they can be more or less any type of AWS service, or other services as well, that can use these events. As for the different event buses that we can use: first off, we have the default bus. That's the one where events that come from other AWS services, for instance S3, are pushed or routed into. Then we can create custom buses as well, which means that instead of using EventBridge only for events that come from other AWS services, we can create custom events that come from any type of application. So even if it's another application we have built within AWS, some legacy system perhaps, or something that is running somewhere else, even a system that we have on premises, we can push events into this custom event bus that we create in EventBridge. Then we have the SaaS buses, which receive third-party software-as-a-service events that are pushed into EventBridge. There are already a lot of third-party integrations available that can be used. So depending on the type of system you're using, there might already be integrations in place. Think about different monitoring systems, your ticketing systems and so on, that can easily integrate with EventBridge. When a ticket is created or something happens within a monitoring system, that event can be pushed into EventBridge. So those are the three different types of event buses we have. Then, looking at the structure of an event, it contains some basic parts. We have the source of the event, where the event originates.
We have the detail type, what type of event it is. We have the detail of the event, and the detail is where we enter our data, the data that is important for our service: the order data, for instance, the customer data, and so on. And the resources: any identifiers that are related to this event. So this is sent as JSON data and is then delivered into EventBridge. The rules that we can set up within EventBridge, within our router, can use different patterns to match against different events. So we find patterns in our JSON data and match against these for our consumers to get the right events. We can also do transformation of the event using JSONPath, so we can transform the event. And then we have a list of targets. The targets are the consumers that will receive these events. For each rule that we set up, we can have up to five targets. So, for instance, when we create a rule, we can write our different patterns. Say I want to be notified of EC2 instance state changes. Then we create a rule for when the source is EC2 and the detail type is the instance state change notification. That will then be captured by this rule. We can make it even narrower: I want to be notified of EC2 instance termination. In this case, we set a state of terminated in our JSON rule, so we fetch events only for EC2 instance termination. Or for instances that are in specific states. If we want to track specific orders, for instance, we set up these patterns within our rules. And then the targets are the consumers. It can be many kinds of AWS services that we set up as targets that can then capture these events sent through Amazon EventBridge. There is a neat feature in EventBridge that's called the schema registry and discovery, because creating these events, even though the basic structure is easy, the details of it, well, that depends on what you want to have within your event.
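As a sketch, the EC2-termination rule just described can be expressed as the event pattern below, together with a deliberately simplified matcher that mimics only EventBridge's exact-value matching (the real service also supports prefix, numeric, and other matchers):

```python
# Event pattern: match only EC2 instance-termination events.
pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
    "detail": {"state": ["terminated"]},
}

def matches(pattern, event):
    """Simplified EventBridge-style matching: every key in the pattern
    must exist in the event, and the event's value must appear in the
    pattern's list of allowed values (recursing into nested dicts)."""
    for key, allowed in pattern.items():
        if key not in event:
            return False
        if isinstance(allowed, dict):
            if not matches(allowed, event[key]):
                return False
        elif event[key] not in allowed:
            return False
    return True

terminated = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"state": "terminated"},
}
```

Here `matches(pattern, terminated)` is true, while the same event with a state of running would not match, so only termination events reach the rule's targets.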
And when you build different services with different teams, it means that the teams need to know the details of these specific events. Using what's called a schema registry, we're able to store the schemas for how our events look, so other services and other teams can use that as their source of truth for how the event should look. And there's the discovery feature, where you're able to automatically discover the schema for different types of events. There are built-in integrations for VS Code and JetBrains IDEs for this, which makes it easier for teams to work with as well. So let's look at some common use cases for event-driven architectures. An easy one is where we want to take action. A SaaS application emits an event, a ticket is created, and that is then pushed into Amazon EventBridge. We have a rule where it passes through to a target, in this case AWS Lambda: a Lambda function that takes the event as an invocation, processes the event, and then uses that downstream in some way. So that's the basic take-action use case. We can use it to run workflows. The same setup: we have a SaaS application emitting an event, a ticket is created, pushed to EventBridge, which then invokes a Step Functions workflow. Our state machine takes the event and is able to continue doing the next steps with that event. Or we can apply different types of intelligence. An event is sent through EventBridge, which invokes a Lambda function, which in turn uses downstream AI services like Comprehend or SageMaker to process the data and produce an output from that data. Auditing and analyzing is a common use case as well, where the events are pushed through EventBridge, and this might be a large number of events.
They are then using Amazon Kinesis Data Firehose as the downstream service to push this large number of events through the service and store them in an S3 bucket, and then we're able to use other services with that data or those objects. In this case, Amazon Athena to query the data stored in S3. Or perhaps use it as a way of synchronizing data, which is quite common as well. Think of it as the basic web service that we saw early on, where we have Lambda using DynamoDB downstream, but in this case we've built it asynchronously using events. We have an event sent through EventBridge, Lambda processes it, and then fetches other data from the SaaS application in this case. So based on the event, Lambda is able to do other things with data from that upstream service. Those are quite common use cases, and based on how you filter the events and so on, many of these can of course be used within the same bus, depending on what the event actually is. For instance, if we have a ticket created as the event, that might mean that we both synchronize data and audit and analyze based on that. But those different patterns don't have to know anything about each other. So let's look at some more patterns. This is decoupled messaging that we can use for event-driven applications. We're not using Amazon EventBridge in this case. Instead, we have a client that is sending a message to an SQS queue, and SQS, as we know, is our hosted, fully managed queue service. The message is stored in SQS, and we have a Lambda function that is polling the queue for data; when it receives the data, it processes that data and stores it in DynamoDB. So that's an asynchronous invocation for our decoupled messaging. We can even use dead-letter queues to store events that fail to process, so that we once again are able to reprocess them, inspect them, and so on. Or the pub/sub way of working.
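The SQS-based decoupled-messaging flow just described, a queue plus a dead-letter queue, can be sketched with a toy in-memory stand-in (real SQS handles visibility timeouts and the redrive policy for you; the class and the retry limit here are made up for illustration):

```python
from collections import deque

class QueueWithDLQ:
    """Toy stand-in for an SQS queue with a dead-letter queue: messages
    that fail processing too many times are moved aside so they can be
    inspected and reprocessed later."""

    def __init__(self, max_attempts=3):
        self.queue = deque()
        self.dlq = []
        self.max_attempts = max_attempts

    def send(self, body):
        # Producer side: the client just enqueues and moves on.
        self.queue.append({"body": body, "attempts": 0})

    def poll(self, process):
        # Consumer side: pull each message and process it; on failure,
        # requeue until max_attempts is reached, then move to the DLQ.
        while self.queue:
            msg = self.queue.popleft()
            msg["attempts"] += 1
            try:
                process(msg["body"])
            except Exception:
                if msg["attempts"] >= self.max_attempts:
                    self.dlq.append(msg)
                else:
                    self.queue.append(msg)
```

A consumer that raises on a bad message sees it land in `dlq` after `max_attempts` tries, while good messages are processed normally; the producer never waits on either outcome.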
I mentioned SNS as a pub/sub service earlier on. In the same way, we have a client publishing something to an SNS topic, and we can then have different subscribers to that topic, where different services use that event as a way of processing the data within the event. So these are examples of building event-driven applications without using Amazon EventBridge. Even though Amazon EventBridge is a great service for routing our events, we can build event-driven services without it. This is another example, where we have data stored in an S3 bucket, which then invokes a Lambda function. The Lambda function is invoked using the event that we saw earlier, which is created based on the upload to S3. These have now been a few examples of how we can use event-driven architecture with serverless and how we can start building with event-driven architectures. If you want to get started, think about using that basic method that I mentioned earlier on, Lambda destinations, where you're able to use the on-success or on-failure method of creating events that you're then able to use. Look at using SNS and SQS as ways of storing events within your applications. And look at using Amazon EventBridge as your event router, as it is a very powerful tool for creating event-driven applications. A simple way of getting started, if you haven't used these services before, is by doing the hands-on workshop that AWS offers. It is hands-on with serverless messaging, with a few different scenarios that you can build on. You can use both SNS and EventBridge, do filtering, and have this asynchronous processing for decoupled applications. It's available at the URL seen on screen right now, and it's a great way to get started building event-driven serverless applications. And with that, my name is Gunnar Grosch.
I want to thank you for joining me in building event-driven serverless applications today. I have a session tomorrow as well on chaos engineering for serverless applications. I'd be happy if you joined me then. All right, thank you very much.