The sound is good. All right. So, the previous speaker gave you a pretty good introduction to serverless and the Serverless Framework. I will go slightly outside of those serverless boundaries to show you an example of a hybrid serverless architecture. But beforehand, I would like to know, by show of hands: how many of you are using AWS? Okay. How many of you are using serverless, Lambda, what we just saw? All right. And how many of you are using CloudFormation, like writing CloudFormation? All right.

So I will go quickly, because the previous talk already gave a good overview. I will just introduce the limitations of Lambda (you know them already, but I have to go through them), then how to work around those limitations, hence the hybrid solution, and then I will show you a demo. I will try to show you more code and live demos rather than just talking, because without a proper example it is hard to picture. I don't think I will have time to deploy properly from scratch, so the demo I will show you already has everything deployed; I will go through it later.

The limitations are basically these. The memory: if you want to do image processing, you may have an issue; audio processing, you may have an issue. The CPU: it is not really clear how much CPU you get, which can also be an issue if you are picky about your specifications. The disk space as well: you only have access to /tmp, which is already great, but you may need other directories in your file system. And the duration, which is five minutes: for some processes five minutes is long, but it can be short, especially if you are processing video, for example. So you may have some issues until AWS raises those limits, but for now those are the boundaries.

As I said, Lambda is maybe not suitable for long-running processes such as image, audio, or video processing. Maybe you need specific CPU and memory. Maybe you also need third-party libraries, why not FFmpeg, to process your video and images, which are not provided by Lambda itself. They used to provide ImageMagick, but they don't anymore, so you need to install your own tools. Depending on the third parties you use, this may increase the size of your package, and in theory, the bigger the package (meaning the code you push to your Lambda), the longer the cold start. Remember the cold start from earlier, roughly 50 milliseconds? If your package is too big, you cannot count on that anymore. So you need to find a way to use your code in a serverless ecosystem. And maybe you want a specific OS; I don't know exactly what they are running, but it is Linux/Unix style.

So, there is obviously a solution. I forgot to ask: who is using Lambda in production? No one, perfect. And who is using ECS in production? Perfect. Okay, so I will give an overview of all those services and the way this works, and after that, maybe we go into the Q&A; it will probably be more interesting for you.

What I show you here is a type of hybrid architecture you may think about. What you see in front of the user is what AWS calls API Gateway. In my architecture I put it as an example, but I will not deploy it; it is not deployed. So basically, my demo starts at the Lambda level.
So, this Lambda, we will see what it is used for. This Lambda will use an SQS queue, which is very suitable for a job queue. My demo is, let's say, a job-queue application back end. Then the hybrid part comes from the fact that you run an ECS cluster, meaning EC2 behind the scenes, and on those instances you have Docker containers. We will see how to use this in a serverless spirit, let's say. And then you have a dead-letter queue, but that comes with the SQS queue.

So, to summarize: the hybrid comes from the fact that you are using an ECS cluster, which is not serverless, because you need to take care of this cluster. I will show you in the code how you do it: basically with an auto-scaling group, a service that helps you scale your instances, your cluster, according to the workload you have. And you have this Lambda, which interacts with this, I would say, old-fashioned way of designing a service.

So, I will go straight into the code. I think you can see properly, right? Is it big enough for everyone? As the previous speaker introduced, I use the Serverless Framework, but I will not run any command line, because it can take a while. This is an example of the architecture in terms of project layout. You will notice a Dockerfile; this is used for the ECS cluster. The Lambda lives under the functions folder. You remember this handler, right? This runs what you saw in the diagram: the Lambda in front of the user. And then you have a worker. The worker is deployed in the container, via the Dockerfile, and runs in the ECS cluster to take the workload. What this worker does is poll the queue, the SQS queue I was talking about in the architecture.

So you have this function which takes the workload; I will go into detail a bit later. This function posts into the queue, obviously. The cluster listens to the queue, or more precisely polls the queue, gets the job, and after that you can imagine other events. Basically, the idea is to have this cluster act as a super-Lambda, and from there you can keep going serverless.

So, you start with a serverless function which can be accessed through an API Gateway; you push an event to this Lambda function. Because it is serverless, it scales more easily than having a pool of EC2 instances as a front end. So whatever workload you have, whatever the number of users or the number of videos to process, Lambda guarantees that you can still store the job in the SQS queue. That is why you are serverless at that point. Then the ECS cluster takes over by polling the queue and doing the job. This is actually the part where things can be slow, but the load is completely outside of that scope: the Lambda is there for you to store into the SQS queue, then ECS does the job. And after that, you can imagine using the AWS SDK to trigger other events; you could trigger a Lambda function from your ECS cluster using the SDK. Here, in my JavaScript code, I use the SQS client of the AWS SDK; you could imagine doing the same to post an event.
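To make that flow concrete, here is a minimal sketch of what such a front Lambda could look like. This is not the actual demo code: the QUEUE_URL, CLUSTER_NAME, and TASK_DEFINITION environment variables are assumptions, and I use the Node.js aws-sdk client style.

```javascript
// handler.js, a sketch of the front Lambda (illustrative names only).
const AWS = require('aws-sdk');

const sqs = new AWS.SQS();
const ecs = new AWS.ECS();

module.exports.handler = async (event) => {
  // 1. Store the job in SQS first: whatever the workload,
  //    the job is safe in the queue from this point on.
  await sqs.sendMessage({
    QueueUrl: process.env.QUEUE_URL,        // assumed env var
    MessageBody: JSON.stringify(event),     // keep the payload small
  }).promise();

  // 2. Ask ECS to spawn one worker task. If the cluster has no
  //    spare memory, this fails softly: the job stays in the queue
  //    and an already-running worker will pick it up.
  const res = await ecs.runTask({
    cluster: process.env.CLUSTER_NAME,      // assumed env var
    taskDefinition: process.env.TASK_DEFINITION,
    count: 1,
  }).promise();

  // Return the ECS response so the caller knows the task was triggered.
  return { statusCode: 200, body: JSON.stringify(res) };
};
```

The design point is the ordering: enqueue first, trigger second, so losing the trigger never loses the job.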
Because, as you saw previously in the presentation, everything is based on events. So you could imagine taking over after the ECS cluster: you could post the processed image to S3 with the SDK, and once the image is on S3, you can have S3 events triggering other Lambdas. So that's it for the architecture.

And just because it would not be interesting otherwise, I run a bit of C++ in my worker. It is just a hello world, kind of. Basically, the Lambda takes the event and posts it into the queue; the cluster listens to the queue and runs the worker; the worker spawns a native app from Node.js, let's call it that. It could be FFmpeg; you can run native apps. Once it is finished, it takes the output and displays, basically, the number of attendees of this meetup, which I will actually set dynamically. This is output in the logs, nothing much. CloudWatch Logs, for those who know it. So yes, the great thing as well is that a Lambda function logs everything to CloudWatch by default; in this particular case, for the ECS cluster, I had to create the log group myself to show you this, and I will show you that.

So, before that, if you look at the serverless.yml, it should be familiar to you by now. There are some custom variables directly tied to the Serverless Framework itself; this is not CloudFormation, it is just YAML. You have your environment variables that are passed to the Lambda function as environment variables. You have the IAM role statements which were mentioned previously; they grant your Lambda access to your resources. So basically, if I go to the iam.yml, which is just here, you see that it describes, with an IAM role, what the Lambda is supposed to do: this Lambda is supposed to read and write to a queue, because it is the front, and it is also supposed to trigger the worker in ECS.

So, how does the worker poll the queue? Basically, the ECS cluster is running nothing; it just has the instances up, so you pay for the EC2. But as soon as there is work to be done and something is in the queue, the Lambda will, at the same time, tell ECS to run the task, to run the worker. I hope this is clear: the Lambda receives a job, the job goes into the queue, and once it is in the queue, the Lambda says via the SDK, "hey, run this task", and AWS does the job for you of figuring out which cluster instance to use and so on. Then the worker takes over and outputs the result.

So, if I go a bit into the, sorry, yes: you see the function here, the service, which is just the Lambda function. And then after that, I declare my CloudFormation. Because it is hybrid, the Serverless Framework will not create the ECS cluster for you, nor the auto-scaling group or whatever else you need. For this you need custom CloudFormation, and the framework provides a way to add your own: it amends its own generated CloudFormation template with your resources. The heavy lifting of declaring the Lambda function and so on is still done by the Serverless Framework, but you still have to write your own CloudFormation template for the rest. So basically, you declare what kind of instances you have the right to use in your cluster, with this famous mapping of instance types and machine images per region, and I declare my queue, which I named "serverless queue".
The dead-letter queue is generated by default. I declare the ECR repository, which is kind of a Docker Hub: you store your container image in this repository, and the ECS cluster later pulls the image and runs it. Then you declare the VPC; it is mandatory. A VPC is kind of an internal network, so that your cluster is isolated from the outside, and the VPC configuration is quite big. Then I have the security group, which is mandatory for the EC2 instances running in the cluster, and the EC2 role as well; for those of you running EC2, this should be familiar. I am just going through the template to show you what it looks like.

Then you have to declare in the role what your worker has the right to access as well: the EC2 instances need access to the logs, because I had to create the log group myself to log what is output by the ECS worker. With Lambda this is completely seamless, because it logs for you whatever it needs to log and you just read the logs; in this case I still had to create the log group from scratch, so it lives under the name "ECS worker" plus the name of my stack.

I keep moving forward: the ECS logs, then the ECS cluster, I declare my cluster, then I declare my task definition. This one is a bit more interesting, because it is what makes the setup Lambda-like: I tell the cluster to run the worker, it spawns the worker, and when the worker is done, it shuts down. So it is not ECS running a service. You can run a container as a task or as a service. A service runs all the time, so if you have a memory leak, maybe it is not great; a task runs and stops when it is finished, when the process exits. That is what I want: something which runs in the cluster, on EC2, but which does not run forever. In my case it is a Node application; I will show you after. So it is kind of simulating Lambda behavior: I run the task when there is a job in the queue. I mean, the task is triggered by the Lambda as soon as the job has been posted to the queue, which is slightly different.

Then the container instances, then the auto-scaling group, because in this case it is also important to be robust to the workload: you still need to scale this ECS cluster. Otherwise, if you have 30,000 Lambdas posting to the queue and running tasks, meaning 30,000 task runs triggered, and you have a small cluster with small instances, you will not sustain much. So it is always important to have this auto-scaling group, so that you spawn more instances running your workers; it is the basics of a cluster. The auto-scaling group is scaled according to, just let me see, okay, yes: if a job stays in the queue for too long, if the SQS queue starts to pile up too much, then it will auto-scale. And then the outputs, just to output the values in the CloudFormation stack.

So, I will show you how it looks. This is what you just saw as code, just deployed: I have my task definition, my queue, my cluster. So I will go straight to the demo. Here you see the function; this is the one deployed. I will test it directly through the AWS console, but you could trigger it with whatever event you want. Look at the overview of events: basically, all of that is an event. I will not go through the list, but you can imagine S3, DynamoDB Streams, what was shown in the previous talk.
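Before the demo, to give a shape to the resources section I just walked through, here is a heavily trimmed sketch of what it could look like. The logical names and values are illustrative, not the real template, and the real one also carries the VPC, security groups, IAM roles, auto-scaling group, and mappings I mentioned:

```yaml
# serverless.yml (sketch): custom CloudFormation that the framework
# merges into its generated template; names and values are illustrative.
resources:
  Resources:
    ServerlessQueue:
      Type: AWS::SQS::Queue
    WorkerRepository:          # the ECR repository, "kind of a Docker Hub"
      Type: AWS::ECR::Repository
    WorkerCluster:
      Type: AWS::ECS::Cluster
    WorkerTaskDefinition:      # the Lambda-like part: run as a task, not a service
      Type: AWS::ECS::TaskDefinition
      Properties:
        ContainerDefinitions:
          - Name: worker
            Image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/worker:latest
            Memory: 128        # hard limit: without this much free, the task won't start
            Cpu: 512           # CPU units
            LogConfiguration:  # the log group you have to create yourself
              LogDriver: awslogs
              Options:
                awslogs-group: ecs-worker
                awslogs-region: eu-west-1
  Outputs:
    QueueUrl:
      Value: { Ref: ServerlessQueue }   # Ref on an SQS queue gives its URL
```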
So here, I take for example this hello world event, but I am not interested in the payload itself; I would rather go here, and here, and if there is no demo effect, it should work. So, this is what the Lambda replies back. What does the Lambda tell me? I asked the Lambda to return the confirmation that the task has been properly triggered. You see here, this is the Lambda function: I ask the Lambda to enqueue the job, to spawn the task in the ECS cluster, and to return me the ECS response.

If I go to my CloudWatch: this is my custom CloudWatch log group, the one I had to write down in the CloudFormation, and this is the default CloudWatch log group generated for the Lambda itself by the Serverless Framework. The Serverless Framework is really nice like that; you do not have to take care of all that heavy lifting. As for the logs: I do not log anything, I think. Yes, I do not log anything, so what you have is just the very basic logs that AWS Lambda gives you: the duration, the billed duration, the memory size configured, and the memory size used. Here you see that the duration is quite high, because first it was a cold start, and then it waits for ECS to reply back that the task is running.

If I go to the ECS worker log: this is the message that I sent, and the "300" you see here is replied back by the C++ application, which could be your image processing. Basically, I spawn my C++ app from JavaScript, I wait for the return, and when the promise resolves, I just console.log the number of attendees. It is basically this chunk of code here. Once the job is done, the worker calls the SDK to remove the message from the queue, so that it keeps going. I think I put 6 seconds of polling with the SDK: when a job is finished, the worker goes back to polling the queue and waits up to 6 seconds, and this is done automatically by the SDK, so you do not really need to take care of it.

Just for the demo, let's say that we are 310. Save. So the task has run; it takes a bit of time for the log to show up, so for now it is empty, and now it appears; that was maybe slightly faster. Here, even if the cluster tells me that there is not enough memory (because in my task definition I said: for a container to run, you give it this amount of memory, otherwise you do not run it), it is not a big deal, because the job is still in the queue, put there by the Lambda, and it will be taken over by a worker, since the workers are polling. Even if the task cannot be launched, meaning a new worker cannot be triggered directly, a worker just finishing the previous task will take over the next one. So even if the trigger event is lost, since it is a polling process, it is fine. If you trigger and you see that "resource: memory" message, meaning ECS tells you that it cannot run more containers, it does not mean that the task is lost. The proof: I do not know how many times I triggered it, but you see, all the events have been processed, because it is polling anyway. The first time, the task was triggered; the second time, it could not be triggered, but the job was still in the queue, so the first worker, once finished, just took it over. And when there are no tasks anymore, everything shuts down until the next time. Here, if I do it again, I have enough memory; then I do it again and I do not have enough memory, because I took a small instance, since I do not want to pay too much. And you have all your logs here; the queue is empty, my job is done, and you see that it was 310.
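That worker loop (poll for up to 6 seconds, spawn the native app, delete the message only on success) could look roughly like the following sketch. It is not the demo source: the ./worker-app binary and the QUEUE_URL variable are stand-ins.

```javascript
// worker.js, a sketch of the queue-polling worker (illustrative names).
const AWS = require('aws-sdk');
const { execFile } = require('child_process');

const sqs = new AWS.SQS();
const QUEUE_URL = process.env.QUEUE_URL; // assumed env var

// Spawn the native binary (the C++ hello world; could be ffmpeg).
const runNativeApp = (payload) =>
  new Promise((resolve, reject) =>
    execFile('./worker-app', [payload], (err, stdout) =>
      err ? reject(err) : resolve(stdout)));

async function poll() {
  // Long-poll: the SDK waits up to 6 seconds for a message to arrive.
  const res = await sqs.receiveMessage({
    QueueUrl: QUEUE_URL,
    MaxNumberOfMessages: 1,
    WaitTimeSeconds: 6,
  }).promise();

  // Queue empty: shut the container down, like a Lambda going away.
  if (!res.Messages || res.Messages.length === 0) process.exit(0);

  const msg = res.Messages[0];
  console.log(await runNativeApp(msg.Body)); // ends up in CloudWatch Logs

  // Delete only after the job succeeded, so a crash leaves the
  // message in the queue for another worker.
  await sqs.deleteMessage({
    QueueUrl: QUEUE_URL,
    ReceiptHandle: msg.ReceiptHandle,
  }).promise();

  return poll(); // keep draining until the queue is empty
}

poll().catch((err) => { console.error(err); process.exit(1); });
```

Deleting the message only after the work is done is what makes the lost-trigger case safe: a crashed worker simply lets the message become visible again.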
So this is just to show you that you could eventually imagine it is video processing: when you finish, you go to S3, trigger the next step, and so on. So I think I covered pretty much everything around the demo. One thing I can try to show you is the fact that I spawn the worker, or not. For that, just to show you for real: yes, it should be this one, because I do not remember the DNS. Okay, so now I am connected to my ECS cluster. Basically I have only one instance, so I am connected to one instance of my cluster; it is an EC2 instance.

[Question from the audience] Sorry, can you write it using the Serverless Framework and deploy to your instance? I have both, actually: I deploy my function, and I deploy the part of the code that goes to ECS, which is the worker. So I will just, all right, it should work like this; it is a bit improvised. Okay, so this is my Lambda function. When I click "test", I send the event through AWS, which triggers this function with this payload, and the function knows what to do: okay, I received an event; I have to enqueue it in SQS, and I have to trigger the task, because the queue is not wired to any service. That is why I trigger the task manually. Even if you are pure Lambda, there is no way yet to trigger a Lambda through an SQS queue, right? It is not because you receive a job in the queue that other services will be aware of it; you need a way to trigger them, and that way is the SDK.

So if I do that, and I do that, please, yes: you see the worker here, triggered. It triggers as many as needed, and then it shuts down. So basically it is kind of Lambda, because even if I have a memory leak in my worker, I do not really care: I do not need to manually kill my instance, or connect to the instance over SSH and restart it or whatever. If I do it again, normally, so here I did it many times, the worker will stay up; maybe there are two workers running, but it will stay up until, well, here the task is very tiny, so it is very fast, but you can imagine it running longer and then shutting down. So yes, it is kind of Lambda, but in this configuration you get access to the ECS instances, so you basically have all the benefits. And if you want to see the ECS logs, it is in here; after that it is very classic, but, oops, here it shows you the task change events, blah blah, and the containers, and if you read the log you will see shut down, then running, shut down, running, shut down. So that is basically it for the demo.

Then, maybe a basic misunderstanding to clear up, because I know it: the Dockerfile is just there for the build. You are on ECS, so you need to run a container; in my case I chose Docker, and you need a Dockerfile to build it. How much time do we have? Okay. I will show you the Dockerfile now, and after, I can try a live demo of pushing it to the cloud; at least you will have seen it working, and if it fails, well, you can just imagine. So this is the Dockerfile; there is nothing special. I took an Ubuntu image, but it could be your own, if you have your own C++ code. Basically, what I am doing is putting everything in; it is a very dirty one, but all it should contain is the worker and the binaries, this C++ app, or even third parties. You can also install things directly, or run from your own image with FFmpeg already installed. Anyway, it is just a Dockerfile whose image goes to ECR, the repository. ECR is basically like a Docker Hub: you store the image there, and the task is set to always run the latest image.
After that, you can choose what kind of image you want, but that is the basics: as soon as the task is triggered, it runs the container, which is stored as an image here. As for the cluster, as you can see, there is one EC2 instance, and there is the task definition I was talking about, which defines the amount of memory (the minimum of memory for the container to run), the amount of CPU; it is kind of a docker-compose, somehow. And there are some more details; yes, you see, as with any Docker setup, you can map a local folder. Here I map /tmp, but since I am on EC2 right now, I could map any kind of volume I want, if I have to store a big video temporarily.

So, maybe other questions? In that particular case, if I had to do it manually, I could show you at least some commands, but, okay, I will do the demo. For that I will have to do it manually; it should be quick, normally, and close your eyes, because it is my private account. No, it should be quick, if the internet wants to be quick. All right, I deleted the bucket, by the way; what is the best practice? Okay. So, I will delete my stack. It should be, before that, sorry, I will delete that first, because CloudFormation will not be happy. CloudFormation is really nice because it protects you from doing something bad. And yes, I will delete the stack; it should be fairly easy and fast. Once it is done, I can redeploy; obviously, otherwise it would conflict.

[Question from the audience] No, this is really manual; normally this goes through a CI. Basically, the benefit of CloudFormation is that you just update your CloudFormation and push. But I want to show you from scratch how it goes; after that I can relaunch if you want. I just wanted to show, for those who are not used to it, how it goes from scratch. So normally it should be almost done. Basically, when you update a CloudFormation stack, it just takes the diff, and AWS is smart enough to only update your stack: if you update your auto-scaling group rules, it will just be seamless. Sorry, it is maybe a bit slow.

[Question from the audience] It is slightly different: the Lambda does not trigger the worker first. First it stores into the queue, then it triggers the worker, because, as I was saying, there is no way yet to trigger something from an SQS queue directly; it is not a push system, it is a pull system. So basically, because of that, I need to trigger the poller, the worker. The job is in SQS; I do not want my worker to be running all the time, to avoid any kind of memory leak or whatever. So as soon as the Lambda receives a message, whatever the workload, it stores it in SQS, then it triggers the task. If the task cannot be triggered, it is not a big deal; the job is still in SQS, and the currently running worker will take it.

[Question from the audience] No, there is no event there; that is why it is hybrid. You start from the Lambda with any event you want; it could be HTTP, for example. Sorry, the deletion takes more time. If you see this here: you can have any kind of event you want. It could start from S3. Let's say I upload an image; imagine an S3 bucket in front of the Lambda, right? The event is triggered, and the Lambda is listening to this event. What does the Lambda do? It takes the job: the S3 URL of your image, the path, the object key, and stores it in SQS. You store just that, because there is a limited amount of data that you can store in an SQS message as well, so the goal is to store the minimum possible: what you need is just the object key and maybe the bucket name. And at the same time, you spawn, you trigger, the ECS task, which is just a container: you have a cluster, and you say, just run this.
And what is this container? It is just something listening to the queue. Since you stored the job already, when the container spawns, the job is ready, so the worker just takes it. If the queue is empty, it just shuts down, as you saw in the graph: the worker was spawning, then shutting down. So it is kind of a way to limit memory leaks or whatever. You have this ECS cluster which is supposed to run containers, but what you want is for the container to shut down when there is nothing to do. Because if you have a huge workload and you have a memory leak, your ECS cluster will just hang forever: your queue will pile up, the auto-scaling will scale out with more EC2 instances, but if all those EC2 instances are hanging as well, you are basically in a bad situation.

[Question from the audience] Sorry, it is because you do not have the memory at the moment you want to trigger it. If you do not have enough memory, it means a worker is already using it. You can think of it per container: imagine you say, I want to run two containers on this instance, all right. The Lambda, when it receives an event, triggers a container; let's talk about containers and not workers. Then the Lambda receives another event, and now you have two containers running, two workers. Then other events come in: the Lambda stores them in SQS and sends the "trigger another container" call, but the cluster replies, "I can't". And it is not a big deal, because those containers are already running, so when they finish, they just take more jobs from the queue.

[Question from the audience] The task definition, was it in the CloudFormation? Yes, it was in the CloudFormation; you see, it was this. You declare the CPU in CPU units; here it is basically half of the total available on an EC2 instance. I am going very deep into the details, it depends on your knowledge, but basically the task definition is the boundary of what the containers are allowed to do. When you have an instance running these containers, imagine the instance is a big server: the containers run within what this task definition allows. Imagine you have a big box with smaller boxes inside; those smaller boxes run whenever there is a task.

So I think I will not have too much time to run it, but let's say, just to finish, just before, maybe I will just show you... sorry, yes, go ahead with the question, it is more interesting.

[Question from the audience] All right, so, given the way you have structured this, you are assuming that ECS will return the job within five minutes? Can you look at the function, the functions folder? Oh no, sorry, I guess it is not in there. No. That is why the hybrid is confusing: you think you are in Lambda, but you are not anymore. No, because in the Lambda, you basically dispatch to the queue, and you wait for ECS to tell you that the task has been triggered; ECS replies back, as you saw. The ECS agent will reply to me directly if there is not enough memory, because it is your scheduler; the ECS is your Kubernetes, if you like.

[Question from the audience] Yeah, I understand that, but you are triggering it, so you are triggering the Lambda and then waiting for ECS to give you a result? Yes, but this reply is a snap, because the ECS agent knows whether it can run the task or not. It is like I ask you to do something, and you tell me "no" right away; it says that it can't. But what if you are processing a very big video? The concept is not to wait for the job.
That comes after. You go from the Lambda to the worker; the Lambda does not care about, does not wait for, the result of the job. It just waits for confirmation that the job has been taken care of, somehow: "I know that I told you that you had a job to do, that is for sure." And if somehow even that is not true, the job is still in the queue. So basically, after that, when the job is over, your next move, as I said, is to use the SDK to trigger other resources: you could even trigger another task in another cluster, or you could just store the result in S3, and from S3 you go back to your serverless architecture.

[Question from the audience] Exactly, it times out. So basically, one of the limitations of Lambda, by the way, is that when it times out, you do not know that it timed out. There is no obvious way; there is no CloudWatch alarm, there is no such thing yet. So the only way for you to be aware of a timeout is to tell the function how long your process is supposed to take. You say: I want this to take 4 minutes, and if it goes over, you time out yourself. The goal is to set your own timeout to however long your task is supposed to run, so your own code knows, "oh, I timed out"; you do not trust the timeout of the function itself. It is your code that times out, because you say: okay, I have to make an HTTP call, and I want to time out my HTTP response if it takes more than 10 seconds. Because if you wait for the 5 minutes of the function to elapse, you will never know that something timed out earlier. So it is better to time out first, by yourself, and not rely on the function timeout, because you do not have an easy way to know: if a strange bug appears, it may take a bit of time to realize that you timed out; it is not obvious.

For that you have CloudWatch: if you look at the metrics in the CloudWatch dashboard, you can see the average amount of time the function runs. And you define the timeout of the function as well; you can say "timeout: 10 seconds", which is what I did here, I think. Yes, you see, here I time out at 10 seconds. It is overkill, because I know my job takes a few milliseconds, but the goal is that you time out before the Lambda itself times out. So if in your logs you see that the durations of your function are always 5 minutes, because you did not set your own timeout and left the function at the 5-minute maximum, then you can guess that your functions are timing out: they reach the maximum and your work is still not done. So if you may time out, force a timeout yourself, to be sure that you can inspect the logs, because once the Lambda itself times out, even a stack trace gives you nothing; it just stops. So better to time out yourself.

Yeah, that is one of the tricky parts of Lambda, especially when you run in production: it forces you into best practice. You should always know what you are doing; if something can time out, you should know that you want it to time out, and how many milliseconds or seconds your task is supposed to take. It is also good for you, because if you say, "I have a task to do but I do not know how long it takes, so let's just oversize our cluster", that is bad; you should always know roughly the specification, and if you go over, you either try to optimize your application or you slightly increase the limits. You can do that with CloudWatch alarms or whatever. I am not sure it is possible to dynamically modify the timeout of a function; I never did it, but maybe there are some ways, like you scale DynamoDB capacity dynamically, for example.
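In code, that self-timeout idea could look like the following sketch, where doJob() is a hypothetical stand-in for the real work and the 10-second value mirrors what I used in the demo:

```javascript
// Self-timeout sketch: fail inside your own code, with a usable error,
// instead of hitting the silent Lambda-level timeout.
const withTimeout = (promise, ms) => {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`timed out after ${ms} ms`)), ms);
  });
  return Promise.race([promise, timeout])
    .finally(() => clearTimeout(timer));
};

// Hypothetical job standing in for the real work (HTTP call, processing...).
const doJob = (event) =>
  new Promise((resolve) => setTimeout(() => resolve('done'), 500));

module.exports.handler = async (event) =>
  // Time out at 10 s, well under the function's own limit, so the
  // error is logged while the Lambda is still alive and traceable.
  withTimeout(doJob(event), 10 * 1000);
```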
Any other questions? I hope it was kind of clear, but yeah. Okay, thank you.