Okay, I want to start with PHP, and the very old logo of PHP, which was created about 25 years ago. I'm not going to tell the whole story, but there is one thing I really like about it: at the time, creating a website or a web application meant you had to write C code, maybe, or it was really complex, and PHP arrived and made the web accessible. And I think this is really amazing: being able to start as a lone developer, or as a small web agency, and create stuff. I am really, really passionate about serverless, and I talk about it because I see the same thing happening today. It's so hard to create a web application and host it in a proper, secure way, something that can maybe scale, or something that actually looks decent. And with serverless, finally, the web is accessible again. So let's look into that.

My name is Matthieu Napoli. I'm French, I come from Lyon, and I've been working with PHP for a long time now, about 10 years. I've worked on some open source projects like PHP-DI, Silly, and Bref, which I will mention in the talk. I've also worked on some standards like PSR-11, PSR-15, and PSR-17. And this is my Twitter handle if you want to tweet some slides.

So anyway, we've talked about why serverless, but what actually is serverless? I find this is a concept that can be hard to understand. Serverless is not a technology. There is definitely technology behind it, but serverless is an approach. It's not an approach where there are no servers, obviously there are, but it's an approach where you don't think about them, you don't manage them. The way I like to explain it is to look at how we run applications today. If we want to host an application, we think in resources: a server, how much RAM do we need, how many CPUs, how much disk space. And we buy, rent, provision those resources, we have to monitor them, scale them. Moving to serverless is moving from resources to services.
With a service, the job gets done, and you don't really care how. I mean, it's always interesting to know how, but that's not your job anymore. So let me give you a few examples to illustrate. Storage as a service is a very good example: instead of buying a disk with a specific capacity, I can use Amazon S3, or something similar from another cloud provider, and that's it. Who has used Amazon S3 before? Raise your hand. Okay, so you've done serverless. Congratulations. You don't deal with a disk, you don't monitor it; that's Amazon's job, and they scale it. The great thing as well is that you don't pay for a whole disk, you just pay for whatever you send to Amazon. The backups are their problem.

And you have many services like that, with more and more every day. You have database as a service, like Firebase's Firestore, or DynamoDB at Amazon, and we are starting to see MySQL and PostgreSQL as a service as well; it's not completely serverless yet, but it's really interesting. Caching as a service. Authentication as a service: Auth0 is a good example, where you just delegate authentication to an external service, or Amazon Cognito as well. Search as a service: instead of setting up Elasticsearch, you can use Algolia, for example, for your search.

The idea behind all of that, behind the service approach, is that you have less to do. You have new things to learn, new technologies to understand, so it's not magical, but you have fewer things to do. The long-term promise is that you do less of the infrastructure and more of the development. You can scale more easily, because Amazon, Microsoft and Google are really good at scaling, and at least me, I'm not. And usually you pay for what you use. This is not always the case, but most of the time you pay for what you use, and that's really interesting: instead of over-provisioning, you pay exactly for what you need, and you can sometimes save money.
So this is great, but I guess you're thinking: okay, but where is PHP in all of that? There is one service that is really interesting to us developers, and that is function as a service. This is a service where you send code, and it runs. Let me give you an example, because the term "function" is maybe a bit confusing. This is an example of two functions, in Python and JavaScript, of what you can run on AWS Lambda, for example. These are functions that take an event object; the event is what triggered the function. Inside, you can do whatever you want, query a database, process some stuff, send emails, and then return a result.

The first time I saw that, about two years ago, I thought: that's great, but what do I do with it? Where do I go? Because I don't really care about doing crazy event-driven architectures. I just want to build a website and an API. But the thing is, an event-driven architecture isn't that exotic. HTTP applications are event-driven: the event is the request. Cron tasks are event-driven: "every day at midnight, run this piece of code" is an event. Messages in a message queue are event-driven: whenever there's a message in the queue, we need to run some code. So moving to serverless isn't that hard.

This is to illustrate the transition to serverless. Let's say we have a piece of code, the blue bar, and we have an event or a request, the green arrow. The request will trigger the execution of the code. If you don't do serverless, what you usually have as an architecture or an infrastructure is something like this: the large blue bar here is the web server, the daemon running, waiting for requests. That's how it runs with Apache or Nginx, or any Go, Python, Node or Java application. In those languages, you start the process, it stays alive in memory, and it waits for requests.
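The slide examples are in Python and JavaScript, but the same shape works in any language. Here is a hedged sketch of that shape in plain PHP, no framework and no real cloud integration; the event keys (`name`) are made up purely for illustration:

```php
<?php

// A minimal sketch of a function-as-a-service handler, in PHP.
// The platform calls the function with an $event describing what
// triggered it (an HTTP request, a queue message, a cron trigger...)
// and the function returns a result. The 'name' key is illustrative.
$handler = function (array $event): array {
    $name = $event['name'] ?? 'world';

    // Do whatever work is needed here: query a database,
    // process some data, send emails...
    return ['message' => "Hello $name"];
};
```

The point is only the shape: no daemon, no routing, just code that reacts to one event and returns one result.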
PHP is a bit different, but if you run workers in PHP, it's the same: you start the worker process and it waits for messages in the queue. Then whenever there's an event, a request, a message, it runs your code: your controller, your job handler, your cron task, your CLI command, whatever. Moving to serverless is moving from that to this: we don't deal with Apache or Nginx, we don't write daemons, we don't write anything that waits or polls. We just write the code that reacts to the event.

Again, if you're not doing PHP, that means your application needs to change a little. If you take a Node application, where you can share variables and memory inside the server process, this is a bit of a change. But with PHP, it's interesting, because we've always worked like that for the web. We have Apache or Nginx, we have PHP-FPM, and whenever there's a request, there's a new process: it handles the request and then it dies. To me, it means that PHP is perfect for serverless.

So let's have a look at what happens in reality. How does it run? First, you write your code. Then you put it into a zip file; you can use tools for that, obviously. And you upload that to AWS Lambda, for example. Great, but what happens then? The great thing is that nothing happens. The code is not deployed anywhere, you don't pay for anything. Until the first request or the first event comes in, nothing is running. Then, the first request arrives. I'll say Amazon, but this applies to any FaaS provider: Amazon will boot some kind of container, a Linux environment, from scratch, and inside it, it will run your code. It can be index.php, it can be any language, it doesn't really matter. Your code returns a response, or generates whatever result we need. And that's it: the code is stopped, the container is frozen.
As an optimization, instead of booting containers all the time, Amazon will keep the process and the container alive for about 10 minutes before shutting them down. But what you need to understand is that all those containers can be thrown away at any time; you can't configure those 10 minutes. So that's great, but what if there is a second request? Well, the container is ready: it handles the request again and returns a response. But what if there is a third request while this container is busy? A new container is booted on the fly. That execution model is really different from an auto-scaling platform: here, the auto-scaling happens at the most granular level, the request level.

What's really important to understand is that there is no concurrency inside a container. Your code, in its container, handles exactly one event, one request, at a time, just like PHP. And that's really great, because it simplifies a lot of things. You can then scale up again, and you don't have to do anything; it's Amazon's job to create the containers. Containers are reused when new requests come in, then a container is killed because there is no activity, and later on, containers can be recreated again.

To sum up, this is how it works: if you have no requests, nothing is executing and you don't pay for anything. If you have one request, you have one container, and your code runs once. If you have 1,000 requests at the same time, at the same exact millisecond, you have 1,000 containers. Each container is completely isolated from the rest; it can run at the other side of the planet, that's fine. This is what makes it so scalable: we are only limited by the number of servers at Amazon, so we're good. Again, it looks a lot like PHP, I know. Now, how much do you pay?
To give you a better idea of how much you will pay with Lambda, because the pricing formula is not really easy to understand, let's take this example. Let's say this is a website's traffic. During the night, there is no traffic; it's really quiet. Then there's a slight peak at 8 AM, and the large traffic spike is at 9 PM. So you would want a server that can handle the traffic at 9 PM, and you would buy and set up a server for that. At 9 PM, everything is perfect, you have the perfect setup. But at any other time of the day, you have too many resources. If you were using serverless and function as a service, you would pay only for something proportional to the green area. If you are not using serverless, you pay something proportional to the green plus the blue area, the whole box.

So what I'm saying is: look at your application in production. The more blue you have, the more resources you are wasting, the more money you are wasting, and the more you could save by moving to serverless. Of course, it really depends on the application. Some applications, with very fine auto-scaling, don't have a lot of blue; that's fine. But some kinds of applications have a lot of blue: workers, batch processes running during the night. Cron tasks are really interesting here, because you can run tasks with a lot of resources for a very short time, and you don't have to provision containers or deal with auto-scaling; it's done for you.

To sum up, you manage less stuff. You don't deal with the physical servers. You don't deal with the Linux environment; you can't even configure it. You don't deal with the Apache or Nginx configuration and routing. You don't even configure PHP-FPM. There are so many things you no longer do.
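To make the pricing formula concrete, here is a back-of-the-envelope sketch. The rates used are AWS's publicly listed Lambda prices at the time of writing ($0.20 per million requests, about $0.0000166667 per GB-second in us-east-1); check the current pricing page before relying on them, and note that the free tier and per-millisecond rounding are ignored here:

```php
<?php

// Back-of-the-envelope AWS Lambda cost estimate.
// Rates are the publicly listed us-east-1 prices and may change:
//   $0.20 per million requests, $0.0000166667 per GB-second.
// The free tier (1M requests, 400,000 GB-s/month) is ignored.
function lambdaMonthlyCost(int $requests, float $avgSeconds, float $memoryGb): float
{
    $requestCost = $requests * 0.20 / 1_000_000;
    $gbSeconds   = $requests * $avgSeconds * $memoryGb;
    $computeCost = $gbSeconds * 0.0000166667;

    return $requestCost + $computeCost;
}

// One million requests per month, 100 ms each, on a 1 GB function:
$cost = lambdaMonthlyCost(1_000_000, 0.1, 1.0);
// Roughly $0.20 (requests) + $1.67 (compute), under $2 per month.
```

This is exactly the "pay for the green area" idea: the bill is proportional to requests times duration times memory, not to a box that sits mostly idle.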
Of course, again, there are new things, and they can trip you up at first, but the promise is that it gets much easier. You can scale more easily because of the execution model, which is stateless and has no concurrency. And you pay for what you use, which can sometimes save you money.

I've talked about AWS Lambda, but there are many providers out there. There is IBM running Apache OpenWhisk, you have Microsoft Azure Functions, Google Cloud Functions. Some are based, in terms of technology, on Docker containers; most of them are not. Here, only OpenWhisk is based on Docker containers; the major providers like Amazon, Microsoft and Google use closed, proprietary technology, but you still get a Linux environment. And every month a new provider comes out, so there's a lot of choice. If you want to get started, my personal recommendation is to start with AWS Lambda. It is the most popular provider, so it's the one where you will find the most resources and the most help online, and the one with reliable, good performance compared to, for example, Azure Functions and Google Cloud Functions.

But there is one funny thing: AWS Lambda does not support PHP. And this is a bit ironic, because again, PHP is perfect for this model, but there is no support for PHP. That's fine: since November 2018, we can create our own custom runtime. It's a bit like a Docker image, basically; you can add support for any language you want on Lambda. To do that, you download the PHP sources, compile them for the specific Linux version running on Lambda, add the extensions you want, compile them against the system libraries, write the specific bootstrap files and the integration with AWS Lambda, and so on. And you probably don't want to do that; you want to do less. That's fine too: there is Bref. Bref is an open source project that brings support for PHP to AWS Lambda. But Bref doesn't stop there.
The goal of Bref is actually a bit larger: its goal is to give you everything you need to create serverless PHP applications. And this is not just a technological problem, it's not just about tooling. The first thing Bref does, when you go to the website, is help you make choices: which provider to use, which way to run PHP, which tool to use to deploy, how to do this, how to do that. Bref makes a few of those choices for you, or at least guides you, so that you can get started easily. Of course, you can then make your own choices, but it helps you on that front. It also comes with documentation, because again, serverless is a bit new, and if you start from scratch it's a bit intimidating. And finally, tooling: the PHP runtime for Lambda, but also CLI tooling to get you started easily, a logger made specifically for Lambda, integration with frameworks, and things like this.

If you want to run PHP on Lambda with Bref, you have three ways of doing it, not just one, because that would be too simple. The first way, the first runtime, is the PHP function. It looks a lot like the JavaScript and Python examples I showed at the beginning, and this is intentional: the goal of this runtime is to provide something that looks like what Amazon would provide if they were to support PHP. It's very non-opinionated; it's basically an anonymous function that takes an event and returns a result. With the latest version, we've added even more tooling: if you want to write workers triggered by message queues or event buses, you can even write classes and objects instead of functions. This is really fun and really powerful, but I can't get into those details right now. And again, if you've never used serverless before, you don't really care about it: that's great, but what do we do? How do we go from here?
If you want to get started, I really recommend starting with the HTTP layer, the HTTP runtime. That way, you can run APIs and websites. And the great thing here is that it runs PHP-FPM on Lambda. So your favorite application framework, Symfony, Laravel, whatever, runs the same. There's an index.php, it's called for every request, you have $_GET and $_POST, you can set headers, cookies, whatever: it works the same. That's why it's the best way to get started.

Finally, there's a third runtime, the console runtime. It's a bit less important; it's more of a tool to help you run Symfony Console commands or Laravel Artisan commands on Lambda, because you can't SSH into Lambda, there is no server running. If you want to run database migrations, administrative commands, whatever you want, this will help you. You run the same command as you run locally, but through the bref CLI, and it runs in production or staging, in Lambda.

Now this is great, this is how PHP runs. To deploy, you could go through the web UI with the zip file method, but if you've used Amazon before, you know this is not the right way. The AWS UI is really confusing, really complex; it's horrible, honestly. That's the first reason. The second reason you don't want to do it manually is that you probably want to automate things: continuous deployment and things like this. For that, Bref recommends using the Serverless Framework. If you know Ansible, Chef, Puppet, Terraform, CloudFormation, all those tools, this is the same, except it's a little bit simpler, at least to me, and it's specific to serverless applications. This is today the most popular tooling for creating serverless applications. For all these reasons, with Bref we chose to use that tool, and this is also why Bref doesn't deploy your project itself: the Serverless Framework does it.
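Because the HTTP runtime runs PHP-FPM, a plain front controller keeps working unchanged. A minimal sketch of what such an index.php could be, with no framework, just to show that the superglobals behave as usual (the greeting logic is invented for the example):

```php
<?php

// index.php — a minimal front controller. On Lambda, behind Bref's
// HTTP (FPM) runtime, this file is invoked for every request exactly
// as it would be behind Nginx + PHP-FPM: $_GET, $_POST, headers and
// cookies all work the same way.
function renderGreeting(array $query): string
{
    $name = $query['name'] ?? 'world';

    return 'Hello ' . htmlspecialchars($name, ENT_QUOTES);
}

header('Content-Type: text/plain; charset=utf-8');
echo renderGreeting($_GET);
```

In a real project this file would simply be your framework's existing front controller, untouched.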
Instead of reinventing the wheel, we use perfectly good existing software. The way you describe your application, saying "I want a database, I want an HTTP API" and so on, is via a YAML file. This is a very short example. You set the name of your application at the top, you say "I want to use AWS", and then you create your functions. Here I'm creating a small API, so I have just one function, because I want to send all requests to index.php and let my framework take over. Just like you would do with PHP-FPM, .htaccess or Nginx, you say the handler for all requests is index.php. Then you have a bit of boilerplate where you say "I want to use the Bref PHP runtime", and finally you say "I want all routes, all URLs, to be sent to that function". And that's it.

If you want to get started quickly, Bref is a Composer package: you can install it and run `bref init`, and it will create that file for you. Once that's done, run `composer install` without the development dependencies, so the package is lighter, then run `serverless deploy`. A single command, and after about one minute, your service is online. You get a beautiful URL; this is Amazon's signature way of making things really simple, but it works. And of course, you can put your own domain in front of it. If you're writing, say, a webhook that you want to point something at, you don't really care that it's an ugly URL; if you want to build a proper API or website, you can add a domain name.

Now, I've mentioned it already, but yes, you can run Symfony, you can run Laravel, you can run your favorite framework on Lambda. There are just a few things to consider: the framework will run fine, except that you are now running in a distributed environment.
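A complete serverless.yml of the kind described above looks roughly like this. This is a sketch based on the Bref documentation; the layer reference and PHP version change over time, so treat those exact values as illustrative:

```yaml
# serverless.yml — minimal Bref HTTP application (illustrative)
service: my-app            # the name of the application

provider:
    name: aws              # deploy to AWS

plugins:
    - ./vendor/bref/bref   # registers the Bref runtimes

functions:
    website:
        handler: index.php # the front controller handles all requests
        layers:
            - ${bref:layer.php-80-fpm}   # the Bref PHP-FPM runtime (version may differ)
        events:
            - httpApi: '*' # send every route and URL to this function
```

With this file in place, `serverless deploy` provisions everything it describes.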
I'm not talking about vendor lock-in here, it's still a classic PHP application. But just as if you were deploying on many auto-scaling containers, with Lambda you don't, for example, write logs on disk, because if the container is destroyed, you lose the logs. Same with sessions. All the problems you can get with a distributed application, you will get here. Fortunately, it's not that hard with the frameworks. For the logs, with Symfony and Laravel you want to send them to the standard error output, just like in containers, and then they will all be centralized into one system; that's usually an environment variable or a configuration line that you set. Same with the sessions: you can store them in a database, in Redis, in cookies, whichever you want. We have guides on the Bref website, and a few examples as well. But yes, you have to be aware that you need to configure a few things, mostly logs and sessions; I think that's the most basic part.

Now, I want to take a moment to talk about performance, because I'm sure that when I showed you the execution model of Lambda, you were all wondering how it works in terms of response time. So let's cover that. First, you need to be aware that when you deploy to Lambda, you choose a certain amount of memory for your application. And just as if you were choosing a DigitalOcean server, the more memory you have, the more powerful the CPU is. You can have very small Lambdas, with 128 megabytes of RAM, but those are really slow. If you have more RAM, you have a more powerful CPU and your code runs much faster. The one-gigabyte Lambda is the one you want to use by default; it's the one with about the same performance as a standard server.
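For Laravel, for example, both points (logs to stderr, sessions off the local disk) usually come down to a couple of environment variables; Symfony has equivalent Monolog and session settings. A sketch, with the exact values depending on your version and needs:

```ini
# .env — send logs to stderr so they end up centralized (CloudWatch),
# and keep sessions out of the container's ephemeral filesystem.
LOG_CHANNEL=stderr
SESSION_DRIVER=cookie   # or database / redis, depending on your needs
```

The same idea applies to any framework: anything that writes state to the local disk needs to go to an external service instead.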
Fortunately, this is the default with Bref and the Serverless Framework, so you don't have anything to do. But be careful if you try out Lambda manually: by default, Lambda may give you the smallest size, and you may be thinking "I don't need one gigabyte of memory, I will use less", but then it's really slow and you don't understand why, and that's confusing. The larger Lambdas are actually quite fast. You can try it yourself on your computer with the bench.php file; it's a file inside the PHP source code that benchmarks CPU operations. When I run it on my computer, I'm at about 0.5 or 0.6 seconds, so you can get pretty good CPUs in Lambda. Obviously, the more powerful, the more expensive.

Now, if we look at response times, we need to separate two things: the warm response time, when the container is booted and ready to handle requests, and the cold response time, when the container is booting from scratch. For the warm response time, with a PHP hello world and a Symfony hello world, be careful with the small Lambdas, where you don't get really good numbers; but with the larger Lambdas it's fine. You get about one millisecond, or four milliseconds of overhead for Symfony, the time for Symfony to boot. And you get the same numbers if you run PHP or Symfony on DigitalOcean, EC2 or any other server. So this is great: for warm response times, we get the same performance as on any other platform. But obviously there is the cold response time, the cold starts. Those are about 250 milliseconds, and they can be even more depending on how large your application is. So depending on your use case, this can be a deal breaker, and that's fine: if it's not a good fit for your application, don't use it. But it is interesting to put that into perspective.
Take any website, refresh it, open the developer tools and see how long it takes to load completely. Quite often it's 20 or 30 seconds to load all the assets and render everything. You need to put things into perspective. You get a cold start on about 0.5% of requests in an application that receives some traffic, where containers are usually alive and responding. So it doesn't happen a lot, but it can happen.

Another thing to consider: let's say you have a website where you sell concert tickets, and at midnight a sale opens. Everybody rushes onto the website at midnight. With a classic server setup, you don't get cold starts, that's great; but whenever your server is overloaded, you can get much, much worse response times than that. Here, the promise is that you never have server slowdown, because each container is isolated, and the cold start is the worst you can get. So again, depending on the application, this is a really good deal.

Now, if you really want to use Lambda and you are worried about cold starts, you can also provision instances. You can provision containers, saying "I always want 10 containers alive", and there will always be 10 warm. You pay for that, obviously, but it's possible. You can even schedule it with different parameters: you know you have a peak at lunchtime and at dinner, so you can anticipate that and avoid cold starts entirely. That's something I would recommend for larger projects. But I talk with web agencies, startups, small projects, or small projects that will be big; everybody will be big someday. For most of them, the defaults are perfectly fine.

Now I have a few case studies to show you, applications that run in production on Lambda and with Bref, to give you an idea of what to expect and what can actually run on Lambda. I have a mix of use cases, and the first one is worker-oriented.
Workers are awesome on Lambda, and I hope to help you understand why. Pretty CI is a website that I built; it's a SaaS for continuous integration for coding standards. Whenever you push to GitHub, it runs a CI job that runs PHP_CodeSniffer or PHP-CS-Fixer for you, and on your pull request you can see whether it's green or red. I implemented that as a Laravel application and deployed it on a DigitalOcean server. Five dollars per month, that's great. I used workers, with Laravel queues, so that whenever there's a commit, a worker runs the job. And it was fine until I got more than, let's say, four or five people pushing at the same time, four jobs running at the same time, because I have a small server with only about three workers. Whenever all the workers are busy, the queue piles up a little and people are left waiting; that's how queues work.

I don't like managing servers, I didn't want to deal with scaling that, and I didn't want people to wait. So I moved the workers, and just the workers at first, to Lambda; I never moved the website in the end, just too lazy for that. As I said, this is Laravel plus the GitHub API. I used the function runtime, because I have no need for HTTP in the workers. And now, whenever there's a new commit, there's a new message in the message queue, and there is instantly a new container running. It transforms a little how queues work: instead of messages piling up, it's the Lambdas that pile up, processing the messages immediately. Of course there's a limit, but by default the limit is 1,000 Lambdas running concurrently, so I can handle 1,000 pushes at the same time. I'm good. And this is what I see now whenever I push to GitHub: the Pretty CI check runs in less than five seconds, when it's not a huge project, obviously.
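A worker like that is, again, just a function receiving an event. This pure-PHP sketch shows the shape of a queue-triggered handler; the event structure follows the documented AWS SQS event payload (a `Records` array whose entries carry a `body`), while the job content (`commit`) is invented for the example. Note that Bref also provides dedicated handler classes for SQS, which real code would likely use instead:

```php
<?php

// Sketch of a queue worker as a Lambda function. The $event structure
// mirrors the documented AWS SQS event payload: a 'Records' array where
// each record carries the message 'body'. Real code would do the actual
// job (clone the repo, run the linter, report to GitHub...) instead of
// just collecting commit ids.
$worker = function (array $event): array {
    $processed = [];
    foreach ($event['Records'] ?? [] as $record) {
        $job = json_decode($record['body'], true);
        // ... run the CI job for $job['commit'] here ...
        $processed[] = $job['commit'];
    }

    return ['processed' => $processed];
};
```

Because each invocation gets its own container, a thousand messages arriving at once simply means up to a thousand copies of this function running in parallel, with no queue backlog.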
So it's pretty much instant, while the rest of the checks are still pending, waiting for a container to be free. For me, moving to Lambda in that use case was really great: for the user experience, and for me as a developer, for not having to maintain servers. The cost is also extremely low; I think I pay close to $0 for that. So it's a win-win on everything.

The second use case is a website, to show a different side of this. Externals.io is a website that shows the latest threads of the PHP internals mailing list. This is where you can follow new PHP features, RFCs, how PHP evolves. It's a website, and I'm pretty happy with the response time; I optimized it a little. There is also a cron task that runs every five minutes, fetches the new emails and inserts them into a MySQL database. So I use both runtimes: HTTP with PHP-FPM for the website, and a second function with the function runtime for the cron task. You can see that you can mix both. It uses a custom PHP framework that I built, which I'm not really proud of, but it's okay, and MySQL to store the messages. I get about 100,000 requests per month, so it's not a huge website, but it still gets some traffic, and it's interesting because it's a real use case.

At first I was running it on DigitalOcean. I had a few problems: losing the database, not having backups, running PHP 5.3 at the time or something like that. It was really painful. So I moved to a PaaS, Platform.sh; they were happy to sponsor me, so I did not pay, but the equivalent plan, for the service of not having to deal with all of that, was about $50 per month, up from the $5 per month plan at DigitalOcean. Then I moved everything to Lambda, and now it costs me $17 per month for about the same service, which is not having to deal with the infrastructure.
This is why I like to compare things that are comparable: not just the server cost by itself, but everything you get with it. Of those $17, $15 are for the database, because a MySQL database has a fixed cost; I wish it was scaling and pay-per-use as well, but it's not. That means I pay $2 per month for serving the website, which is, I think, a pretty good deal.

What's also interesting with this website is that it let me see what happens when you get a traffic spike. Everything was running great, and sometime around last summer, Zeev Suraski posted a message on the mailing list about creating P++, which was to be a fork of PHP, more or less. People really wanted to read those emails, and I got more than 10 times the usual traffic on externals.io. It was a really good case to see how much I would pay for that extra traffic. It cost me four cents. So I'm good. It turns out the compute time on Lambda is, honestly, cheap. What you pay for is mostly the database, and if you serve a lot of bandwidth, videos or images, that can cost a bit too. Most of the time, I won't say always, running PHP on Lambda is cheaper than on a server, but you have to be careful with the other services.

I made an estimation: if I were to run the same thing with two million requests per month instead of 100K, with proper optimization, not changing the code but, for example, moving from version 1 of API Gateway to version 2, just a few changes in the architecture, I would go from $17 to $39 per month. It's not like I would jump to a few hundred dollars per month. The equivalent Platform.sh plan is $500 per month for that. So again, really interesting, and it puts things into perspective. I have another worker case study, with more money talk.
Enoptea is a French startup, and what they do is run a lot of background jobs, because they process a lot of energy bills; they help companies switch energy providers. So they don't have a lot of web traffic, but they have a lot of jobs. The numbers are now a few years old, but still interesting. They were growing, signing more and more clients, running more and more jobs, and they used to run PHP workers on EC2, with Symfony workers. The thing is, the Amazon bill started growing as well, and they had trouble scaling servers: they started losing messages or having downtime. It wasn't a great experience for them. So in the middle of the summer, in July, they started migrating some services to Lambda, and as soon as they started, their Amazon bill started decreasing. In the end, I think they divided their bill by three while still growing. They did not share total numbers publicly, but they did share numbers for a specific microservice that they migrated entirely to Lambda: they went from $800 per month to $90 per month, for a better service, because jobs were now running almost instantly and scaling automatically. Some processing times went from two hours, because of queued jobs, to a few minutes. And the important thing for them was going from a place where they were afraid of their tech stack and afraid of growing, to a place where they could sign any new client without having to worry. As a startup, this is important.

Now, again, another website; I'll make it quick on this one. This is a Brazilian website, which I had no idea existed before someone mentioned it in the Bref issues. It runs as a website and an API; the website calls the API. Pretty good response times, to be honest. It runs with Phalcon, the framework, so the code base is PHP, and it uses MySQL. So MySQL at scale, with PHP and Lambda, works well.
And when I say at scale, I mean they get 40 million requests per day; they are in the top 3,000 websites in the world. So I think this is a good use case to see how it behaves at scale. When they get peak traffic, they have more than 2,000 Lambda instances, containers, alive at the same time to serve the website. They are really happy with the migration. They used to run on ECS and EC2, and they saved money by moving to Lambda, even for a large website with continuous traffic: they saved 25% of their bill. And they are really happy because they don't have to scale anything; they can spend their time doing something else. Now, the last one isn't really a case study per se. Until now I've talked about APIs, websites, and workers, but what's really interesting is where we are going with serverless at the moment and in the coming years. What I mean by that is that instead of viewing our application as one large box receiving requests and returning responses, we can view it as separate pieces. I don't like the term microservices here, maybe that's the correct term, but I mean using Lambdas for specific jobs. And unlike microservices, we don't have to deal with the infrastructure, which makes it much more interesting, at least to me; I don't like dealing with servers and scaling. Here is an example of a project I did for a client. It's simplified, but here you have a back office, which is a PHP application, let's say Laravel or Symfony, running on Lambda, and the user clicks a button to send an alert. I could call the APIs directly in my controller: do a Facebook post, a Twitter post, and send the emails. But there, things can go wrong: I could publish on Facebook, then mess up on Twitter, and end up with half the publication done. Do I ask the user to retry? How do I deal with that?
Instead of that, I can use serverless services like EventBridge, which to me is very close to what Symfony Messenger does, except as infrastructure instead of code. You can publish messages in there and have subscribers that execute separately and can fail separately. I can post to Facebook, send my emails, have Twitter fail because of a downtime, and set up automatic retries. The cloud retries the message for me, without my having to use a PHP library or code the behavior myself. So if Twitter is down for a few minutes, it's okay, the message will be sent later on. This is a design pattern called the fan-out pattern, and there are many architecture patterns like this. If, let's say, Twitter is down for the whole day and I can't send the message on Twitter, that's fine: we have another architectural pattern called the dead letter queue. The messages that failed are stored separately, the developer is alerted, "oh, there's actually a bug in my Lambda", I fix it, I republish the messages, and the behavior that failed before is replayed, separately and individually. I'm not saying we should all do that, but it's really interesting to see where we are going, because it means doing less and less server management, but also less of all the wiring, less gluing stuff together to orchestrate our application. This is what is called, I'll try to say it right, undifferentiated heavy lifting. I use a framework because I don't want to write a PHP router and a logger and all that stuff; we are used to using frameworks now, it makes sense. And these services, here are just one or two, but there are so many available, crazy stuff, look to me like they are going to be the next evolution of our frameworks: an architectural framework. So this is really interesting to look into. Okay, let's conclude. What should we conclude?
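To make the idea concrete, here is a toy PHP sketch of what fan-out plus a dead letter queue gives you, except that with EventBridge this logic lives in the infrastructure, not in your code. All the names here are made up for illustration:

```php
<?php

// Toy fan-out dispatcher: each subscriber handles the event independently,
// and a failing subscriber does not prevent the others from running.
function fanOut(array $event, array $subscribers): array
{
    $deadLetters = [];
    foreach ($subscribers as $name => $subscriber) {
        try {
            $subscriber($event);
        } catch (Throwable $e) {
            // In AWS, retries happen automatically; only after the retries
            // are exhausted does the message land in a dead letter queue,
            // from which it can be inspected and replayed.
            $deadLetters[$name] = $event;
        }
    }
    return $deadLetters;
}

$deadLetters = fanOut(['action' => 'send alert'], [
    'facebook' => fn (array $e) => null,                        // succeeds
    'twitter'  => fn (array $e) => throw new Exception('down'), // fails
    'email'    => fn (array $e) => null,                        // still runs
]);

echo implode(', ', array_keys($deadLetters)); // prints "twitter"
```

The point of the slide is that with EventBridge you delete this dispatcher entirely: publishing, retrying, and dead-lettering are configuration, not PHP.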
Should we go to work tomorrow and move everything to serverless? I don't think so. Serverless is starting to be mature; we have countless case studies of companies running stuff on Lambda. I've mentioned Amazon S3: if you use S3, you know it's really mature. And you can definitely create serverless applications with PHP as well. The thing to consider, though, is your experience with it. So I guess this is more about learning at the moment: growing how much we know as a community about this, growing our best practices, and this is why I think sharing is so important. Is the future serverless? I think it would be a bold statement to say that, but I'm personally convinced that serverless will play an important part in the future. Yes, those architectures and those execution models are not for every application, obviously, just like PHP is not for all applications. But it will play an important part, because serverless is not just a technical solution; it also answers a business problem. We started using frameworks, and using frameworks is really a business decision: we don't want to spend developer time rebuilding the same components again and again. In the same way, spending time scaling and maintaining servers is a business problem. Instead of hiring an ops person, you can hire a developer, work on the product, work on what makes your company different and successful. I like to mention the example of the iRobot company. They make Roomba vacuum cleaners. Those things are connected to the internet, and behind the new versions there is a large infrastructure that they manage, and they run it with two developers. Companies and projects like these are becoming more and more possible, just like, again, when PHP was born years ago. And I think serverless could play a huge role in PHP's future, because of an alignment of technologies: the execution model is the same thing, so it's easy to go from PHP to serverless.
It's harder when your runtime is asynchronous, when you think about coroutines and concurrent execution; it's easier for us PHP developers, because one request maps to one execution. Serverless is here for us, and I think it could play a part in PHP's future. But I also think, again, there's an alignment in goals: making the web more accessible. We have so many web agencies everywhere around the world, and being able to run PHP at scale easily, without having to deal with servers, is amazing. So this is why, I guess you get it by now, I'm passionate about all of this. I really encourage you to give it a try. Play with it. See for yourself how it runs, how it works. And share what you've learned, so that we can all grow from this as a community. Thank you very much. If you want to learn more about Bref, this is the website. I also run a monthly newsletter where I share news about serverless that relates specifically to PHP. And this is my company now: I am not employed by AWS or by a large company, so I do consulting work to help people get started with serverless and build their applications. Thank you.