At the start, I have to take a speaker-and-audience selfie, because my German colleagues won't believe that I was in Singapore. So everybody say hi. Hi, thanks. So, yeah, I'm from Germany, but actually I'm not here in Singapore only for this talk. I'm on vacation here. My wife and I did a cruise from Dubai to Singapore and we arrived two days ago. Say hello to Katharina, my wife. Okay, the technology works. Great.

So tonight I want to talk about containers versus serverless: the good, the bad and the ugly. Who of you knows this movie, The Good, the Bad and the Ugly? Not that many. You should really watch it. It's really worth it, one of the best Westerns I know.

Okay, some facts about me. I've been in the computer business for quite a long time now. As a child, I started with a Commodore VC 20. Does anybody of you know the VC 20? You do? Cool. The C64? Pretty many, yeah, some more people. The VC 20 was much smaller, and we only had a thing called a Datasette, a cassette recorder for storing data. No floppy disks. What else? I'm working as a freelance consultant in Germany, and every time I think I've seen it all, a new problem comes up, of course. So I don't call myself front-end developer, back-end developer or full-stack developer. I call myself problem solver. It doesn't matter which problem occurs, I try to solve it. I've been in the IT business for 20 years now. I'm also a lead of a local Java user group in Germany, the Java User Group Darmstadt. Darmstadt is the town, it's near Frankfurt. So you don't have to know Darmstadt, but perhaps you know Frankfurt. We do regular talks every month, and I'm also speaking at tech conferences all over the world. I spoke at the conference now called Code One, formerly known as JavaOne, by Oracle in San Francisco, and in Norway and in Sweden. I also attended and spoke at the Voxxed Days conference in Zurich in Switzerland.

And I wrote a book about serverless computing. Unfortunately for you guys, it's only in German. So you have to learn German. You don't want to? Okay. Okay. Great. So perhaps I will translate it in the future, perhaps not, I don't know. And if you are on Twitter and like to mention me, it's @dasniko. And if you forget it, just tell me, I'll turn around, it's on my back.

Okay, serverless computing. Who of you uses serverless computing? One, two, three, four, five. Oh, that's pretty many for a user group event at this time. Who of you knows serverless computing? Some more people. Who of you doesn't know about serverless at all? Okay. Not that many, and that's good, because I don't want to do a complete introduction to serverless. This talk is for people who already know serverless and just want to understand the difference between serverless and containers. And I really like serverless, so there might be a bit of a rant about containers. Don't judge me because of this. And don't judge me because of the color of my laptop; it's not my laptop, it's Kathy's laptop, lighter and smaller to travel with.

Okay. So there's a fuss about the name serverless. All of you have heard that this is a crude name, serverless, because there are still servers. Okay, but it's just a name, don't worry about it. And if you still have problems with the name serverless, if you have problems explaining it to your coworkers or to your family, just think about it like this. I'm going to... it's not called takeaway here in Singapore.
Here it's called a hawker centre. Just go to a hawker centre and eat some delicious food. And this is kitchenless, because there actually is a kitchen. The food is made in a kitchen, but you don't have to clean it. That's great. And that's exactly how serverless works. Of course there are servers, but you don't have to care about all these servers.

But that's not the point I came here to talk about. There was a tweet by Tim Wagner. He's a product manager, I think, at AWS for all the serverless stuff. And he retweeted a tweet from Ben Kehoe. Ben Kehoe is an engineer at iRobot, the company doing the robots, you know, the German word Staubsauger, vacuum cleaners. Yeah. And he tweeted: if you have a container that is active and it's not handling data, that's a server. So just powering up a container and executing some code, a function, a method or whatever, once a day or twice a day, or sometimes, that's not serverless. That's a running server. And Tim Wagner said: that's exactly, beautifully, what it is. This was on October 27, 2017, approximately a year ago, at the end of the day, half past eleven p.m. German time, which is the afternoon in San Francisco. So it began when I went to bed at twelve o'clock, and when I woke up at five o'clock in the morning on the 1st of November, Adrian Cockcroft had joined the conversation. He's Senior Vice President Cloud Strategy at AWS. Nice title. Senior Vice President Cloud Strategy, formerly at Netflix. And he said: if you stop paying when there's no traffic, that's serverless. That's one of the key features of serverless: you only pay for the time you use the infrastructure.

So we move on to the evening of the 1st of November. Adrian Cockcroft again responds to Ben Kehoe: what if it's on premise? What if I have a serverless environment on premise? Serverless on premise, think about it. If it's on premise, it's not serverless. You have chargeback economics, and that doesn't work for small companies. Only if you have a really, really large company will all the chargebacks work for a real serverless environment. And then Ben Kehoe also noted: if I run Lambda-style functions on premise, they're not serverless, but they're not containers or VMs or servers either. Yay. That's great. And Adrian created an expression, FaaS on containers, and I really like this expression, FaaS on containers. You see, it's the 2nd of November, 6:50 in the morning. A long conversation, and it was pretty fun to read all this stuff. And again: someone is paying for idle time. If you run the container infrastructure, you have to pay for idle time, even if no code is executed. Even if no code written by you is executed. Of course there's code executed by the container runtime, the container management system, but no code written by you, and you still have to pay for the uptime of the whole cluster.

And I really like this FaaS-on-containers expression. And there are quite a lot of FaaS-on-containers frameworks. They call themselves serverless; I call them FaaS-on-containers frameworks. They're pretty good, all these frameworks. I don't say don't use them. If you have the need to use such a framework, if you need more control over the environment you want to use, then it might be a good choice. Because what do all of these frameworks do? You write the code, and at build time a container is built, and you have to take care of this container for execution.
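To make that concrete, here is roughly what such a function looks like before the framework wraps it into a container, in the plain-POJO style the Fn Project's Java FDK generates by default (the class and method names are just the usual boilerplate, nothing from a real project):

    public class HelloFunction {

        // The framework maps the incoming request body to the input parameter
        // and uses the return value as the response.
        public String handleRequest(String input) {
            String name = (input == null || input.isEmpty()) ? "world" : input;
            return "Hello, " + name + "!";
        }
    }

The function itself stays trivial; all the interesting work, building the image, running it, scaling it, happens in the infrastructure around it, and that's exactly the point.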
OK, perhaps OpenWhisk works a bit more like serverless, but you have to provide a really big infrastructure to run this whole thing. And again: serverless is not FaaS on containers. Serverless is much, much more than just having a framework running a function on my container infrastructure. We will see that later in the talk. As an example, I picked one infrastructure, the Fn Project by Oracle, to show you how the infrastructure is built up, or rather what infrastructure you have to maintain to run such an environment yourself. First, you have a load balancer for all the requests coming in, balancing them across all the servers, the machines actually executing your code. You need plenty of those, depending on how your system works. And for the management of all this, you need a database for metadata. You need a messaging system if you want asynchronous execution of your code. You need some object store for files, log files, whatever. And of course you need a container registry for your containers. So that's quite a lot of infrastructure just to be serverless, or infrastructureless. And who of you wants to manage infrastructure? No one. Thank you. OpenWhisk goes pretty much in the same direction. You always have this kind of stuff: a registry, an object store, messaging components and a database, and you have to maintain all of it.

At the beginning of this year there was Simon Wardley, the man on the right side, a researcher from the United Kingdom. He did a keynote at a German conference where he talked about serverless and building your own serverless, and he compared building your own serverless environment to building a toaster on your own. If you want to toast bread, what do you do? You go and buy a toaster, buy bread, put the bread in the toaster, wait two minutes and eat the toasted bread. What don't you do? You don't buy all the single parts of a toaster. Of course, you can buy all the single parts and assemble them yourself, but are you experts in building toasters? No, you're experts in eating bread, or eating rice, or eating noodles, or whatever, here in Singapore. So at the end of the day you have a self-made toaster which will explode, and you have spent far too much money to end up with nothing. That's not what you want. So if you want to go serverless, you shouldn't build it on your own. You should use a ready-packaged, ready-run, ready-managed environment.

And if you run your own environment, it feels like packing one suitcase into another. At the end of the day you have a really big suitcase with really small content: Function, JVM, Docker, Kubernetes, DC/OS, Mesos, virtual machines, hypervisor, bare metal. That's a lot, and you have to maintain it, manage it, patch it. Around the same time I made this picture, I saw a tweet by Sam Newman with pretty much the same message: all this stuff needs patching. The hardware, the operating system, the hypervisor, the VM OS, Docker, the container OS, and finally your app or your function, your code. All of this needs patching. Who of you likes patching infrastructure? No one. Me neither. And does it feel secure and stable with all this stuff, some Kubernetes, some DC/OS, some Mesos? I know guys from a company in Germany who started thinking about Mesos, DC/OS and Kubernetes, in whatever combination, three years ago. And today they're still not running any of it, because they say it's too complicated. It's too much stuff to learn.
The learning curve is too steep, and we don't have time to learn all this stuff. We are developers, we are not system administrators.

So if you use containers, and containers are still a great choice, I also love containers, I use them in some projects at customers of mine where serverless is not an option, you get great power, but also a high responsibility for the running system. Because just because it's possible to run an Oracle database in a container, or a WebSphere application server or whatever, that doesn't mean it's a good idea. You can do that, but all the guys I know who packaged an Oracle database or a WebLogic server into a Docker container say it was horrible to do. It's not a good idea. Also, if you run a container with a fully packaged operating system, you have many attack possibilities from the outside into this container. It's not secure. Perhaps there are containers with open debug ports, because someone just wanted to debug the production system. You're laughing, but I know guys who run production systems with open debug ports. Never, ever do this. Production systems must not be debugged. No, no, no. Never.

So if you have containers but want to go serverless, and serverless isn't the right name, perhaps you should call it containerless. That's a word Lynn Langit coined, she's one of the "serverless superheroes" interviewed by A Cloud Guru, and she said: containers are the new VMs, consider them legacy. So don't use them, only if there's no other way. And if you really have to use containers, use containers in a serverless way. How do you do that? Because if you're serverless, you just write your code, throw it into the serverless environment and say: execute this code depending on events A, B and C, give the function some memory, and the environment, the provider, will do the rest. And this is also possible for containers. At AWS it's called Fargate, and Azure also has the possibility to put containers into the cloud and have them executed depending on events. You just build your container, choose an orchestrator, because AWS has two orchestrators for containers, ECS, the Elastic Container Service, AWS's own container service, and EKS, the managed Kubernetes service, and you define your application. Defining your application means defining the events that are responsible for executing the container, how long the container should run, how much memory the container will get and how the container infrastructure should scale. And that's all. Then you can launch all the containers and run them, and Fargate will do the rest for you. It will bring up all the infrastructure for running a complete cluster environment. You just have to build your container and say how it should be executed.
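Just to give an idea of how little of the cluster you touch yourself, here is a minimal sketch using the AWS SDK for Java v2 to start a task on Fargate. All the names, cluster, task definition, subnet, are invented for illustration, and in a real setup an event source or a service scheduler would trigger this instead of a main method:

    import software.amazon.awssdk.services.ecs.EcsClient;
    import software.amazon.awssdk.services.ecs.model.*;

    public class FargateLaunchSketch {
        public static void main(String[] args) {
            try (EcsClient ecs = EcsClient.create()) {
                RunTaskRequest request = RunTaskRequest.builder()
                        .cluster("demo-cluster")            // invented cluster name
                        .taskDefinition("demo-app:1")       // invented task definition (your container, CPU, memory)
                        .launchType(LaunchType.FARGATE)     // no EC2 instances for you to manage
                        .count(1)
                        .networkConfiguration(NetworkConfiguration.builder()
                                .awsvpcConfiguration(AwsVpcConfiguration.builder()
                                        .subnets("subnet-12345")        // invented subnet
                                        .assignPublicIp(AssignPublicIp.ENABLED)
                                        .build())
                                .build())
                        .build();

                RunTaskResponse response = ecs.runTask(request);
                response.tasks().forEach(task ->
                        System.out.println("Started task: " + task.taskArn()));
            }
        }
    }

The memory and CPU settings live in the task definition, so the only "infrastructure" left on your side is that one definition plus the trigger.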
So now I'm talking pretty much about AWS stuff, Fargate or Lambda for serverless. And especially in Germany there are people saying: hey, vendor lock-in, that's bad, vendor lock-in is bad. Also in Singapore: vendor lock-in is bad. Yes, it depends. In Germany vendor lock-in is bad, so better we buy an Oracle database and have no vendor lock-in. If you buy your Oracle database or your IBM database, I call that a golf course decision. It's not based on technical requirements, it's based on sales and on relations between managers. So how does this vendor lock-in compare with all the cloud stuff?

And here Adrian Cockcroft comes into the scene again. Talking about his Netflix times, he said: we did a comparison, we did a calculation. If we moved our complete infrastructure running on AWS to two other clouds, whatever they would be, Azure, Google, whatever, it would still not be as expensive as doing it on our own. Providing the whole infrastructure yourself, at the same level the cloud providers do, is too expensive for, I would say, 98 percent of the companies. So vendor lock-in is not that bad, and you can build your serverless functions in a way that they are portable between the cloud providers. Just implement your business logic in a regular way, and then put a thin layer for the cloud-specific APIs on top.
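A minimal sketch of what I mean by that thin layer, with AWS Lambda's Java API as the cloud-specific part (the class names and the greeting logic are of course invented): the business logic is a plain Java class with no cloud dependency, and only the small adapter on top would have to be rewritten for another provider.

    // GreetingService.java -- plain business logic, no cloud SDK, trivially testable and portable.
    public class GreetingService {
        public String greet(String name) {
            String who = (name == null || name.isEmpty()) ? "world" : name;
            return "Hello, " + who + "!";
        }
    }

    // GreetingHandler.java -- the thin, provider-specific layer; the only class that
    // knows about AWS Lambda (needs the aws-lambda-java-core dependency).
    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    public class GreetingHandler implements RequestHandler<String, String> {
        private final GreetingService service = new GreetingService();

        @Override
        public String handleRequest(String input, Context context) {
            return service.greet(input);
        }
    }

For another cloud you swap only the handler class for that provider's entry point; the business logic stays untouched.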
So if you want to avoid lock-in and have total control, you don't move fast. You won't be able to react quickly to changes in the market. And I think that's one requirement we have today: we have to be fast when requirements change in the market, when our competitors come up with new ideas, of course they won't, because we are the market leaders. We have to be fast. We don't have time to care about infrastructure, to care about total control. We just have to release features. Does anybody of you know the book The Phoenix Project? I really recommend it. It's a book about DevOps, about agility on a high level, about coping with internal and external requirements. A really, really good book. It's written as a novel, but it has a lot of information in it. The Phoenix Project. Go out and read it.

Also, in Germany we want to be cloud-native. Also in Singapore everybody wants to be cloud-native, and nobody knows what cloud-native actually means. This is the landscape from the Cloud Native Computing Foundation, and this picture is approximately three months old, so it's already outdated. I think today there are many more of these, I call them tools. The Cloud Native Computing Foundation: foundation sounds very good, as if they just want our best and don't want money, but they want to earn money too. There are many tools, and if you use these tools, you are cloud-native. That's what the Cloud Native Computing Foundation says. No, not really. There's also a serverless bubble in this cloud-native landscape. We can enlarge it, and there are many, many platforms and tools and frameworks and whatever, and still not all the tools are on it. No, it's not the case that just because you're using some tool, you're automatically cloud-native.

With Kubernetes, this seems to be the worst. Kubernetes for me is just like the J2EE application server. Kubernetes itself is a great framework, a great tool, but I'm a developer, I don't want to care about Kubernetes. Luckily, more and more people are coming up with this: "Kubernetes is not for developers, and other things the hype never told you." That's the title of a talk at the Velocity Conference in the USA two months ago. Kubernetes is very complex, and Kubernetes will be the cloud operating system, I strongly believe this, but again, I don't want to care about Kubernetes or something else. I just want to use a ready-managed environment.

Who of you has heard about the public Kubernetes cluster from Tesla? No one. Tesla thought it was a good idea to start playing around with Kubernetes. It was the beginning of this year, I think around February or March, I don't know exactly. The engineers installed a Kubernetes cluster on AWS and they didn't secure it. It was public. They thought: we don't have to secure it. Why? It's just a standalone cluster, no connection to our internal systems, so no problem. No problem in terms of data, yes, but there's computing power, and it was accessible publicly. Some guys detected: hey, there's a cool cluster with a lot of power, let's use it for mining bitcoins. So Tesla paid money for other people mining bitcoins. I think the Tesla guys are not that stupid, but they didn't secure the Kubernetes cluster well enough. Think about that if you install a Kubernetes cluster.

But back to cloud-native. What does it actually mean? It took me quite a long time to find it on the Cloud Native Computing Foundation website. It's on the Frequently Asked Questions page, which is just linked at the bottom, very small and very hidden. It actually means: it's containerized. Every piece of code, every application, function, whatever I execute, runs in a container. That's fine for serverless, because with serverless I upload my code, the cloud provider packages it into a container and executes the container at runtime. So I don't have to care, but it is executed in a container. Containerized, check. Dynamically orchestrated: for sure, because I have no influence, or not that much influence, on how the code, the function, is executed. In AWS Lambda I can just say that at most 1,000 or 2,000 instances of my function can be executed, and one function can have memory from 128 megabytes up to, I think it's now about 3 gigabytes, I don't know exactly what the latest number is. But it's automatically orchestrated. For every event that occurs, a function is executed, and if there are plenty of simultaneous events, containers are started for all of them. Even if there's a denial-of-service attack, it's not your problem. It's handled by the infrastructure, by the cloud, and by your credit card. But on the infrastructure side, you have no problem. Last point: service-oriented, microservice-oriented. Of course, serverless functions should by design be very small. Just a pure function, like the definition of a pure function: no side effects, and with the same input parameters you always get the same output. And a microservice is mostly much bigger than just a function, so if a Lambda function, a serverless function, is smaller than a microservice, it's also microservice-oriented. So all three bullet points are exactly the case for serverless. You don't have to use Kubernetes to be cloud-native.

And there's also a serverless working group in the Cloud Native Computing Foundation, which released CloudEvents in a first version. CloudEvents is a project for aligning the structure of the events that trigger the functions. At the moment, all the cloud providers have a different structure for the events that invoke their functions, and the CloudEvents project tries to align this. So this was version 0.1, I think, in early summer. And today none of the cloud providers really supports this project. Why? Because the cloud providers don't want you to have an aligned event structure and move from one cloud provider to another. They want to bind you to their cloud. Naturally.
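Just to give a feeling for what "aligning the event structure" means, here is roughly what such a common event envelope looks like, built as a plain map in Java. The attribute names follow the early 0.1 draft as far as I remember it (later versions renamed most of them), and the event type and payload are invented:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class CloudEventSketch {
        public static void main(String[] args) {
            // One common envelope, independent of which cloud fires the event.
            Map<String, Object> event = new LinkedHashMap<>();
            event.put("cloudEventsVersion", "0.1");
            event.put("eventType", "com.example.order.created");        // invented event type
            event.put("source", "/shop/orders");                        // where the event comes from
            event.put("eventID", "A234-1234-1234");
            event.put("eventTime", "2018-11-01T12:00:00Z");
            event.put("contentType", "application/json");
            event.put("data", Map.of("orderId", 42, "amount", 19.90));  // the actual payload
            System.out.println(event);
        }
    }

Compare that with the very different JSON shapes an S3 event, an API Gateway request or an Azure trigger hand to your function today, and you see why a common envelope would make moving functions around much easier.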
And if you're still a fan of cloud-native, you should read this cloud-native definition in bullshit-bingo style. I'll read it out loud, because if you hear it, it's even more confusing than just reading it: "Cloud-native architectures take full advantage of on-demand delivery, global deployment, elasticity and higher-level services. They enable huge improvements in developer productivity, business agility, scalability, availability, utilization and cost savings." Sounds good. Cost savings, most important point, cost savings. When customers ask me, Niko, what do we save if we move our infrastructure to the cloud, I respond: nothing. In the first months, or the first one or two years, you will spend more money than now. Because right now you have a running infrastructure, you can use it, and moving this infrastructure to the cloud costs time, costs effort, costs manpower to build the infrastructure up again. This will level out over a few years. Just moving to the cloud, whether to plain VMs or to a serverless infrastructure, won't save you money. You will get more possibilities, you will become much more flexible, and perhaps much better prepared for changes, market changes and other things. But you won't save money by just lifting and shifting your infrastructure to the cloud. So if your boss says let's go to the cloud to save money, tell him: you won't save money.

So, enough of the rant. Let's get a bit more sophisticated and do a comparison of the advantages and disadvantages of containers and serverless. Containers: you have control and flexibility, you can put whatever you want inside. You're vendor agnostic: as long as you use a Docker container, or some other container runtime, but Docker is the best-known platform today, you can execute it wherever the Docker daemon runs. You have an easier migration path, because a Docker container is still like a server or a machine: you just put your application into a container and you can port it wherever you go. Disadvantages: you have administrative overhead, we already covered that point, you have to maintain all the layers needed to execute containers. Everything is slower, although that's not really the case any more, because Kubernetes did some really good work on scaling the infrastructure, so I should delete this point for future talks. You have running costs, because your infrastructure runs all the time, even if you don't need it. It's hard to get started, remember the Tesla guys? Of course, a simple installation of Kubernetes is not that hard, it's just a few lines at the console and then you have a running cluster, but maintaining it in the right way, that's the hard part. And you need a lot of manual intervention. If you want to support a 24-hours, 7-days-a-week infrastructure with Kubernetes, you need manpower, and if I do the calculation based on the German legal restrictions, I need at least six to eight people just for maintaining a 24/7 system. And paying eight people a year, that's quite a lot, and that's just the people, you don't have a single container executed at that point.

In comparison, with serverless you have near-zero administration of infrastructure, it's all there. You only pay for what you use, you pay per execution, and therefore you have zero cost for idle time: if you don't use it, you don't pay for it. You have autoscaling, autoscaling of your credit card too, I did that as well. Faster time to market, because you can focus on your business, you don't have to care about technical things. Microservice nature: you could say you have a clear code-base separation, but functions are much smaller, so you need more functions, and more functions sometimes mean more problems, so be aware of all the management effort. Reduced administration and maintenance, that's the same as the first point. Disadvantages: no standardization yet.
All the cloud providers have different structures, different standards for your functions. That's not much fun, but with a bit of effort you can handle it; it's just not that good at the moment, and I think it will get better in the future. You have a black-box environment: you can't look inside the environment that executes your function. At least until last week. Last week were the re:Invent keynotes, and they announced Lambda Layers and custom runtimes. I only read the headlines, because I was on a cruise vessel at sea and didn't get the chance to watch the keynotes themselves. Lambda Layers and custom runtimes give you the possibility to deploy not only your own code, but also your own runtimes, so you can have a PHP runtime if you really want one. I expect there will be many, many WordPress-on-Lambda functions in the future. And you can share code between functions with the layers. So you could, you could, not you should, you could build a Lambda layer with the Spring Framework libraries and then just deploy your own code based on the Spring Framework. You deploy the Spring libraries only once, to the layer, and the layer is used by all the functions you're running. This can be a good idea, but it can also lead to problems because of cross-dependencies, or dependencies in the wrong versions, whatever. And anyway, it's not a good idea to execute Spring applications in AWS Lambda. I wrote about that in my book as an example, also about running a Java EE example in AWS Lambda. It's possible, but it's not a good idea. Forget about it. Just write small functions.

Vendor lock-in, you still have that, and I think even with CloudEvents the lock-in will stay in the future, so we have to cope with it. Then there's a thing called cold starts, and this one is really bad. A cold start is when your code is executed for the first time: the cloud provider packages it into a container and powers the container up, so your code has to be initialized. Depending on your environment and your code, this can take some seconds. In the case of a Java function, it can take a few seconds more. Even JavaScript functions can take up to two or three seconds, I have already seen this, if too many libraries are deployed with the function. So you should always take care of the dependencies you deploy with your function. Use fewer dependencies. Sometimes it's better to duplicate code, to write the code yourself, instead of pulling in a dependency or a library just to use one little function, like left-pad. Exactly. And complex apps can be hard to build and manage. As I said already: more functions, more problems. There are still no really good tools to manage all the functions and all the resources you need for execution. This is still a bad thing. But if you can live with this disadvantage, and this one, and this one, and you like all these advantages of serverless, then you should go with serverless.

And now you're asking me: we heard about the advantages and disadvantages of serverless and containers, so when to choose what? It depends. Consultant answer. Let's do an excursion to Kelsey Hightower. Kelsey Hightower, Google evangelist for Kubernetes. Is it pronounced Kubernetes or Kubernetes? I don't know. In Germany some people say it one way, some the other, I don't know. Kelsey Hightower is an evangelist for this tool.
And last year, also in October, he said: I don't see any advantage in dealing with source code, in uploading source code. Containers are a good thing, I understand containers, I don't understand this other thing. And in April this year he said: oh, perhaps I have to change the way I'm thinking. Because serverless is a really good approach for all the eventing stuff, for all the asynchronous stuff. Perhaps not the best fit for synchronous applications, but a good way of doing asynchronous stuff. So perhaps we have to rethink our thinking.

If we look at some use cases, we have this serverless web architecture. This is a standard web architecture: S3 serving the static web content, and the dynamic web content going via API Gateway, Lambda and perhaps DynamoDB or other databases. Actually, around 80 percent of all the Lambda functions deployed in AWS are synchronous web functions. You can do this, but it's not the best way, because you have this cold-start latency, and there is no final solution for that yet. So I prefer the asynchronous stuff. You have some data, mass data, machine-learning data, IoT data, whatever, streaming data, perhaps coming in via Kinesis streams or some other tool, and then you process it with Lambda and store it in some database. This is completely asynchronous, there's no user interaction involved, and it doesn't matter whether the Lambda function takes 100 milliseconds to power up or two seconds. And that's only the first time: once a function is warmed up, you won't have the cold-start latency again. But even if you do have a cold start, it won't bother you in an asynchronous scenario.

I also built a serverless analytics scenario. Perhaps you have heard about this European stuff, the GDPR. Crazy, really crazy. Some customers didn't want to use Google Analytics any more, so I created a smaller solution, serverless analytics: a small JavaScript library sending some data to Amazon API Gateway, collecting it in a Kinesis stream, processing it via Lambda and storing it in DynamoDB. This is a great use case for processing data. It's only half user-interactive, because it's an asynchronous request from the web page, so the web page isn't blocked while the Lambda function powers up, and the Lambda function is hidden behind a Kinesis stream, so there is no cold-start problem.

And perhaps you have heard about TJ Holowaychuk, the guy who created the Express framework for the Node.js environment. He also says this is where Lambda shines: fantastic for pipelines and data processing. And that's really where all the serverless stuff is great: for processing data, not for doing user interaction. I also advise my customers to use Lambda-style functions for asynchronous data processing rather than user-interactive stuff, things like converting documents from Word to PDF, or just collecting whatever data occurs.
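To make the asynchronous pattern concrete, here is a minimal sketch of a Kinesis-triggered handler using the aws-lambda-java-events types. The record format and the table name mentioned in the comment are invented, and the DynamoDB write is only hinted at to keep the sketch short:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import com.amazonaws.services.lambda.runtime.events.KinesisEvent;

    import java.nio.charset.StandardCharsets;

    public class AnalyticsStreamHandler implements RequestHandler<KinesisEvent, Void> {

        @Override
        public Void handleRequest(KinesisEvent event, Context context) {
            // Lambda is invoked with a batch of records from the stream;
            // nobody is waiting for a response, so a cold start doesn't hurt here.
            for (KinesisEvent.KinesisEventRecord record : event.getRecords()) {
                String payload = new String(
                        record.getKinesis().getData().array(), StandardCharsets.UTF_8);

                // In the real function this would be parsed and written to DynamoDB,
                // for example one item per page view in an "analytics-events" table (name invented).
                context.getLogger().log("received: " + payload);
            }
            return null;
        }
    }

Whether this handler needs 100 milliseconds or two seconds to come up simply doesn't matter, because the Kinesis stream buffers the events in the meantime.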
And yeah, serverless is the next step in the evolution of containers. And if we think further, or rather look at what's already there, there is now a serverless relational database cluster: Aurora Serverless. Aurora itself is a database cluster built by AWS with interfaces for the MySQL and PostgreSQL drivers, so you can use your existing PostgreSQL or MySQL code to access an Aurora database. And Aurora itself is a distributed cluster: you just use it, and the data is stored redundantly across multiple availability zones, and if you want, automatically in multiple regions too, so you don't have to care. And this database cluster now comes in a serverless flavour. There is the database storage where your data lives, and the compute capacity between your application and that storage simply isn't there when you don't need the database. When your application needs a connection, a compute engine is taken from a warm pool of database capacity and put in place, your application can access the data, read it, write it, delete it, whatever, and afterwards the compute engine is put back into the capacity pool so other customers can use it. So you don't have to pay for a running cluster 24/7. With Aurora Serverless you pay per second: if you need it only for one or two or three seconds, you pay just for that amount of seconds, not for the whole month. That's really an interesting approach. And at re:Invent, AWS also addressed the fact that Aurora Serverless had so far only been accessible via the MySQL interface: they announced that there will also be a PostgreSQL interface for Aurora Serverless. So that's really an interesting approach.
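And the nice part: the application code doesn't know any of this. A minimal sketch, assuming a MySQL-compatible Aurora Serverless cluster endpoint (host name, credentials and table are invented, and normally you would run this from inside the same VPC):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class AuroraServerlessSketch {
        public static void main(String[] args) throws Exception {
            // Plain JDBC with the standard MySQL driver; the "serverless" part is
            // entirely on the cluster side, the application just sees an endpoint.
            String url = "jdbc:mysql://demo.cluster-abc123.eu-central-1.rds.amazonaws.com:3306/shop";
            try (Connection con = DriverManager.getConnection(url, "demo_user", "demo_password");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
                if (rs.next()) {
                    System.out.println("orders: " + rs.getLong(1));
                }
            }
        }
    }

If nothing talks to that endpoint for a while, the compute behind it scales down and you stop paying for it; the code above stays exactly the same.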
Yeah, serverless. It's not a question of if, serverless is already here. AWS Lambda, as the most popular offering, is now four years old. I started using Lambda three years ago, and it's really amazing what they have developed in the meantime. If you still haven't used it, try it out, it's amazing. Serverless is just a question of when. I think in the next two, three or four years you poor guys will still have to deal with Kubernetes, but I think in five years none of the developers will be talking about Kubernetes any more, and all this serverless stuff will be widespread and widely used in applications. So it's already there. Go use it, and don't ask yourself: should I really use it, is it really the next big thing? It is. Just give it time. And there are also the first companies taking a serverless-first approach. This is Trustpilot from Denmark: if serverless is not available or practical, containers are recommended, but only if serverless is not available or practical. Virtual servers are considered legacy and should be avoided, so no more plain EC2 at all, and serverless is used wherever applicable. And I think, starting from tomorrow, all of you are using serverless, right? Hopefully, perhaps. Thank you so much for listening. I really enjoyed it, and if you have questions, just ask. Thanks. The slides are available at this URL, if you need the slides. Are there any questions I can answer?

Not really a question. You said that around 70 or 80 percent of all the functions are used in a synchronous website sort of operation, right? Yeah. On the other hand, it looks like you, and guys like the creator of Express, TJ Holowaychuk, are sort of aligned that this is not a very good use of functions. So what's the problem, why does it happen?

It's the same problem as with left-pad. Just because there's npm, it's easy to publish a package, a library, on npm and have those fifteen minutes of fame for everyone. All the script kiddies, sorry for this, go out and publish some Lambda functions. I don't know whether all these 80 percent of deployed functions are actually executed. It's just a number AWS announced, or communicated, about all the deployed functions and which events they depend on, and 80 percent of the deployed functions depend on HTTP events. Perhaps they're never executed, it was just a test or whatever. So, yeah, it's partly a statistical issue.

Then I should probably reverse my question. I'm in the process of building that kind of application, with Lambda functions backing some front-end application. Am I doing something wrong? Should I go to containers? What problems will I be facing?

If you have the requirement of fast response times 24/7, then perhaps containers could be a better fit today. This company Trustpilot, which takes this serverless-first approach, has quite a lot of engineers focused on reducing all this cold-start latency. They're investigating each function to see which bits they can eliminate at startup. They also have many web functions running, but they have many people focusing on exactly this cold-start problem. If you have the manpower for all of that, it can work; if it's just yourself, containers might be a better approach today. But then think about Fargate or such things, or at least a managed Kubernetes environment, because I don't know if you want to care about Kubernetes yourself. A properly managed environment? Yes, I think so. Thank you. You're welcome. Other questions? Yes?

You mentioned that there is a soft limit on how many functions you can run simultaneously, and at that time you told me this limit was tied to payment levels. Is that still the case?

There is an initial limit on the number of simultaneous executions of functions. This number is 1,000 parallel executions of functions in total, and you can expand it to any number you want by contacting AWS support. Yes, yes. If you're a new customer, the number will be 1,000. If you're an experienced customer who has used Lambda for several months and always paid the bill correctly, then AWS will expand it automatically to 3,000. But if you want a higher number, you have to contact support, and they will do it without any problems. Any other questions? No more questions? One question?

I think one of the major challenges with serverless functions is monitoring. Do you have a good recommendation for how to monitor different functions and how they play together?

Monitoring is one of the big challenges in serverless, yes. There is a service called AWS X-Ray, and it's not bound to serverless, it works for many AWS resources. X-Ray gives you a deep look inside all the resources communicating with each other, so you can also see the percentage of failed and successful requests between them, and exactly what the cause of a failed execution was, and all this stuff. So X-Ray is one good solution for this. On the other hand, it depends on what you have been used to so far; from every function you can of course write your own code to send data to your own monitoring solution. But within AWS itself, it's X-Ray. And I think they announced a new solution at this re:Invent last week, but I don't know the name, I'd have to look it up again. AWS itself is aware of this problem of monitoring all the functions and they're working on it; there are not that many good solutions on the market yet. Okay? Then, thank you so much for listening. Have a nice evening. See you.