So yeah, just to give you some background, I'm the co-founder and CEO of Koyeb. I basically spent the last decade building cloud service providers, mostly in Europe. I started as an engineer at a cloud service provider called Outscale in 2011, where we were trying to bring the features of AWS to the French and European markets. I then spent six years building another cloud service provider called Scaleway. And in 2020, I started Koyeb with my two co-founders. I'm expecting to spend the upcoming decade building other cloud service providers, because I apparently tend to do that.

Maybe to get to know who is in the audience a little: who here has already set up a service mesh in production? So we have a few people, but not everybody; I see about a dozen people, for those following remotely. And who has a network engineering background, maybe? Not more, actually even fewer, about five people. I ask because, from my perspective, service meshes are the modern form of network engineering, I would say. And we'll go into how we implemented ours at Koyeb, what it encompasses, and what it brings to the user.

To give you some context on what we are doing: at Koyeb, we are basically a cloud service provider, and we provide a completely managed platform. We basically allow developers to deploy a full-stack application in minutes. And we do that in a way where we don't want you to learn anything specific to the cloud provider, or anything specific to our own platform. We take care of basically everything infrastructure-related. You can deploy both containers and native code, in which case we will build the application for you and deploy it to production. And, coming back to our service mesh subject, we also completely abstract the networking part and the service mesh.

From a high-level perspective, that's what it looks like: we have this Koyeb serverless platform, which abstracts several different functions. We abstract cloud providers, orchestration, continuous integration and continuous delivery, networking, with both our global components and a service mesh, and also some monitoring and storage. And today, obviously, we will focus on two parts: one is the service mesh, and the other one is the global part, how we basically deploy globally.

The key question we will try to answer is: how are requests processed when you deploy an application on Koyeb? As mentioned, we have two different components. We have our global components, because we provide a platform with 250 edge locations and 25 core locations built in; that's the global part. And the second part is the service mesh, which deals with load balancing, provides a completely encrypted network to the user, and also provides service discovery.

So, as I mentioned, from our perspective the goal was to provide a completely zero-configuration way to deploy services, and we want that to be easy across multiple regions. We want developers to be able to deploy in two or twenty locations without having to think about how to make their different microservices communicate together. So the first goal is to have multi-region services and to provide completely transparent inter- and intra-region networking. And obviously, this needs to be completely secured. There are two components to that.
One is that everything needs to be encrypted, because we are also running across the internet. And we want to be completely multi-tenant, because we are a cloud service provider: we have multiple customers on the same machines, which, as we'll see, brings some challenges, because not all technologies are designed for this, and with some scale in mind on the multi-tenancy side. And the last part we bring is edge acceleration.

To make it more concrete, I will just give you a short demo, a prerecorded one, of how we deploy a Next.js service in Paris on Koyeb and what it looks like. So here we're basically deploying an application which is called Hello, I think, deployed from GitHub; it's a demo application we provide. It's going to build automatically. We can select how many machines are running, which size of machines we are using, and in which locations it's going to run. And that's where the magic happens, because we take care of the build process and we automatically provision a completely ready-to-use service mesh. So at the top you have a public URL, which is TLS-encrypted by default; you can add your own custom domains later if you want. And you also have a private domain, which is ready to use: if you deploy multiple services, they will be able to communicate together, as we'll see in a minute. So the build succeeded, which is good news because it's a prerecorded demo.

And that's the demo you'll see: you can see through which edge location the request goes and in which core location it ends up. It's Paris and Paris, because it was recorded in Paris yesterday. You can test it yourself: if you go to demo-koyeb.koyeb.app, and not .com, you'll land on this demo application. And that's what I got yesterday when I landed. Actually, and that's where the internet magic and the BGP magic happen, you might not go through the same exact edge location every time. Yesterday night, I was going through Madrid; then this morning, in the cab, I was going through Marseille. But that's another story. I will not focus deeply on how BGP works, but it's mostly business negotiation, which doesn't always provide the best latency.

And the question we will try to answer is how we reach this app. The first step is basically the edge. When you type the URL, the DNS resolution is going to return three anycast IPs, and your browser is going to use the first one. Wherever you are in the world, it's always going to be the same three IPs; that's BGP, and BGP anycast, doing the work. In this schema, you have a user in Valencia and a user in New York City. The user in Valencia is going to go to the nearest edge location, which might be Madrid or Marseille, depending on peering agreements, mainly, but it's supposed to be roughly the nearest geographically, hopefully. And the second user is going to go through New York City. What this brings is that the TLS connection is terminated at the edge, we can provide HTTP/3 for all the people whose browsers support it, and you also get caching at the edge.

The second step is that our edge network is always going to route you to the nearest core location. Sorry, not necessarily the one where your service is actually running: it's the nearest core location first, before routing you to the core location where your service is actually running.
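To make the anycast behavior concrete, here is a minimal sketch in Go of what that resolution step looks like from a client. The hostname is a placeholder, not the real demo domain; the point is that the same small set of IPs comes back wherever in the world you run this, and that it's BGP, not DNS, that steers you to the nearest edge location.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Placeholder hostname: substitute your own app's public domain.
	ips, err := net.LookupHost("your-app.koyeb.app")
	if err != nil {
		panic(err)
	}
	// Wherever this runs, from Valencia or New York City, the resolver
	// hands back the same anycast IPs; the decision of which edge
	// location answers is made by BGP routing, not by DNS.
	for _, ip := range ips {
		fmt.Println(ip)
	}
}
```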
So remember, we deployed the service in Paris. So if you are in New York, you're going to go through the edge location in New York, then through the core location in New York, and then to Paris. We'll see how it works behind the scenes, but that's the way we implemented the mesh.

Before we get to the core location where the complete service mesh is running, let me give you a few more details on the global context and the technology stack we are using. On the orchestration side, we are using Nomad, plus our own technology, which is basically communicating with all these components. We use Firecracker on the virtualization side, to deploy micro-VMs; it's running on top of our bare-metal servers. And on the networking side, we are using Kuma and Envoy, which deal with the service mesh and service discovery.

What Kuma and Envoy basically bring us is completely automated service deployment. We get all the basics you'd expect as an end user: you get secure private connections, so you don't need to think about encryption between your different services; you get completely automated DNS provisioning for your internal services; and you get layer 4 and layer 7 load balancing. And it's really completely transparent as an end user.

In each core location, we have this stack running. It looks like this: you have bare-metal servers, which are the hypervisors, micro-VMs on top of the hypervisors, and the containers you deployed running inside the micro-VMs. So that's pretty standard in terms of service mesh deployments. And Kuma is basically taking care of the whole service mesh control plane. What it does is push the configuration out to all the data plane proxies, which technically embed Envoy. It's not broadcasting the configuration, exactly; it's distributing it to where it needs to be. So on the control plane side, you have Kuma, which deals with all this configuration, and on the data plane side, you have the Envoy proxies, which are managed by Kuma. And that's the key thing.

So going back to our service, which is located in Paris, the question is: what does a request go through? We stopped at the edge earlier, and we said it was going to the nearest core location. Then we have two components in what we call the data path, and both of these components are not actually standard in Kuma; they're things we had to add. The first one is what we call the GLB, a global load balancer, which is going to identify in which core location the service is running. So if you landed in New York, this is the component which is going to tell you your service is in Paris, and it's going to redirect traffic to Paris. And then we have a second component, the Ingress Gateway, which is going to identify on which exact machine in the core location the service is running. Actually, I was told that an ingress gateway component came up pretty recently in Kuma. We are not using it because we needed it earlier, and it was not yet released, so we have our own implementation, which deals with multiple customers and multiple meshes.

So that's an overview of the whole data path. On the left, you have the edge servers, which connect with mTLS to the GLB, which is located in the core location. In the scenario where you land directly on the right core location, the GLB is going to send the request to the Ingress Gateway in the same location, and the Ingress Gateway is going to route the traffic to the right sidecar, next to which your micro-VM is actually running.

Now, if we go to the scenario where you have two services in the same location, so in Paris: in this case, the use case is you have a web application and you have an authentication service, both containers. The authentication service could be a Rust service, for instance. What the mesh brings is completely automated DNS provisioning, and it also brings you native layer 4 load balancing. As an end user, as a developer, you can just do a curl on auth:3000, and it's going to be completely transparent for you: your web service is able to reach the auth microservice without having to deal with any of the complexity. In this case, it's pretty simple: the data path for communication inside of a region is direct, so it's not going through another component; it goes from one sidecar straight to the other.
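To make that transparency concrete, here is a minimal sketch in Go of what the web application's call to the authentication service could look like. The service name auth and port 3000 come from the example above; the /verify path and the response handling are hypothetical, purely for illustration.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// "auth" resolves through the mesh's automatically provisioned
	// private DNS, and the sidecar load-balances across instances.
	// The plain http:// scheme is intentional: the hop between the
	// two sidecars is encrypted by the mesh, not by application code.
	resp, err := http.Get("http://auth:3000/verify") // hypothetical endpoint
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status, string(body))
}
```

The design point is that nothing in this code is mesh-specific: no TLS setup, no service registry client, just a DNS name that happens to be provisioned for you.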
And if you're basically in a scenario where you're deploying across multiple regions (there is a "two" being shown to me, I'm not sure what it means; I have two minutes left, maybe), what I was going to say is: if we deploy these two services in different regions, it's going to rely on another component, which is called the zone ingress. It's actually pretty similar to the ingress gateway we had earlier, but it's for inter-region traffic. So the web service cannot go directly to the sidecar located in New York; it's going to go through the zone ingress, which is located in New York, before reaching the right sidecar in New York.

When we built this platform, we had to choose the right technologies, and we had two key requirements, which were mostly fulfilled by Kuma at the time. The first one is multi-tenancy, and we rely on open-source technology. The only one which was able to provide this was Kuma, with its multi-zone feature. So that's why we ended up with Kuma, basically.

We still have several challenges and limitations with this technology. One is multi-tenancy scaling: we have about 3,000 different service meshes already, because each customer has their own service mesh; that's one key point of our implementation. And in the current implementation, when you add meshes, it adds latency. We ended up having sidecars that were taking five minutes to boot, so that's part of the challenge we are trying to tackle and solve; we have several performance issues there. We also have the challenge of memory overhead for sidecars, which is a huge concern when you deploy tiny microservices like functions, or services with 120 megabytes of RAM. And there are two things we are looking for which are already part of the technology, outbound IPs and TCP/UDP support, which we need to implement.

So we are basically looking to expose way more features than we have now. In the near future, we already have built-in observability in the platform, which is going to be released this week, actually. And we are basically expanding right now to 25 core locations. And that's going to be it for now; I skipped really quickly over the last slides. Thank you very much. We are basically announcing the public preview of the platform today; it's already available, you can sign up. And that's all. If there are questions, I'm happy to take them if I have some time.