Hi everyone, nice to meet you. My name is Idit Levine, and I'm presenting today. I'm the founder and CEO of Solo.io. The title doesn't look very technical, but this talk will be technical — there will be demos and so on, because that's what I do. Hopefully it will be interesting. So let's start by talking about the evolution of what we're doing right now. Here's what we see: when we talk to our customers, there are no exceptions — they're all trying to become cloud native, doing a kind of transformation journey. And they always start with what everybody's talking about today, which is microservices. They have a monolithic application, one big binary, and they're really, really interested in migrating to microservices. We'll talk in a second about why. But when they get there, what they discover is that the challenge isn't only rewriting the application — it's actually way more complex after that. For instance, simple stuff: now you have way more APIs. How do you manage that? Or something simple like: the services are communicating with each other — how do you make sure they communicate in a really safe, secure way? And then how do you make sure you understand what's going on with all those microservices? All these problems today are solved by a technology called service mesh. That's the thing everybody's talking about, so we'll go over it quite a bit. But I personally believe that what's special about service mesh is not the three problems people claim it solves today — security, routing, and observability. I believe the beauty of service mesh is the technology itself. What we just did is abstract the network. And if we're abstracting the network, there is way more we can do.
And that's the "beyond," right? Today people are focusing, for instance, on the health of services, but there is a lot more to do because of the way the technology works. So this is really high level, just to explain what people are doing today; now let's dig in and focus on understanding the situation. Okay, so microservices. Microservices give us a lot over the monolith. First of all, better agility: you can go so fast. In our company recently, seriously, we're merging something like ten pull requests a day. We're going really, really fast, and the reason is because we can: everything is abstracted, everything is way smaller, so it can move very fast. It's the right design — it's how we should architect an application: abstracting, making sure the logic is encapsulated in the right places, and so on. And it's very flexible: I don't care which language you use, I don't care what you're doing — it really lets you drop the constraints you don't want. That's great, but it creates a lot of problems. Suddenly everything is on the network. Your two microservices used to live in one binary; you didn't really need to take care of the traffic between them, because one binary is pretty simple. But now it's distributed, and with distributed systems there are a lot of problems. First, complexity: understanding what's going on. The logic no longer lives in one place where it's pretty simple to figure out — it's spread across a lot of microservices, and each of them is replicated. So how do we understand what's really going on? Then, of course, performance. Before, you could mostly look at the logs of one process and see that it was working well.
But now we're putting everything on the network, so latency can be a big problem for your application, and you need to make sure your microservices are resilient to that — that's a whole new story. In terms of cost, of course, it's harder: there are a lot of new tools that need to be distributed. It's complicated. And the last one is testing. Before, you ran a lot of unit tests and, happily, a few integration tests. Now it's mostly integration tests — really complex, with a lot of moving parts. So again, everything we do in our community — and I'm really active in the open source community — solves a problem, but then introduces a whole field of new problems. And that's where the opportunity comes for companies like Solo.io — like us. Okay, so let me iterate once more, because I don't know everyone's level and I want to make sure everybody understands what I'm talking about. Before, everything was in one big binary; all the logic of the application was there. If you wanted to scale it, you had to scale the whole application, the whole binary — even if only one little piece of logic was the one you needed to scale, you still had to scale everything, which is very expensive. Now we've spread the code out, but we kind of lost the state of the application. Before, you could just attach a debugger to the binary and understand what's going on. Now I can't, because there are so many processes and each of them is doing different things — I can't just attach a debugger; the state is spread all over the place. And I'm showing a very nice diagram here with, what, seven microservices. But this is the real world, right? This is what we actually deal with — this is an example of the amount of microservices you're supposed to manage.
And then you go on Twitter and you see that kind of stuff, right? And it's true — this is what we're dealing with today. So again: we made a huge move in the right direction, but we added a lot of complexity with it, and we need to help fix that. And it's not only the application itself — it also changes the tooling. Most of the people I know, most of the customers we have today, have a monolithic application, and the right way to migrate is not to just rewrite it. Because they're still working on it, they'll keep working on it in the future, and there's a big organizational friction between the new team and the old team — there are a lot of problems. What we really need to do is what Lyft did, what Twitter did, which is to somehow glue the monolith and the new microservices together so that to the user it looks like one big application, and then start ripping the monolith apart. That means that in the interim you may have an application that isn't only microservices — it's also still partly the monolith, and maybe you want to go all the way to serverless too. And that means your tooling has to change. Here's an example: maybe before, an APM did the work for you, but now there's way more load, a lot more metrics, a lot more going on in your cluster, and therefore Prometheus is probably the right solution. When we talk about logging: before, you just took the logs from the host and you'd be fine. But now everything is distributed and replicated, so how do I know where the transaction went, where to take the logs from? For that, you do something like distributed transaction tracing.
That means you need to attach an ID at the beginning of the transaction, make sure it follows the request all the way through all the microservices, and then correlate the logs by that transaction ID to make sense of what happened. So you need something like OpenTelemetry. A quick history: first it was called OpenTracing, then Google decided to compete and called theirs OpenCensus, and now they've merged together under OpenTelemetry — that's my recap. Then there's debugging. Before, I would debug my microservice by attaching a debugger to the application — GDB, that's the one I know. When I hired one of my engineers — I worked at EMC before — and we hit a problem, I said to him, okay, let's attach a debugger. And he said: a debugger? Because we're not debugging anymore, right? What we're doing is printing, troubleshooting, reading logs. That's not how we work anymore. So if you look at both worlds: before, you'd use a native debugger; today you can't apply that to a microservices architecture as-is — and yes, from SOA we moved to microservices and the problem only grew. For deployment: if before you used something like configuration management, that doesn't make sense anymore; you'll move to something like Helm or operators and objects in Kubernetes. And the last one is testing again: before, it was a lot of unit tests and a little bit of integration tests; now it's all different — a little bit of unit tests, but a lot of very heavy integration testing. In a nutshell, this is what happened.
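The transaction-ID idea can be sketched in a few lines. This is a minimal, hypothetical illustration — not OpenTelemetry's real API, and the header name `x-trace-id` is just a stand-in: each service reuses the incoming trace ID (or mints one at the edge), tags its log lines with it, and forwards it downstream, so logs can later be grouped per transaction.

```python
import uuid

def handle_request(headers, log, service_name):
    """Propagate a trace ID across services and tag every log line with it."""
    # Reuse the caller's trace ID, or start a new trace at the edge.
    trace_id = headers.get("x-trace-id") or uuid.uuid4().hex
    log.append((trace_id, service_name, "handled"))
    # Forward the same ID on all downstream calls.
    return {"x-trace-id": trace_id}

log = []
edge_headers = handle_request({}, log, "frontend")    # edge: starts the trace
handle_request(edge_headers, log, "cart-service")     # reuses the same ID
handle_request(edge_headers, log, "payment-service")

# All three services logged under a single transaction ID.
trace_ids = {entry[0] for entry in log}
assert len(trace_ids) == 1 and len(log) == 3
```

Real tracing systems add parent/child span relationships and timing on top of this, but the correlation trick — one ID threaded through every hop — is the core of it.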
We took this monolithic application, cut it into small pieces, and now the connections between them go over the network. And simple questions like, as I said before, how microservices communicate with each other — how do they do it in a secure way? — and how we understand what's going on in the system, are new problems that we introduced. And those are the problems service mesh is trying to solve. So — service mesh. Quick poll: who's aware of service mesh? Anyone? Yeah, obviously. We'll go a little faster then. So what is a service mesh? A service mesh today tries to solve three major problems. The first one is security: it provisions certificates for you, mutual TLS, and policy. For observability, it focuses on metrics and logs. And for routing, it gives you traffic control and resilience — things like fault injection, circuit breaking, retries, and so on. That's it in a nutshell for the few people who don't know; trust me, I'll go very fast on this and then we'll do the really interesting, innovative stuff. At the end of the day, you run microservices, and what a service mesh does is put a sidecar proxy next to each one, and rewrite the iptables rules so that every communication into the microservice and every communication out of it goes through this proxy. And this proxy is something I can configure — and if I can configure it, I can control a lot. So this is, in a nutshell, how your cluster will look. Say you have five microservices. The first thing you still need is an API gateway, because at the end of the day you need to make sure the entry into the cluster is secure and safe.
You don't want — especially with microservices — every developer to take care of security themselves; you can abstract it away from them, and that's exactly what the API gateway is for. It takes care of everything related to north-south traffic. The idea of the service mesh is that you put a sidecar proxy next to every microservice, and that takes care of the east-west traffic. So now you have a lot of proxies, and you need to manage them. The API gateway manages the proxy at the edge, because the configuration there should be different — it's a little more dangerous, since traffic is coming in and you don't know from where. Versus communication between two microservices, which already passed through the API gateway once — those proxies get their configuration from the service mesh control plane. So what happens is: every time traffic comes into your cluster, it goes through the proxy at the edge, and every time traffic moves inside your cluster, it goes through a proxy configured by the service mesh. It's pretty simple — you're abstracting the network. Every communication between services is being intercepted; it can be logged, and it can be checked against a policy definition — for instance, are these two services allowed to talk to each other, is it secure, and so on. Okay, so that's the nutshell version, and since a lot of you know service mesh, let's not dwell on it. It's really cool, and a lot of companies are building one right now. I didn't feel, as a company, that I wanted to put my research into building yet another mesh — I mean, we have one, but I didn't want that to be the focus. Because I believe that, as with everything we do, introducing this technology creates way more complexity than we give it credit for.
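The interception-plus-policy idea above can be modeled in a toy sketch. This is a conceptual illustration, not how Envoy or any real mesh is implemented — the class and policy format are invented for the example: every call in or out of a service passes through its sidecar, which logs the call and enforces an allow-list before the application ever sees the request.

```python
class Sidecar:
    """Toy model of a sidecar proxy: every call to the app passes through it."""
    def __init__(self, name, app, policy, access_log):
        self.name, self.app = name, app
        self.policy, self.access_log = policy, access_log

    def call(self, source, payload):
        # Policy check: is `source` allowed to talk to this service at all?
        allowed = (source, self.name) in self.policy
        self.access_log.append((source, self.name, "ALLOW" if allowed else "DENY"))
        if not allowed:
            return {"status": 403}          # rejected before reaching the app
        return self.app(payload)            # forwarded to the real service

log = []
policy = {("gateway", "cart"), ("cart", "payment")}  # the only allowed edges
payment = Sidecar("payment", lambda p: {"status": 200}, policy, log)
cart = Sidecar("cart", lambda p: payment.call("cart", p), policy, log)

assert cart.call("gateway", {})["status"] == 200    # gateway -> cart -> payment
assert payment.call("gateway", {})["status"] == 403 # gateway -> payment: denied
```

Note that the applications themselves contain no security or logging code — everything observable and enforceable lives in the proxy layer, which is exactly the abstraction the talk is describing.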
What I tried to figure out is what the next problem after service mesh will be. So let's look at the market. Here's the market in a nutshell — this is what my customers do, pretty much every customer I have. They start with Istio. Why? It's pretty simple: Google, a lot of marketing. Istio is a service mesh based on the Envoy proxy that came out of Google. They start with it, and very quickly they discover that it's hard to work with. Like everything Google — it's really good in general, but oh my God, it's over-complicated. And to be honest, most of my customers don't have Google-scale infrastructure, so they don't need it to be that complicated. So they start with it, and most likely they fail. Then they move to Linkerd. The first service mesh actually came from a company called Buoyant — people who worked at Twitter and did a lot of this kind of work there. They spun out, announced Linkerd, and the first version of it was built with a JVM-based proxy. If people usually call the proxy a sidecar, that Linkerd was a bus-car: it was huge, and it didn't really work out. It wasn't a perfect solution, and that's when Istio and Google stepped in to compete. So Buoyant created a new one and called it Linkerd 2, and they based it on their own proxy written in Rust, with the control plane in Go. So people try that. But here's the problem Buoyant discovered: it's not built on Envoy. And here's the thing: we are contributing to Envoy, Google is contributing to Envoy, AWS is contributing — basically everybody is contributing to Envoy.
And therefore Envoy moves way faster — that's the power of the community — and they just cannot keep up with the features Envoy ships. So they're a little behind, and people are afraid to bet on that, so they move to something like Consul. We're actually very good friends with HashiCorp — we helped them build the first version of Consul Connect — but they're behind too: they only recently announced Layer 7 support, and before that it was debatable whether it was really a service mesh at all. One interesting thing about them is that they're decoupled from the platform — it's all transparent to the application, so whatever you run on, it works. But they're also behind, and they don't control the proxy itself: they're not contributing to Envoy, they're just operating it. And more and more, if you're on AWS, you'll look at App Mesh, which is their solution: it's free, it's integrated with everything they have, but it's really, really young. So our customers keep trying, and almost none of them reach production, because it's all too complicated. So our observation was that it would be way easier if we put some abstraction on top of all of this. That solves a few problems. First, we can come up with an easier API than Istio's — I'm sure of that, because Istio's is so complicated that even we struggle with it. The second thing is that if a customer wants to try a mesh, they don't need to learn all those APIs; it's enough to learn one API that is solid and have it fan out underneath. And the third: even if they start with Linkerd but later decide Linkerd won't be the winner — and we all remember how people chose between Mesos, Cloud Foundry, Docker Swarm, and Kubernetes, and had to throw away everything they'd built —
most likely they'll want to switch, and with this abstraction layer they can do that. And the last one: just keep it dead simple. So that's what we did. About a year ago we announced our own open source project called SuperGloo. The vision, again: first, make an abstraction that's easy to work with. When I'm using Istio, I need to create four different objects just to define a retry for a service. Why? That doesn't make any sense — just let people say "retry" and be done with it. So we created those APIs, and our product does installation of service meshes, does discovery if you already have one installed, gives you a very simple way to manage a list of all your service meshes, and also groups them together. Because what if I have two instances of a service mesh on-prem and I also have AWS App Mesh? I want to treat them as one big cluster. SuperGloo lets you do this: it can group them together and flatten the networking — since all of them work with endpoints, you can do very interesting stuff. So we did all of this, and Microsoft reached out to us. They are the only cloud that basically doesn't have a service mesh story. They said: we really like your story, and we want to take this and standardize it. So at the last KubeCon, together with Microsoft, we announced SMI, the Service Mesh Interface. It's basically an API that makes things way simpler — sitting above Istio and the others — a standard interface for anyone who wants to consume a service mesh. And because it's Microsoft, they managed to convince Buoyant (the Linkerd folks), HashiCorp, and a lot of other companies to join, and we're building a nice community around it.
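The "one simple API, many meshes" idea is essentially an adapter layer: a mesh-neutral intent gets translated into each mesh's own resources. Here's a hedged sketch — the resource shapes below are heavily simplified stand-ins, not the real Istio or App Mesh CRD schemas, and the translator functions are invented for illustration:

```python
# One simple, mesh-neutral intent: "give this service 3 retries".
rule = {"service": "reviews", "retries": 3}

def to_istio(rule):
    # Istio expresses retries inside a VirtualService route (shape simplified).
    return {"kind": "VirtualService",
            "spec": {"host": rule["service"],
                     "retries": {"attempts": rule["retries"]}}}

def to_appmesh(rule):
    # App Mesh attaches a retry policy to a Route resource (shape simplified).
    return {"kind": "Route",
            "spec": {"target": rule["service"],
                     "retryPolicy": {"maxRetries": rule["retries"]}}}

TRANSLATORS = {"istio": to_istio, "appmesh": to_appmesh}

def apply(mesh, rule):
    """Translate the neutral rule into the target mesh's native object."""
    return TRANSLATORS[mesh](rule)

assert apply("istio", rule)["spec"]["retries"]["attempts"] == 3
assert apply("appmesh", rule)["spec"]["retryPolicy"]["maxRetries"] == 3
```

The design payoff is the one from the talk: the user learns the neutral rule once, and switching meshes means swapping the translator, not rewriting the configuration.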
And the reason we did all of this as a company is that what I think is really interesting is what comes next. Here's what we did with service mesh: before, all this operational logic was built into your application, and any time someone needed to change or upgrade a library, you had to upgrade your microservice. Now we took it out. And if we took it out, there's way more we can do. That's the point: by putting things on top of the service mesh, we make it easy to work with. So the first thing we're doing: we have our API gateway, an open source project called Gloo, which is very innovative — basically we added a plugin so it works seamlessly with every service mesh. The second thing is that we believe everybody should be able to extend this — community, customers, and so on. If you think about what we did, we kind of created a platform — like the iPhone. What do you have on an iPhone? Some applications come built in, and the reason they come built in is that they're the basic functionality you need. In my opinion, the basic functionality your infrastructure needs right now is an API gateway and a service mesh — together they give you an abstraction over your network, and that's really important. But like the iPhone, there's an app store: you can extend it. And that's exactly where we're going with this. Chaos engineering on top of a service mesh makes way more sense, because you don't need to instrument a line of code — it's not code-specific; you can inject faults at the proxy level.
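Proxy-level fault injection is easy to picture with a sketch. This is a conceptual model, not Envoy's actual fault filter: the "proxy" wraps the service call and, for a configurable fraction of requests, returns an error without the application ever being involved — which is why no application code changes are needed.

```python
import random

def faulty_proxy(app, abort_rate, status=503, rng=random.random):
    """Wrap a service call and inject failures for a fraction of requests."""
    def call(payload):
        if rng() < abort_rate:
            return {"status": status}   # injected fault; the app never runs
        return app(payload)             # normal path: forward to the app
    return call

app = lambda p: {"status": 200}
always_fail = faulty_proxy(app, abort_rate=1.0)  # chaos: abort everything
never_fail = faulty_proxy(app, abort_rate=0.0)   # pass-through

assert always_fail({})["status"] == 503
assert never_fail({})["status"] == 200
```

In a real mesh the same knob (abort percentage, status code, or added latency) is set declaratively in the proxy's configuration, so you can run a chaos experiment against any service regardless of what language it's written in.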
Same thing with canary deployment: it makes a lot of sense on top of a service mesh — if it's not done through the mesh, what do you gain? But we need to make sure we can actually get there, and that's exactly what I'm going to show next. It's all open source — everything is open source, so go and try it. I'll show you some demos, and we'll see if the demos cooperate. Still with me so far? You're not asleep? I know it's a little dense. Okay, so let's see. What I have here is a very simple terminal with kubectl, just so you can all see what I have. I'm showing you everything in the cluster right now: I have a Kubernetes cluster, and you can see the kube-system pods running. And I already installed Istio — pretty simple, I just took the YAML and applied it. So now, let's look at what we built. Again, it's free — you can go to our site and grab the latest version of what we do. Basically, it manages App Mesh, it manages Istio, and it manages Linkerd. So let's go install it on our cluster. It's pretty simple — it's an empty cluster otherwise. Oops, sorry, let me do this. I'm basically installing SuperGloo on my cluster right now; it's pulling the containers down. We can verify that it's installing — there it goes, fantastic. And the next thing is basically port-forwarding.
Now, everything I'm doing right now is in a UI, just because it demos well, but it's all based on CRDs behind the scenes. So everything I'm doing can be done with kubectl, with our CLI, or with the UI. I'm just waiting for a pod or two to come up — last one and we're set. Okay, good. So now I'm port-forwarding to bring the UI to my computer, and the last thing is to come here and look at it. What you can see is that in my cluster it already recognized that there's an Istio running — it discovered it, it sees that it's healthy, and it also recognized which services have a sidecar injected. Pretty simple: it discovered everything, and that's great. The other thing, just to show you how simple it is: let's go install another one — let's do Linkerd, just for fun. Pick a namespace, and hit install. I already had Istio installed; now I'm installing Linkerd in a namespace, and I'm ready to go. It's starting, as you can see. And the next thing we're going to do is go to Istio itself and extend it. What does extending mean? As you know, a lot of functionality already comes with the mesh, but there's a lot of stuff a service mesh doesn't come with — canary deployment is one of them. Flagger is an open source project from Weaveworks. What it does is exactly this: it hooks into your service mesh and makes sure you can canary your deployments — it watches the Prometheus metrics, and if it sees any problem, it rolls back. Pretty simple, right? But to deploy and configure Flagger by hand, there's a lot of work you need to do.
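The canary loop Flagger runs can be sketched abstractly. This is a simplified model of the control loop, not Flagger's real code — the step size, threshold, and `success_rate` callback are invented for the example: traffic shifts to the new version in increments, the metrics are checked at each step, and any degradation triggers a rollback to the stable version.

```python
def run_canary(success_rate, step=10, threshold=0.99):
    """Shift traffic to the canary in steps; roll back if metrics degrade."""
    weight = 0
    while weight < 100:
        weight += step
        # In the real system this would be a Prometheus query at each step.
        if success_rate(weight) < threshold:
            return 0, "rolled back"       # all traffic back to stable
    return 100, "promoted"                # canary takes over fully

healthy = lambda w: 1.0                          # new version behaves fine
broken = lambda w: 0.50 if w >= 30 else 1.0      # starts erroring at 30% traffic

assert run_canary(healthy) == (100, "promoted")
assert run_canary(broken) == (0, "rolled back")
```

The key property is that the broken release never receives more than a small slice of traffic before being rolled back — the blast radius is bounded by the step size.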
So what we've done is pretty simple: we already know which mesh you're using, so we took Flagger, customized it, and added a strong canary deployment integration that wires up the right metrics for your mesh automatically. We can, just for fun, go and run a canary right now on Istio, as simple as a click: come here, choose a namespace — the canary example is already there. And you can see that the other mesh we installed, Linkerd, is already working — it's already discovering all the services and their sidecars. The next thing we can do is install an API gateway. We could install Gloo on Istio or on Linkerd — let's install it on Linkerd, because Linkerd doesn't have an API gateway. Coming here, and again, it's pretty simple, straightforward — this is where things are headed. And there's seriously impressive technology behind the scenes: it's all in Go, so go look at it, it's really neat. I could show you the gateway actually serving, but that's kind of beside the point. So that's the picture you get when we come here: we have all the meshes registered. At the end of the day, that's what you need — group them together, flatten the network, and there's way more you can build on top. Does that make sense so far in a nutshell? Okay, I'm going to move on to what's next, and then I'll wrap up. So, what's next? There are still a lot of problems with service mesh. Here's an example — a tweet that made the rounds.
Actually, that tweet is from a while ago, because right now Istio has way more than 40 CRDs — I think it's like 60 or 70 API objects. You thought Kubernetes had a lot of objects? Wait until you need to figure out what to do with Istio. And if you think about it, there are two kinds of configuration you need to do in a service mesh. The first: configuring the mesh itself — gateways, internal, external, mutual TLS. That's one thing. But then there's the day-to-day configuration that's more about your application — routing and so on. For the routing and the day-to-day, we already made it better with SuperGloo: it's way simpler, and you can use Gloo. What we hadn't covered was the setup side, and that's where most of our customers' pain is. When you build a Docker image, you don't start from scratch — nobody starts from scratch anymore; there are layers in a Dockerfile, so you can start from any point you want. But when you install a service mesh, you start from scratch. We think we can do better, and what we did was layer the configuration — we call it a "flavor." You can discover the flavor of an existing cluster, or create one, and then just apply it: I want this service mesh, on these clusters, with this flavor, this configuration — done. The second thing — and this is the biggest problem that exists today in every organization adopting a service mesh — is this: every time an application team wants to ship an application, the first thing they need to do is create all these objects — networking, platform, policy, telemetry. Here's the example I'm going to show.
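The "flavor" idea — layered configuration, like Docker image layers — comes down to overlaying a partial config on a base and letting the overlay win. A minimal sketch, with invented config keys (this is not SuperGloo's actual flavor format):

```python
def merge(base, overlay):
    """Deep-merge overlay onto base: the 'flavor' wins on conflicts."""
    out = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)   # recurse into nested sections
        else:
            out[key] = value                    # flavor overrides the base
    return out

# Base install: a plain mesh with metrics only.
base_istio = {"mtls": False, "telemetry": {"metrics": True, "tracing": False}}
# A reusable "secure" flavor: turn on mTLS and tracing, touch nothing else.
secure_flavor = {"mtls": True, "telemetry": {"tracing": True}}

cfg = merge(base_istio, secure_flavor)
assert cfg == {"mtls": True, "telemetry": {"metrics": True, "tracing": True}}
```

The benefit mirrors Dockerfile layers: a team publishes a base once, and everyone else starts from a known-good point and changes only what their flavor needs.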
And then you click on the networking object, and that's the kind of YAML file — pages of it — that you expect an application owner to create. That doesn't make any sense at all. I don't know an application owner capable of doing it, and I don't want my application owner to even know that I have a service mesh. So the question is how to fix this — this is the biggest problem. It's not that surprising, because as I said, Istio was built by Google engineers, and a Google engineer probably can do it all. But in the enterprise, you do not want the application owner to have to understand all of that. So one thing we're doing is building something that plugs into your CI/CD pipeline — an interface between the application owner's intent and the platform, which translates that intent into all those YAMLs. And the last one is service mesh troubleshooting. Here's the stack we just created: something is wrong, the application is not working. Where should you look? Maybe it's a proxy problem. Maybe it's a service mesh problem. Maybe it's an API gateway problem. Maybe it's a Service Mesh Interface problem. Where do you even start? It's crazy. So this is another thing we're working on right now: we're basically taking all the CRDs from all these layers, and all this state, and making it possible to understand where the problem actually is and notify you. Okay, so that was a lot. I'll move faster, because I really, really want to show you something cool — something I'm passionate about, that we've been working on and will open source very soon, and I'm really excited about it.
Okay, so back to the debugger story from before. When I saw that my engineers don't use a debugger anymore, the first thing we did was create a project called Squash. What Squash does is basically orchestration for debuggers. I'll show you how it works with a demo, and then I'll show you how we bring it to the service mesh world and to production — that's the harder part. Don't worry about the setup — it's scripted. In a second you'll see it spin up the cluster. What Squash does is very simple: it takes the regular debuggers you work with all the time, wires them all the way up to your IDE, and lets you actually use them in a Kubernetes environment. It's not tied to service mesh — it's just Kubernetes. It's free and open source; I really encourage you to go look at it. In a second it will be up, and I'll show you what's going on. The application we're going to debug is a very dumb demo application — there's not a lot to it; sometimes it works, sometimes it doesn't, and I want to debug it. When I came to my engineer, instead of printf-ing our way through it, we decided to bring a real debugger all the way here. So here's the setup: this is Visual Studio Code, and this is one service of the application — a very simple Go service. What I'm going to do next is use Squash on it. It's an extension for VS Code, and with Squash you can actually debug the pod. When I invoke it, it immediately asks me which namespace I want to debug, so I pick the one with the application. Then it looks in the namespace: you have two pods.
Which one do you want to debug? I pick one. Then it tells me: well, in this pod you have two containers, the Istio sidecar and the actual service. Which one do you want to debug? Let's go with the service. Then it asks which debugger I want to attach. It's Go code, so we'll go with Delve. And that's basically it. It asked me a few questions about what I want to do, and in a second you can see there's some magic behind the scenes and I'm attached. I can now easily go back to my application, set some breakpoints, and I'm debugging it. That's pretty simple, and it's helpful. Of course, if you have more than one microservice, you can attach different debuggers; in a lot of the demos I do, there's a Java debugger and a Go debugger and I jump between them to show you how it works. So that's the simple one, and it's really useful. But here's the problem: you can't use any of this in production. This is a debugger. No one is insane enough to take a debugger like GDB and attach it to a microservice in production. So the question is, how can we bring this easiness to production? Here's a production cluster, and this is what happens, right? This is how your world looks: you have pods, you have microservices, maybe they're going through a database. If you want to know what's going on and how to solve problems, what a lot of people do today is distributed tracing, with something like OpenTracing. That means they want all the traces that exist in this application. But here's the problem: collecting all of this is very expensive. So tracing today has two limitations. Number one, they mostly take the metadata and the errors; you can't take the bodies, the actual bytes, because it's too expensive.
All that stuff flying over your network would take it down. The second limitation is sampling. Sampling is the state of the art in tracing today: you can't take it all, so you choose, right? Okay, so maybe that's an option: maybe service mesh solves this problem, if we go to the sidecar and do exactly the same thing there. But actually, it's not going to solve the problem. It's exactly the same problem: you still cannot take everything. So let me think. Do I really need everything? I don't. Why would I? I only care about the transactions that went wrong anyway. So what if I could record every transaction, all this traffic, inside the proxy, because I have a sidecar there? That means also the responses from the database, from S3, everything that happened in production. If at the end of the transaction everything is good, I really don't care about it; I just toss it. It never leaves my node, it never goes out to the network. But if something is wrong, and "wrong" can be defined by the user (we give you the tools to do that), then I want to save it. I want to save the body, the headers, everything, including the data that went over the wire. We're calling it Loop. It's basically a recorder, built on Envoy's tap capability. Lyft uses that capability for a different use case: when someone calls them because their ride is not there, support clicks a button and it records everything related to that specific user for their engineers to rerun. And that's exactly what we're doing: Loop can help you rerun it in Squash. So again, last demo, I swear, and it's really quick. What I'm going to do now is something very simple. We created a CLI, the Loop CLI, and everybody can use it.
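The record-everything, keep-only-failures idea can be sketched in a few lines. To be clear, this is not Loop's implementation (the real recording happens inside the Envoy proxy); the class and its names are purely illustrative:

```python
# Sketch of the core idea: buffer everything for a transaction,
# persist it only if a user-defined "bad" predicate matches at the
# end. Names are illustrative, not Loop's actual API.
class TransactionRecorder:
    def __init__(self, should_keep):
        self.should_keep = should_keep  # e.g. lambda status: status >= 500
        self.buffer = []                # in-flight transaction data
        self.saved = []                 # persisted recordings, for replay

    def record(self, event):
        # Everything flowing through the sidecar: request, headers,
        # bodies, database and S3 responses...
        self.buffer.append(event)

    def finish(self, status):
        if self.should_keep(status):
            # Something went wrong: keep the whole transaction.
            self.saved.append(list(self.buffer))
        # Healthy transaction: toss it; nothing ever leaves the node.
        self.buffer.clear()

rec = TransactionRecorder(lambda status: status >= 500)
rec.record({"request": "GET /", "db_response": "row=42"})
rec.finish(200)  # all good: discarded
rec.record({"request": "GET /demo", "db_response": "row=42"})
rec.finish(500)  # failed: kept for replay
```

The interesting property is the ordering: the keep-or-toss decision happens only after the transaction completes, so the proxy must hold the whole transaction in memory until then, which is why keeping that buffer small and optimized matters.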
If I run the Loop CLI and list the recordings, you'll see there's nothing there; I haven't recorded anything yet, and I'll show you that in a second. But basically, we have this application, and every time something happens and everything is good, Loop does nothing. If something is wrong, we record it. Then we can replay it: we attach the debugger and inject the information that happened in production, where it happened, basically rewinding it, if you like. Okay, I know that was a lot of material, so while this spins up, are there any questions? And I can show you the difference: there's no debugger here whatsoever. In the production environment there is no debugger anymore; that's too dangerous, I'm not going to attach one. But there is a tap filter that we extended in Envoy that records all this information, and when it catches something, because it matches what I asked for (say, a response of 500), it saves it outside. So now I have all the data that was in production. I can spin up another environment, attach the debugger, and inject the data exactly as it was in production, including what came from the database. It's simply a replay, pretty simple, and I can show you that. Okay, so now let me show you the demo. What you can see is that I have Loop on the app, but there's nothing recorded yet; I just configured it. And there's this application that we talked about, running with Envoy, and it's working most of the time, as we saw before. But sometimes people say it throws an internal error, and I don't know why. So let's just play with it a little bit. Let's see: it's working. Let's see: it's working.
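The replay half, re-running the handler while feeding it the recorded upstream responses instead of letting it touch real backends, might look like this. Again, this is an illustrative sketch with invented names, not Loop's API:

```python
def replay(recorded, handler):
    """Re-run a service handler against data captured in production.

    `recorded` maps an upstream call (e.g. "db:get_user") to the
    response that was recorded for it; the handler receives a stub
    that serves those responses instead of calling real dependencies.
    """
    def stubbed_upstream(call):
        return recorded[call]  # exactly what production saw

    return handler(stubbed_upstream)

# Hypothetical buggy handler: fails when the user record has no email.
def handler(upstream):
    user = upstream("db:get_user")
    return 500 if "email" not in user else 200

# Reproduce the production failure locally, under the debugger,
# without touching the real database.
status = replay({"db:get_user": {"name": "alice"}}, handler)
```

Because the sidecar captured the dependency responses too, the replayed handler sees the exact bytes that triggered the 500, which is what makes stepping through it meaningful.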
Okay, so that's working. And now, right here, you can see it in the demo: there's a problem with the application. So let's figure out what's happening. What you'll see is that we actually caught it. We didn't catch all the requests, right? We made a few and didn't catch those; we only catch the ones that returned a 500. Now I can just go here, very simply, and attach like we did before, going again through the namespace and the container itself. The only extra thing I need, once it attaches, is to come here and run Loop's replay with --id equal to the ID of the recording. And now what you see (I don't know if it hit exactly the same spot again) is that it gave me the data that was in production. So now I can just step in and figure out what's going on. You can see I put an Easter egg here: I basically just set it to return 500. That's the problem with the application. But that's beside the point. The idea is that now I can actually debug, from production, exactly what I want to debug, and I don't care about the rest. And that's the beauty of the abstraction of the network that we get here. Okay, any other questions?

One limitation, maybe, one of the issues that we had, as you sort of alluded to: it only captures the bad things that happened. Let's say 20 nodes are involved in a request and they all have state, and I have failures across three nodes. The transaction is across all 20, and I actually want the state across all 20. Have you solved that problem?

So state, if there is state inside your application itself, is not something we handle, no. But in microservices there's the twelve-factor app: hopefully you don't hold state, because holding state there is really bad.
So it's not so much that the state is shared, it's more that you're passing parameters, right? Say, across the whole chain.

Yeah, but why is that a problem? Because I'm in the sidecar, everything goes through me. Every parameter that came through here, wherever it's going, I get it. And again, assume there is a cascade of errors, that something upstream caused the failure. That's exactly why we're saving all of it, because it's important to us, and at the end we make a decision. And it's not really a problem, because I'm saving very little in the proxy, and we did a lot of work to make it very optimized.

So when you capture the failures on some nodes, you can't trigger capturing from the nodes that were successful in the same request? That's not a thing?

No, but here's what I can do. A request comes to a node. I don't know yet whether it will fail; it's just coming in. We save all of it until the end of the transaction. When the transaction finishes, we evaluate it. Say there's a problem with node 3: I take this whole, huge transaction, I save it, and I let you rewind it. So I have all of it, even the parts of the transaction that succeeded. But if the whole transaction is good, I toss it; I don't keep it. Does that make sense?

Yeah, it makes sense. Thank you.

We have time for one more. I'll just say that if you're in Boston, we are hiring, so join us. Any other questions? Well, thank you very much. Thanks, guys.