Hi, everybody. Thank you for coming. We have tons of content to cover, so we'll start quickly, and hopefully we'll manage to do it all. We're going to talk about a bit more than what we said we would, but it will also cover everything we promised, like service mesh and OpenTracing. But let's start with this. You're probably all aware of this announcement from last year: Cloud Foundry is working with Kubernetes. This is great, this is fantastic, and it's the natural direction. So the picture looks like this right now: you have BOSH managing things, and then you have Kubernetes and Cloud Foundry managed by BOSH. But most customers also have monolithic applications, and there's also serverless, which is really interesting in this world. And it feels like all of this should be managed by something, because otherwise the user has to go to a lot of different interfaces. So something is missing here. The first thing we decided to do is glue this environment together, and I'll explain why it's important: I believe this is where all applications are naturally going, and therefore we need to know how to manage them, debug them, and so on. So let's drill in and understand what I mean — that was a really high-level view. Today in the ecosystem, as I said, there are three groups. One is people doing monolithic applications; that's mainly the enterprise. It's managed by something like Puppet, Chef, or Ansible; they use their own tools for logging and debugging, and they usually run an architecture like SOA. Then there are microservices — everybody wants to move there — on something like Cloud Foundry, Kubernetes, or others.
There you use different tools — OpenTracing for logging, for instance, and Prometheus for metrics, because you need things to scale well — and your architecture will be a microservices architecture. And then there is serverless. Serverless will most likely happen in the cloud, because that's where it's most mature, and because of that, a provider will manage it for you. Then you use the provider's tools, because you have no choice — something like X-Ray and CloudWatch if you're using Lambda specifically — and it will be an event-driven architecture. So you have all of this. Yeah, it's a lot of choices. But the real question is: how do you migrate between them? How do you debug these things? How do you deal with this? When I looked at it, what I thought would be very useful is to find the smallest compute unit and basically cut everything down to it. If you look into it, you'll see that serverless is functions. And we can look at microservices and monoliths — I'm talking only about the exposed APIs — as functions too. If we do that, and we're able to discover all of this ecosystem and route to it, then we can migrate or mix applications that run in different places. So again, I'm starting by solving this problem — we're going to glue it together — and then we'll talk about how we use OpenTracing and service mesh to get observability. What do we need in order to do that? The first thing is to break the applications apart. But it's not really breaking, because we can't really break them. What we can do is discover those functions and route to them. Then we need to be able to assemble them into what I call a hybrid app.
And the last thing we need is to make sure we can actually debug them. This is exactly what we built. It's called Gloo, and it's built on Envoy — that's why it's really close to service mesh, because as you know, Istio is also built on Envoy. What it does is extend Envoy to route at the function level. So first, we discover all those functions. If it's a serverless function, it's pretty easy — HTTP. If it's a different kind of application, we use Swagger/OpenAPI, gRPC, and so on to discover it. Then we can route at that level. I'm not going to go too deep; I'm just going to show you demos and explain why it's really useful. The idea is that if you use it — and it's already integrated with Cloud Foundry, Kubernetes, and a lot of other environments — you can do something like this: create an application where part of it is still in your monolith, part of it you extend to a microservice, and maybe part to serverless, and it all looks like one big application. And of course we don't care where it's running, because the beauty is that it's all layer 7 — Envoy gives us that abstraction. So again, what we did is extend Envoy, and we built Gloo to manage that. It's really extensible: we can discover everything and route at this level. Everything Envoy gives you at the level of a service — because today every proxy in the world routes from a route to either a service or a host — Gloo can give you at the level of the function. And if a route goes to AWS, we wrote filters in Envoy that already handle the signing and everything, making it easy to route there. So I want to show you a demo; I think that's the easiest way to understand. And this is only one use case, because we actually built it to be very flexible.
Okay, let me get this on screen. Can you see that? No? Let me move it over here. Okay, now you can see it. Sorry, guys — display preferences. Perfect. Now you see it? Yeah, perfect. So I just SSH'd to a machine, and this machine is actually running Cloud Foundry — just to show you that I'm not lying. Ah, sorry, let's do sudo -s first. Okay, now let's run cf. Oops — so you can see I'm running it. Right now Gloo is there, but we're not doing anything with it yet. Next I'll show you this. This is an Alexa, and what we did is write a skill that connects to Gloo and waits to see where to route. So if I invoke it right now: "Alexa, run Gloo demo." Hopefully it will work. "Hello from Google Functions. Can you hear me? Brought to you by Gloo." Hopefully you can hear it. Can you hear it? Okay. So what you just saw is that I connected to a Google Cloud Function. Now I can easily go and just change the route. As you can see, we already discovered everything, so let's try the Azure one. What I'm doing right now — if the copy-paste works — is this: we already discovered all the functions in Azure, and all I do is change the route and do exactly the same thing, and it will route there instead. So now it will go to Azure, and I'll run exactly the same demo.
So, to summarize what happened: when we add an upstream, we go and discover all its functions. That's what we did at the beginning with Google — we discovered all the functions and created a route to one of them. Alexa calls that route, and we don't change anything in the skill; the only thing we change is the route. Change the route and it goes to AWS Lambda; change it again and it goes to Azure; change it again and it goes to Cloud Foundry. That's basically the migration play. So let's do it again: "Alexa, run Gloo demo." Now it's going to Azure. Exactly — and that's the beautiful part. In fact, let's do the same thing right now with Cloud Foundry, because we're at Cloud Foundry Summit. If we run the same demo after this route change, it will go to Cloud Foundry. The beauty of this is the way we can migrate across clouds, across functions, and so on. So let's do it again: "Alexa, run Gloo demo." Hopefully we did it right... ah, I just put the command in the wrong CLI — that will never work. Let's do it again, now in the right CLI: "Alexa, run Gloo demo." "Sorry, I don't know that." It will work, it just takes a second. "Alexa, run Gloo demo." Right — so I basically migrated it. It's really, really simple. We have a lot of demos, so let's move on. Gloo is an open source project, so you can go and look at it. That was the migration use case, and I'll show you more demos next. Now let's talk about what you came for, which is the debuggability and observability part. Is that better? Yeah, okay. Let's first understand the problem. Way back, we had monolithic applications.
If I wanted to debug a monolithic application, I attached a debugger — really simple. Once I did that, I saw the entire state of the application: the variables, the memory. I had a full picture of everything running there. The problem with microservices is that I took that same single binary and spread it across a lot of services. Now the state is spread all around, and it's really hard to follow what belongs to what. This is complicated when you're running microservices, and definitely more complicated when you run them across platforms like Kubernetes and Cloud Foundry. But take it to another level: this is how your application really looks, right? It looks like 450, 500 microservices. Understanding what's going on across the full application — getting the full picture — is really, really hard. This is just a joke someone tweeted that I found hilarious, but it's true; this is what people are actually going through. So what is the solution? What can we do? The first thing we can do is use OpenTracing. So let's understand what OpenTracing is. OpenTracing is basically transactional logging. What that means is that when a request comes in, OpenTracing gives it an ID, and then, along with the context, it propagates that ID down to all the microservices this request touches. When everything comes back, it can aggregate all the IDs and produce a story that looks something like this. These are called spans, and in the spans you see, say, microservice A run and call microservice B, which calls C, D, and E.
And now you have the full picture of how your system is working. OpenTracing itself is basically a standard — it's only an API — and it's hosted right now by the CNCF. There are a lot of products, like Jaeger from Uber and many others, that actually implement the UI and the tracer itself. So if we have time — we have time? We have time — maybe we'll see a demo. In this demo I'm basically going to show you what OpenTracing is and why we need it. Hopefully it will be useful; if not, we can go on to the other demos. So, where is the tracing one... the tracing one, okay. What I'm spinning up right now is very simple. This is Jaeger — it was created at Uber and is a very good OpenTracing implementation, also hosted by the CNCF. This is the UI, and it's very simple. What I'm running with it is an application that Uber also wrote: it's basically rides on demand. You request a ride, and it goes and does its thing. The first thing we're going to do is just request one ride. That's all. That was very simple — we got a ride — but let's go to the CLI for a second and look at the logs. Look at this: all I did was click one thing, and I want to show you how many logs I got for it. This is for one click — I didn't do anything else, it's very simple — and as you can see, there are a lot of logs for it. Now, if we go to Jaeger, let's see what Jaeger gives us. The first thing we see, if we go to the Jaeger UI and look at the dependencies, is that immediately — without me needing to do anything —
you realize that I have a frontend that calls route, which calls customer, which calls driver, and so on. And all of this you understand just from me clicking one button. This is one call; usually you're doing much more than this. So what we'll do next is run it a few times and see what happens. First, I'll show you with just one, because I think it will be much easier. If we go here and... ah, we're looking for the trace. So you find the trace: we clicked one time, and you can see it, and we can actually go in and see what's happening there. This is what we got: we clicked one button, and this is what happened. You can see the request, you can see who called whom and how it happened — and this is only for one click. I just want you to understand what happened when I clicked one time. And you can see a lot of stuff. For instance, I can come here and actually drill into the logs, but they're all at the transaction level; you can see what's happening and understand it. And because the context is propagated, you can see even more than this. So that's that, but this is kind of boring — let's go and click a lot of times and see what happens. What you can see is that as I keep clicking, the latency goes up. So let's take the request that took the most time, go back here, and look for it. It's the driver, hopefully — let's see if I'm right. Hmm, that's not right... give me one second, I'll find what we need to search for. I think it's the driver ID. Actually, we can just go back and ask Jaeger to give us all of them. As you can see, we got all of them, and if we go into one of them, we can see what the problem actually is.
And what we can see is that we have a latency problem. First of all, we can see some errors. And if we look closer — let's take one of the errors, for instance — you can see that the problem is that something is waiting: there are timeout errors in Redis. And we can see a lot of other things like that; there's a thread pool contending here. It's really simple to navigate. I could show you how we'd go change the application and run it again — the spans would look totally different. But the idea is that OpenTracing gives you the ability to actually see problems in latency and so on, and lets you analyze them. Now, I don't have time, so if you want, I'll do that part of the demo offline. Let's talk about what OpenTracing gives you and what it doesn't, and then we'll continue. So what does it actually give you? It gives you logging — if you think about it, this is the equivalent of the logging that exists in a monolithic application, just transactional, so it can give you a view of what's going on. It gives you metrics and alerts, because theoretically you can pipe it into Prometheus. It gives you context propagation, as we discussed — we can drive that as far down as we want. It gives you critical-path analysis, because you can see that something isn't working and actually go and investigate it. And it gives you topology — you can actually see how the system looks. That's great, but there's a lot of stuff it does not give you, and we need to look at that and understand it. The first thing it doesn't give you is a runtime debugger. As you saw, I printed a lot of stuff.
When I showed you the demo, you saw a lot of logs — that was because I wanted to show you what you can get from it. But think about the network throughput: you can't log everything; it would kill the network. So what you actually need to do is not log much, and then you need to sample. You can't collect every log, so you sample and ship it out, say, once a minute. What happens then is that you don't get the whole picture — you get some of it. And you're still not getting a runtime debugger. You also need to wrap your code somehow, because at the end of the day it's a library. There's no holistic view, because you only see what you print: if you didn't print it, you can't see it — it's logs, very simple. Now, say I didn't print something and now I want to: that means the usual cycle — edit, add the print, build again, push it to Kubernetes, wait ten minutes until you get the logs, and so on. And you can't change any variable at runtime, because it's not a debugger. It also puts quite a lot of overhead on the network throughput, and you have to accept that you're sampling — you can't see all the logs all the time. And that's what brought us to build another tool, called Squash. What is Squash? I'll explain what it is, but I think the best way is just to show it, because it's really simple. As I said, if you look at a monolith today, you have something like Splunk or similar to see the logs — that's the equivalent of OpenTracing. But for a regular debugger, there is no equivalent in the microservices world. That's why we built one. Again, I'll show you the demo, because it's a small demo that explains this very well. So, the Squash one — this one.
Okay, in a second I'm spinning up an environment in AWS. It will take a sec. So, this is a very simple application, just to show what we're doing — a very silly application. You put in two parameters, doesn't really matter which, and it acts as a calculator. And what you can see at the bottom is that it's not really working: 76 plus 32 is not equal to 44. So what do you do? You need to debug it. How do you do that? You probably know: usually you go write some logs, print the variables, and try to understand why it's not working. If you're using OpenTracing, maybe you add logs and then get them ten minutes later. But we thought it should be much, much simpler. So let's understand what this application does. It's really simple. This part — let me close this one — is a Go microservice that serves me the UI; it's basically HTML wrapped inside a Go service. It takes the two parameters and sends them to the next microservice, written in Java — that's what you see here. And that microservice is very simple: it gets the two values, either adds or subtracts them, and sends the result back. So just notice: this is the Go application, in the Visual Studio Code IDE, and this is the Java application, in IntelliJ. What we did is create an extension. When I click here — I hope you can see it — I'm invoking the Squash extension: "squash debug container." And what happened is that it went to Kubernetes and brought me all the pods that exist in my system — only the ones I'm allowed to see, of course. We'll choose the first one, because that's service one. And then it told me: in this pod there is one container.
Do you want to debug it? The answer is yes. And it asked: which debugger do you want to attach? This is Go, so we'll choose Delve. And that's it — this is running on AWS right now, and in a second I can debug it. Now we'll do exactly the same thing on the other one, just to show you that it works there too. This is IntelliJ: again, "squash debug container." Now we'll go to the second microservice, the Java service. Here is the pod, here is the container, and we'll choose the Java debugger. And that's it — it attached. So now the only thing we really need to do is go back to the application and click. And what happens? I'm live-debugging it. You can see I have all the parameters here; I can step into it and do everything. Now, what happens if I continue here? It jumps to the other microservice, because I put a breakpoint there as well. Basically, we took the Java debugger and attached it, took the Go debugger and attached it, and we orchestrated them all. And then, as I step through, you can see we're getting the right parameters — I don't know if you can see it, but this is 76, this is 32, like we entered, and this flag is true. But look what I did here: I introduced a bug on purpose — if the flag is true, I subtract instead of add, because I wanted to show you the point. This is a debugger, so all I have to do is go change the value and see if that fixes the problem at runtime. So I change it to false, step next, it jumps to the other microservice, I continue, and we go back to the application and see that this fixed the problem. Now I can make the change in the code and push it to Cloud Foundry and so on. Makes sense? So this is what Squash basically lets you do. Now let's continue to service mesh.
So we're basically just orchestrating the debuggers. It's also open source — I really recommend going and checking it out. And as I said, the idea with Squash — I'll show you this later — is that it's not only a microservice debugger. It can work with any environment, including monoliths. So theoretically, if you have an application that is part monolith, part microservices, part serverless, you can actually jump between them like it's one big application, which I find very useful. OK, so now service mesh. I don't know if you know what a service mesh is, so I'll start by explaining the idea. The idea of a service mesh is to abstract the network. Before this, if two microservices needed to talk, I had to put code in one that knew exactly how to reach the other. The problem is that if I later want to change something, I need to go change the code, which is really not convenient. So the idea was: what if we can abstract that away? The way to abstract it is that next to every microservice we put what's called a sidecar — a sidecar of Envoy, which is a proxy. That's the one they chose; it doesn't have to be Envoy, but most likely that's what it will be. Now, all each microservice knows is how to talk to its proxy. And if all those microservices know how to talk to an Envoy, then the Envoys have basically created a mesh, and the services can talk to each other through it. And because the proxies are on the request path, they can make a lot of decisions — for instance, maybe those two microservices are not allowed to talk. That's where, for instance, Mixer comes into the picture: it interposes on every call and decides whether it's actually allowed to go through or not. So that's basically what a service mesh does. It's very simple: a sidecar next to every container, and so on.
So I'll show you where I think this will play a big role. I do believe service mesh is really important. It gives you observability, which is really important, and it gives you security — those are the two use cases people pitch, and I really agree they're useful. What we did is add another option: we wrote a filter for Squash in Envoy. So now, think about it like this: I have a mesh with a lot of microservices, and I discover a problem between two of them — from OpenTracing, or from the service mesh, I see latency between two microservices. Now I want to zoom in, and that's exactly what Squash does. On the request path, if Envoy sees, say, a 500 error, it can go to Squash and say: debug me. And then you can debug it live. The advantage is that you can actually debug it in production, because if you think about it, you have a service, not just one container: what we do is shadow the request and attach a debugger to the shadow copy, while everything else continues running. So we're not pausing the service on the cluster. Now I'll do the last demo. I know I'm going fast, but we're trying to show you a lot. Let me take a minute and close some stuff. What we'll run right now is the service mesh plus everything together — Squash plus everything, working as one solution. It will take a second to spin up the environment; any questions in the meantime? No? Okay, in a second it will happen. So let me tell you what we're going to do.
We'll start by taking a very simple application that I guess most of you know: the monolithic pet store application. We'll take it and basically transport parts of it to microservices and serverless, and then debug it all. That's the purpose. Hopefully it will be ready in a sec — the network is playing with us. Okay, I can already show you the code of the monolithic application. Here you go, it's starting. So this is the application that I guess everybody knows — no big surprises here. Here is the code: it's a regular Spring pet store application. Now, as you can see, it's all working, and when I come here, I really want to modify it: I want to add another column, for location. I have a new engineer on my team, and I'd really like him to go and add it, but he doesn't know this codebase, so he needs to learn it. And after that you need to modify it, then test it, then regression-test it, then redeploy. That's a lot of stuff. But most of what this page does is just display data — it's only presentation. So what I did is write a very simple microservice in Go that just goes directly to the database and shows the data I want. And it's really simple to switch to it. I want you to see what I'm doing here. As I said, with Gloo we already discovered the environment. I don't know if you can see it well enough, so I'll tell you what it says: glooctl, please create a route so that every time someone goes to /vet.html, you go to the Go microservice that I wrote. That's it — that's all I did. And now if I come here, this is still Java. This is still Java.
But if I go here, it's already the Go microservice. So this is what I call a hybrid app. By the way, you saw how quick that was — that's because we're using the v2 API of Envoy, which is really fast. So this is a hybrid app already. But look at this: we actually have an error in the contact page. Again, I can go ask my engineer to fix it, or I can do something simpler: just route it to a Lambda. So what I'll do right now — I'll make it bigger so you can see — is run glooctl again and route to the right function. See what happened here; again, it's really simple: Gloo, create a route so that every time someone hits /contact, go to the upstream — which is basically the region in AWS — and call the "contact us" function. Now, notice I didn't put in any password, any security credentials, any ID. The reason is that Gloo already takes care of it: it's watching all the secrets, and every time something changes in the secrets, the upstreams, or the configuration, it immediately picks it up and pushes it through the API to the plugins that generate what Envoy needs. So again, very simple — that's all I did. Now we'll do the same thing for the form: you'll see a form in a minute, and I want that when someone fills in the form and clicks submit, it actually goes into S3. For that I wrote another function, called "contact three." We'll find it in a sec — here you go. Again, exactly the same thing: all I'm saying is run this function, and it's done. But when I go there, you'll discover that it's not going to work, and I'll show you why. This is Java; this is a Go microservice; but when I come here, this is JSON. Apparently Lambda doesn't know how to return anything other than JSON, which is a problem for us.
So you can either use the AWS API Gateway — and pay them money — and they will transform it for you (actually, no other API gateway gives you this for free), or you can just use Gloo. What we're going to do right now is apply the transformation filter, which again is open source, so you can go and look at it. So, glooctl: I'll run the command with the transformation — that's exactly what I'm doing, applying the transformation. And now, just to avoid caching, I'll open it in a different browser. And now you can see this: every time I click here, it's Java — the regular monolith. Every time I click here, it's a Go microservice. When I click here, I'm actually running a Node.js Lambda. And to the user, it all looks like one application. That's what I call a hybrid app. We can actually fill in the form, and I could show you that it really goes to S3 — but I'm guessing you believe me. And it works. The beautiful thing about all of this is that now you can attach a debugger to each of them and actually debug it. So real quick: we'll go to the monolithic application and attach its debugger, and do exactly the same on the other one. And now, as you'll see in a second, if I come back to my application — I put a breakpoint where a visit gets added — when I click "add visit," this is my monolithic application being debugged. Let's release it. And when I come here and click into my microservice, this is my Go application being debugged. So that's the idea, and I think it's really powerful: it helps you migrate at your own pace. Instead of taking your whole monolithic application and rewriting it, which will take you a lot of time
during which you're shipping no new features, you just extend it — with whatever you want, microservices or serverless — and then gradually migrate in your own time. Hopefully that's helpful. All of it is open source, and I would love, love, love to get feedback. And I think that's it. If you have any questions, I'd love to answer them. No? Was this useful? Did I lose you all? Okay, awesome. As I said, it's all in the open, and we're going to release a lot of other cool features, so go check it out. [Audience question about protection when running this against production.] Can you hear me? So you asked about protection — for which of the things I showed? Running it against production? For Squash or for Gloo — it depends. If it's Squash, and you're asking about debugging, what we do is leverage the platform — Kubernetes and Cloud Foundry already control what you can and cannot do. So if you're not supposed to see a pod, you will not be able to debug it. [Follow-up: sometimes we see the problem in production, so we want to debug there.] As I said, if you can't get to it, you can't debug it. What we also do, as I said, with these filters, is that once we catch the error you wanted to trap, we create a shadow request that doesn't affect the cluster — imagine we're taking a snapshot there. It's not influencing your environment; your environment continues running. We fork the request and let you debug that, basically giving you a snapshot of the environment. So you can see it all and debug it — makes sense? Do you want to take it offline? I don't know if I have time — yeah, we can discuss it later.
Any more questions? No? Okay, so go check it out — I think it's really cool. And as I said, we integrated it with Cloud Foundry seamlessly: we use the Copilot component that they built and integrated with that. So it's really clean, and I hope you like it. Awesome, thanks.