Hello, hello, everyone, and welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Taylor Dolezal, Head of Ecosystem at the CNCF, where I work closely with teams as they navigate their cloud native journey. Every week, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. In today's session, Chintan has joined us to talk about how you can let your developers easily connect, secure, and enforce policy for your microservices. This is an official live stream of the CNCF, and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful to all of your fellow participants and presenters. And with that, I'd love to hand it over to you, Chintan, to kick off today's presentation. Please take it away. Thank you, Taylor, for the introduction. Hi, everyone. I'm really happy to be here. It's really amazing what the CNCF ecosystem and its exceptional programs offer, especially the live streaming sessions; I've really enjoyed them in the past. So I hope to provide you with an engaging session today and more details on what we are doing, and I hope you find it exciting. So we'll start off with that. My name is Chintan Thakker, and I'm the founder of a company called Saaras. We have Enroute, an ingress controller and API gateway built on Envoy Proxy. Now, one of the key things that we differentiate on is simplicity. Simplicity might sound simple, but it is extremely difficult to get right. What we have done is make it extremely simple for a developer to run and extend functionality in an ingress, and today we'll touch upon some aspects of that. Today's session is going to be about how a developer can run an ingress very easily and, at the same time, extend it using WebAssembly.
So what we are going to talk about today, a very high-level outline, can also be found on the website; we just put this article out. And the plan for today is: how do we use Wasm to validate and transform a request? So essentially, when a developer installs an ingress and wants to extend it, you don't have to get expensive plugins, custom plugins, or a gateway that does enterprise plugins. You can just run WebAssembly and totally customize it according to your needs in a language of your choice. So WebAssembly is extremely powerful in that it provides the flexibility to use a language of your choice, but at the same time, the WebAssembly runtime is extremely efficient at running all this code. So we'll just go over at a high level what this talk is about, and then we'll dive into the code as to how we do all of these things today. So essentially, Enroute is a Kubernetes ingress API gateway. It can also run outside Kubernetes. But if you run it as a Kubernetes ingress, you can perform validation, verification, and transformation of your requests. Validation of your request means validating the JSON schema or the XML schema. You could validate your query, which might be GraphQL coming in. Or, if you're integrating with a CDN on the edge, you could validate that the CDN headers look right, that the metadata you're expecting from the CDN is right, that there are no loops. You could do security- and identity-kind of validation. Of course, you could also do data and governance kinds of things. But the idea here is that the flexibility provided by programmability can essentially let you do any of these things. Of course, one of the very key aspects is OpenAPI. So for organizations that first do an API developer contract, if you have an OpenAPI spec, you could even validate that. And you could take it to the next level by implementing your own custom validations to tweak it.
And again, we'll go more into the code aspects, how the WebAssembly is integrated, what role it plays there, and how you can easily achieve this. So the idea is you shouldn't have to jump through a lot of hoops. Typically, anything and everything in Enroute can be done using one command. And once you execute that command, it sets it up, and you can just customize it. The verification aspects include, say, you want to ensure or verify the integrity of the message which is incoming. So you could perform, say, HMAC verification. You could verify some signatures if you wanted to. Essentially, any verification code you wanted to run in an intermediate ingress gateway, you could do that. And eventually, say you want to transform a request. Transformation may involve running an XSLT against XML to perform transformations, a fairly standard use case. You might want to remove some sensitive information, so you could essentially parse a request and remove sensitive information from it, some form of data loss prevention, or any kind of custom request transformation. So that's more or less the power of WebAssembly. It essentially opens it up completely to run your own custom logic and your own business-centric use case at the ingress, in a language of your choice. All right, moving on. If there are any questions, I'm happy to answer; otherwise, I'll just move on. OK, so what we'll cover today is: at a high level, understand what the Wasm support looks like in Enroute, install an example workload, make the service externally available, and create a WebAssembly filter. So Enroute very closely follows the Envoy filter architecture. And what we have realized is that makes things extremely simple, so you don't have to learn two different pieces of software. Envoy is a fairly popular, graduated CNCF project. And when you look at Envoy, you go, OK, how do I enable this filter? How do I run the following things in Envoy?
So what we have said is you don't need to learn an external ingress gateway or the abstractions over there. What we have done is make it declarative. So you could just say, OK, these are the three filters I need to enable, and they'll just work on Envoy. So Enroute forms a very lightweight shim and makes it extremely simple to run the ingress API gateway. So what we'll do is create a WebAssembly filter. Again, Envoy has a WebAssembly filter plugin, so we'll just enable that using the Enroute shim. And then we'll run our custom code. So there is some detail here in how the WebAssembly interacts with the code. WebAssembly has a proxy-wasm interface; I'll quickly go over it, but we can talk more about the details if there are questions. Let's run through the basic stuff. I'll just touch on a few things here and there, and then if you have more questions, I'm happy to go into more detail. But the idea is WebAssembly is a runtime that runs inside web browsers, and now inside Envoy proxy. So how does WebAssembly interact with your program? There is a proxy-wasm standard, or specification, that you can look at, which essentially defines how WebAssembly talks to a proxy, or the callbacks between WebAssembly and the proxy. And then there are SDKs in different languages; we are going to use the Golang SDK today. There are SDKs in JavaScript, there are SDKs in Python. You can choose a language and an SDK. Today, we are going to use the Golang SDK, which essentially compiles a Go program into WebAssembly, which can then be loaded by the WebAssembly runtime. So the code is going to perform the validation, verification, and transformation. But this is how the whole thing comes together. Taking a step back: you have your Kubernetes cluster. You have your ingress service, which allows the north-south traffic to traverse into the cluster. Inside that ingress, you run Envoy.
Inside Envoy is a WebAssembly filter. The WebAssembly filter has the proxy-wasm interface. The proxy-wasm interface talks to the SDK of your choice in your language, Golang in our case. And those Golang callbacks invoke our code. So as traffic comes in, the code gets executed, which is our custom code for validation, verification, and transformation. So I'm ready to jump into the demo, but I'll just give a few seconds if there are any questions. OK. OK, so moving on, I hope everyone can see my screen. If you don't, I can increase the font size. For the sake of this session, we won't go into the installation of Enroute; it's fairly well documented on the website. But you can assume that when you have a Kubernetes cluster, you can install an ingress on top of it, and then you can expose a service, which is, again, very well documented. So we'll just go over some of those steps by looking at what the installation looks like. The installation is fairly straightforward, and there is a lot of detail if you go to the website on the Getting Started page. This is about the time I'd make a UDP joke, but you might not get it. So this is the Getting Started guide, of course. It talks about how to install Enroute using Helm, and how you can quickly set it up. And then there are sections on, when you set it up using one command, what the different pieces are that are getting programmed, and how this correlates to what you have used when running that command. So you could say, I want to enable these five or seven filters, like JWT or L7 rate limits. Again, talking about rate limits: for Enroute, the L7 rate limit function is completely free. It's typically a paid feature in every single other place that we know of. The Getting Started guide also covers exposing the service externally and enabling SSL on it, again, using one command. So typically you'd be using cert-manager, which is fairly common. I'm not sure if cert-manager is a CNCF project, but maybe it is.
But the idea is that you could use that. So we won't go over those aspects; we already have an instance of Enroute running. And then we are going to go over the WebAssembly aspects: how do we enable WebAssembly? What does the code look like? How do we compile that to WebAssembly? How do we create a container out of that code? How does that container get loaded? How is the WebAssembly plugin extracted from that container? And how does that code get loaded into the runtime and executed? Those are the pieces we'll cover, and this is sort of the prerequisite for those pieces. So you can take a look at it over here, and we'll jump right into the WebAssembly aspects of it. So let's verify the installation. What we see here is we have Enroute installed; that's the instance in the namespace enroute-demo. And then we have an example workload installed, and we have exposed that workload externally using the Helm chart again. It's called the httpbin service policy. So what we have said is, for the httpbin service, let's set up a policy by declaring, OK, let me enable these four or five plugins for that policy. And we have run that Helm command to expose it externally. So of course, we have the service of type LoadBalancer, which has an external IP, and there's a deployment. We have just created one replica for now, but you can create more replicas if you like, the same way. So we have an httpbin service policy. We have also set up an external auth service in case you want to run JWT against Auth0. It gets automatically set up, but if you don't need it, you can remove it. So that's what the installation looks like. The next step is checking the Wasm plugin. Again, like I was mentioning earlier, Enroute is a lightweight shim, and the idea is that there is more or less a one-to-one correspondence between the plugins that Envoy has and what Enroute exposes. So WebAssembly is a plugin, and you can very simply enable that plugin. So here is what the filter looks like.
So this is a WebAssembly filter you can program on Enroute. And all it says is: download this image. It's essentially a Docker container, and this image has your Wasm plugin. So the idea is it gives you the ability to externally load an image. Now, there is a fair bit of documentation out there, but the community has come up with a way to package Wasm plugins and load them into the WebAssembly runtime. And this is also a popular mechanism that Istio uses. So what we have done is taken a similar approach of downloading the Wasm plugin from an external container. And there is a way to package that container using the compat variant, which, again, is documented in the community, so you can look it up. Enroute is the first gateway which runs Wasm for free. I mean, there's no enterprise license or anything that you have to buy. You just point your filter to the right Wasm container; it'll download it, it'll extract the plugin, and run it. So what you see here in the filter is essentially saying, hey, there is a container, vvt-json — so validate, verify, transform for a JSON body — which you can download from a Docker repository; it will then extract the plugin and load it into the WebAssembly runtime on board. Questions? Yes, so far there's nothing. But if anyone does have any questions, please feel free to throw them into the chat, and we'll definitely get those surfaced and asked. All right, so moving on, this is how the WebAssembly filter is created. And then let's continue with the demo. So what we have here is, we are first going to show, OK, what does a request look like? And we will be able to run that request normally until we enable the WebAssembly filter. And then we'll say, OK, what if the digest for the request doesn't match? So the idea behind the demo is the following. When a request comes in externally, we want to be able to verify that request.
The way we will be verifying the request is by doing a hash calculation, a SHA-256 hash calculation, of the message body. Now, the expectation is that in a header, you will be passing that hash. So, do you want me to take this question, Taylor? I just see a question here. Absolutely. So the question is: the generated Wasm is based on Go, right? Does Enroute also support Rust-generated Wasm? That's a great question. Absolutely. That is the beauty of WebAssembly. There's nothing special about it in Envoy or Enroute; they both leverage the flexibility of WebAssembly. So if you write your code in Rust, or C++ for that matter, or Golang or JavaScript, once you compile it into WebAssembly, it's just WebAssembly bytecode. And that WebAssembly bytecode can be run inside the WebAssembly runtime. Now, the idea is there has to be a proxy-wasm ABI interface for that language. Maybe there is an SDK out there for Rust; I'm fairly confident there is. I might have to look it up, but I know that there is a good community around the Rust proxy-wasm, the C++ proxy-wasm, the Golang, and the JavaScript in the community. So I think absolutely, you can run Rust programs inside the WebAssembly runtime when they're compiled into WebAssembly bytecode. Yes. So going back to the validation, verification, and transformation steps, the high-level flow is going to be the following. We are going to first say, when an external request is coming in, I don't trust that request; I don't trust that the request has not been modified in the middle. So I'm going to verify the content. The implementation details are the following: when you get a body for a request, you are going to compute a SHA-256 of that body. So you'll compute a SHA-256 hash, and then you'll compare it against the hash which has been sent in the request. Now, the hash which is sent in the request is in the header. And there can be other implementation details, depending on how it is implemented in a stack.
But for the very basic case, we're going to verify the hash against what is coming in the header. So initially, we'll start with no WebAssembly. We'll send in a request; we'll send something without a header. Then we'll enable WebAssembly, and we'll see how it rejects a request if the hash doesn't match. Next, what we'll do is JSON validation. So if the incoming body is JSON, we need to validate that it contains a specific set of elements that we want it to contain. So again, we throw away requests which are not good, which don't match our schema. So we'll see how the code that we have written, running in WebAssembly, performs those validations for JSON. And the last step is going to be that there could be personally identifiable information which I don't want to send to the backend server in this specific case. It may be allowed in other cases, but maybe in this case, we don't want to do it. So in the last step, we show how we can transform the request very surgically and programmatically, a very pinpointed modification, to show how you can redact a certain set of information or perform any kind of transformation at this step. Again, this is done using Golang code which gets compiled into WebAssembly, or it could be Rust code for that matter. So that's the high-level plan. Now, let's send a request right now. So that's the request. What we did is we sent a request, and we can see that the request is going through right now. There are no issues with validation or verification, and there is nothing in there. What we are sending as the body of the request is just an empty JSON, and it's not complaining. Even if it were not JSON, it wouldn't complain, because right now the WebAssembly validation and verification is not enabled. So everything is going through the ingress. We are going to now enable the verification aspect of it.
So maybe it might help a little bit to go over this first. What we're going to do is enable the WebAssembly plugin that I showed you in the last step, and the way to enable it is to say that for all my requests coming in, just pipe them through the Wasm filter. So again, it's similar to the piping mechanism: when you get output from one, you send it to the other. So what we are saying is we are going to pipe it through the Wasm filter. Now, this is how it has been set up: when we exposed the service externally, we exposed it using a couple of filters. One of the filters was the rate limit filter, and the other was the Lua filter. Basically, programmatically, when we set it up, we could have said, OK, set up these five or three filters, but for this demo, we set it up with only a couple of filters, the Lua filter and the rate limit filter. And now we are going to enable the Wasm filter. So we'll just add these lines. What I'm doing here is adding the Wasm filter to all my traffic. And now I have the Wasm filter enabled. So let's send a request again, and it says: failed to validate JSON, because it doesn't have valid JSON. What happened here? Let's go take a look at the code. So the message we are getting is "failed to validate JSON schema". We'll go ahead and look at the code. So here we are hitting the validate-JSON code. Again, let me take a step back and explain what the code is doing; I never got a chance to do that. At a high level, the code is organized into callbacks provided by the proxy-wasm interface. Any time the body comes in, the proxy-wasm interface defines a request-body callback, and that is implemented by the SDK. So now, using that SDK, what we are saying is: when the request comes in, I'm just going to accumulate the request, and I'll continue accumulating the request until I get the complete body.
So what I now have in this piece of code is: let's keep accumulating the body until I've seen the complete body. Once I have the complete body, I'm going to get it into a byte stream. So the JSON body is a buffer where I'm going to store the complete body. Next, I check whether a digest value was read. This is the verification piece. Again, this code has been set up such that if I do see a digest value set inside the request, I'm going to perform the verification; if I don't see a digest value, then I'm not going to perform it. And I'm going to try and verify the digest against the JSON body. If it matches, all good; if it doesn't match, then we are just going to send a 400 error. But we are hitting the validate-JSON code because we did not send a digest. So now let's go ahead and send a digest and see what it does. So here is how you add the digest, by adding that additional header. And again, this can be specific to your implementation, but this is how this implementation works. So now we're getting an unknown prefix in the hash. The expectation of this code is that it can only verify digests in SHA-256 or SHA-1. So we are just going to fix that. So again, let's take a look at this. I can take questions if there are any. There's a lot to cover, so I'm just going through this assuming folks are with me, but if you are not, feel free to interrupt and ask questions. I'll take a moment, a few seconds, for any questions. I think one question that I had for you was, when getting involved with Wasm and understanding how all that fits together — I'm kind of new to this space myself — I feel like it was a big thing to discover that Wasm is more of a targeted runtime for things, understanding the bindings and how you put all of this together.
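The accumulate-until-complete pattern can be sketched as a plain Go type. Names here are hypothetical; in the actual filter this happens in the proxy-wasm request-body callback, which keeps returning a pause action until the end of the stream is seen:

```go
package main

import "fmt"

// bodyBuffer mimics the accumulation done in the filter's request-body
// callback: buffer each chunk and only act once the full body has arrived.
type bodyBuffer struct {
	data []byte
}

// onChunk appends a chunk and reports whether the complete body is now
// available. In the real filter, a false return corresponds to pausing
// the filter chain until more data arrives.
func (b *bodyBuffer) onChunk(chunk []byte, endOfStream bool) bool {
	b.data = append(b.data, chunk...)
	return endOfStream
}

func main() {
	var buf bodyBuffer
	fmt.Println(buf.onChunk([]byte(`{"a":`), false)) // not complete yet: keep pausing
	fmt.Println(buf.onChunk([]byte(`1}`), true))     // complete: run validation now
	fmt.Println(string(buf.data))
}
```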
Can you share some of the things that you found interesting when starting to work with Wasm? Does my language support this? How do I start building with this? Do you have any insights on that front? That's an excellent point, Taylor; thank you for asking that. So for Wasm, if you want the simple answer, I would say pick up an SDK and start implementing your code around that SDK. And then there will be ways to compile that code into Wasm bytecode using the SDK or some compiler. Now, Wasm support is still coming up, and I think it's an excellent point you make, because Wasm support is still coming up across languages. Today, if you look at Golang, Golang has a WASI interface. So essentially, if you wanted to compile Golang into Wasm code, the compiler has to support that. And today we are using a compiler called the TinyGo compiler, which does not have support for all the libraries. For instance, I'll tell you an example of what we encountered while building this demo piece: encoding/json, which is a fairly popular and extensively used library in Golang, is still under development there because it uses the reflection package, and the reflection package is not yet ported. So today, you cannot build a program using reflection in Golang and compile it into Wasm code, because the support is still coming. That's why I think it's a great question. This will eventually happen, but today maybe the support for C++ or Rust or JavaScript is further along; it is still in progress, and it's very promising. These things will eventually get in there, but one thing to keep in mind, like you mentioned, Taylor, is that when you pick up a language, one aspect is how comfortable you are with the language, but another is understanding the kind of support the language has when it comes to Wasm.
So today, as of today, there is a possibility that you might be trying to build some code that might not compile, or those libraries may not be supported for Wasm compilation. So you have to be cognizant of that fact, because it might happen that you go down that route and don't find that support. Let me add one more thing. For the JSON validation, we started with the encoding/json package, but we quickly realized that the reflect package inside Golang is not supported today. So we took a step back and said, OK, are there any JSON libraries today that let us do the validation we need while still being able to compile to Wasm? And the beauty of it is that we did find some. We were able to express our validation requirements using the libraries that are supported today, so we were able to achieve this today. But again, there is a lot of action in the community, there's a lot of interest in Wasm; it'll eventually get there. But it takes some form of evaluation before you jump in and say, I'm going to use Rust for my use case, or JavaScript for my use case. It might help to first check the language support, because the support might still not be there. So great question, yeah. Thank you, thank you. And really enlightening too. I think that, like you said, as we see different languages start to adopt these things and we get higher levels of support for different frameworks and libraries, it's really going to be an interesting place. I remember when WebRTC came out and it was like, can all browsers support this? This would be fantastic. But it looks like we're finally getting there. My hope is that we're going to see that on a much more abbreviated timeline, with that much faster adoption on that front. But fingers crossed; thank you to everybody working on that as well. I did see a couple of other questions come in. Like the way you added the Wasm plugin as a filter, what other kinds of filters can be used?
Like an auth filter or something like that. So again, a great question. The idea behind Enroute was: how transparently can you add any filter that Envoy has, and extend it using a custom filter? So the answer to that is any filter, literally. Today we have enabled CORS, we have enabled Lua, we have enabled WebAssembly, we have enabled external authz; all these filters are out there, and you can enable them on Envoy using Enroute very easily. So absolutely any filter. And tomorrow, if you say, I'm going to add my custom filter, again, that is all incorporated. But the idea is: how do you make it easy? Because with all this extensibility, you need to be able to wrestle with that complexity too. So the idea is, how do you make it simple so that it is intuitive? When you go look at Envoy today, or when you go look at your architecture today, how are you going to easily say: I know that my service or my solution, because it's an edge service, needs SSL, it needs circuit breaking, it needs CORS, it needs Lua, it needs Wasm, maybe it needs auth? How am I going to selectively and quickly say, these are the five or seven things I need, at a high level, and transform those requirements, very easily and intuitively, into what is being run for my service, while continuing to track all this? So the answer to that is: any filter. I think there's one more question: is there any mechanism to gather metrics for these routes that you enforce? Yes, there are. As the Wasm filter develops in Envoy — generally, when Envoy builds a filter — again, a great question. One of the reasons we see people replacing other proxies with Envoy, when we talk to people, is its stats, observability, and the amount of telemetry you can collect from the proxy.
For WebAssembly, I'm pretty sure there will be more stats coming out in Envoy proxy, but you can also have additional mechanisms to extract those stats. Now, Enroute, for one, integrates with — in fact, uses — Prometheus to export its metrics. There is no reason why any of these Wasm plugins couldn't also be extended to do some of that. So absolutely, there are several mechanisms to achieve that. I'm pretty sure one of them is that Envoy proxy will have additional stats coming out for WebAssembly, and the other is some form of instrumentation with existing frameworks like Prometheus or other stats infrastructure to expose more metrics. Awesome, thank you everybody for your questions. Keep them coming if you've got any to ask. I know that you have more of a demo to get through, but definitely please keep asking questions. The next one was: what is the performance impact of calling one or multiple Wasm filters? Meaning, probably, copying all the request bits from the ingress memory stack to the Wasm stack, right? Right. So we haven't run any formal performance numbers, but one of the key reasons people like WebAssembly is its performance, apart from the flexibility. Now, is there a copy of the whole body coming in? I think there is, but I haven't really looked at the details, and I could be wrong here, OK? I'm not claiming that I'm right. But when I was writing this, it did cross my mind: am I making a complete copy of the request body that is coming in? And my sense is that we are, because when you go back and look at the code here — you could potentially trace this call — what we are doing is getting the request body, which we have accumulated. Now, is the request body pointer pointing to a buffer which has already been allocated, or is it being copied? I think there is a copy inside the Wasm runtime itself, because all this code is running there.
So I'm really not completely sure about this, but if you ask me to take a guess, I'll tell you that yes, the whole body is being copied into the Wasm runtime. And again, this is a guess, so I may be inaccurate here, but there is a potential cost to it, yes. It could be — I mean, we haven't run any performance numbers, but it could be so. That's again a good question. So when you choose to go down this route, it might help to understand whether there is a big performance hit coming from Wasm. What we have seen, typically, in all the conversations we have with our customers or other prospects we talk to, is that when they replace any other API gateway out there with an Envoy-based solution, they see at least a 3x or 5x improvement in performance, because Envoy is natively written in C++ and has this asynchronous callback mechanism running a big event loop on top of libevent. So it's extremely performant. So my feeling is that generally, if you're coming from a legacy gateway, you're going to see huge wins in terms of performance. When it comes to WebAssembly, there might be a hit there. Again, I cannot confirm what's going on there, because we don't have any data to answer that question. So cool, let me go ahead with the demo. So again: validate, verify, and transform. We were seeing that the JSON validation was failing, and after looking at the code we said, OK, you need to pass a digest header for the verification to kick in. So now we are running the verification piece. When we give a digest header, we see that we get an unknown prefix in the hash. The code expects a SHA-256 prefix, so let's just give it a SHA-256 prefix. Now it's saying that the received digest doesn't match the computed digest, right? So let's just go back to the code and see what it's doing there.
So we received a digest which was sent to us in a header. We computed the digest, but they don't match, so we are getting a 400 response for that. Let's just quickly compute the digest and give it the correct one. The way we can compute the digest is by using OpenSSL, and then we pass that digest. So instead of the random digest, we pass the digest that we computed, and now we are getting "failed to validate JSON". So what happened was: the same request with a random digest was failing. We computed our digest and said, I'm going to send that digest now. And now it's happy. Now that it gets the digest, we are able to go through; it verified the request and said, hey, the request looks good, the checksum matches. So let's just quickly look at that code. The verify code checks the prefix; again, this was the prefix part, and we sent it a SHA-256 prefix. Then it calculated the SHA-256 hash sum here and said, OK, this is my computed value. And then we just take the computed value and pass it back to the main program. So again, bytes in: going back, we read the complete body here; until we received the complete body, we returned ActionPause. Now we have the complete body. Once we have the complete body, we send it to the verify call. The verify call computes a SHA-256 and extracts the digest from the digest header. So now we have what we received in the header and what we computed, and then we just compare the two and say either they match or they don't. So I think what we saw earlier was that it wasn't matching, and you could see this message, "received digest doesn't match". And now we see that the received digest matches the computed digest, and we are able to move forward. So now we are running into the next problem: we are not able to validate the JSON.
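The client-side digest computation described here can be done with OpenSSL as in the demo; the header name and URL below are placeholders, since the exact header this filter reads is specific to its implementation:

```shell
# Compute the SHA-256 of the request body and format it with the prefix
# the filter expects. openssl prints "SHA2-256(stdin)= <hex>" (or
# "SHA256(stdin)= <hex>" on older versions); awk grabs the hex field.
BODY='{}'
DIGEST=$(printf '%s' "$BODY" | openssl dgst -sha256 | awk '{print $2}')
echo "sha-256=$DIGEST"

# Then send it in a header alongside the body (header name and URL are
# placeholders for this sketch):
# curl -H "x-digest: sha-256=$DIGEST" -d "$BODY" http://<external-ip>/post
```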
So I'll just pause for a moment and see if there are any questions I should take up right now, or maybe I should just finish this and move forward. We can do it either way; I don't have a preference on my end. And I did just see one question come in: is there any way to integrate R-based code deployments in K8s? I think that's somewhat of a separate topic, but I've personally seen a few different R workloads, and how you set up your workspace on Kubernetes is one topic that comes up. There might also be some policies to enact on your data sets and things like that. Yeah, so you're correct, Taylor, I second that. So no, that is not covered here; R-based deployment is essentially how you set up RStudio and the workspace and the environment to run some ML workloads or ML programming. So moving on, I'll quickly finish a few things on the demo side. I see one more question which I'm going to take eventually, but let me finish this first. So yeah, this "failed to validate JSON": let's go back and see. Whatever body is coming in, we try to validate its JSON. What we are saying here at a high level is: ensure that my JSON contains the following. Inside data, the first element contains a type and an id, and inside that first element's attributes there is a title, body, created, and updated. The validation also checks that the created and updated values follow the expected time format. If any of these checks fail, it flags an error and says, this doesn't look right to me, so I'm going to discard this request with a 400. That's what we are seeing here. So let's see what a valid JSON looks like.
So, okay, that's the valid JSON. Inside the data element. Let me just copy this; it's easier to look at. For people looking at jq, I know that's quite a popular tool for introspecting JSON and filtering it, I did see one called zq that I'm curious to check out. There are so many new command-line tools lately, and so little time. I see, what's it called again, Taylor, zq? Yeah, I believe it's zq. I'll check it out. I know jq, which is fairly popular; I haven't tried zq yet, sure. So comparing how this looks with what we check for validation: inside the data element, in the first element, I want to check there is a type and an id. Inside the first element's attributes, there is a title and a body, and then there is a created and an updated. And then I want to check created and updated. So what this code does is verify all of these are present, and if they're not present, it flags a failure. Then it tries to parse the times to confirm created and updated are in the correct time format, right? So this is the validation it does for the body. Now, typically in the other popular gateways these are all premium features which they charge for: you buy these features, enable those plugins, and go through that extensive configuration. This is just so much simpler and so much easier. And of course it's free, because WebAssembly support is native to Envoy, and Enroute just exposes that support. We flip a few switches and bits here and there, and things just start to work. There's really no cost to building or creating this, right? So again, I think the flexibility is amazing.
The performance is there, and there's no cost involved, because it's all Envoy-native, right? So anyway, this is our time.Parse call; it verifies that the created date and the updated date are actually valid dates, right? So let's go ahead and run the code again. What we have here is the digest doesn't match again; the verification is kicking in. We are sending some digest that's apparently not right, so let's compute the digest and verify it's okay. So now we're sending an actual JSON request, and we see two things happening. One, the verification went through: it likes the digest, so that gate is open for us; the digest matches, it's a valid message. The next step is validation. If you look at the flow of the code, the validate call comes after the verification call: the first call is verification, to check the message is okay; the next call is validation; and the third call is transform. Now that I look at it, I feel the right place for the validate call is inside the branch where verification succeeded, so we only validate the request if we are able to verify it. But anyway, the next call is validate. So let's quickly watch it happen. When I see a request coming through, it means the request got verified with the digest, so it's allowed to go through. Next is the validation of the JSON, and we see the validation went through too, because it's a 200. Why? Because the body has all the aspects we are checking for, right? It has the attributes created and updated, which are both time values that we correctly parse; it has a title and a body; and it has an id and a type. We could add more validations, but these validations passed, so it was happy.
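The validation step described here can be sketched as a small self-contained Go function. The field names and the RFC 3339 time format are my assumptions from the walkthrough, not a copy of the demo's filter code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// validateBody mirrors the checks in the demo: data[0] must carry a
// type and an id, its attributes must carry title, body, created, and
// updated, and the two timestamps must parse as valid dates.
func validateBody(raw []byte) error {
	var doc struct {
		Data []struct {
			Type       string `json:"type"`
			ID         string `json:"id"`
			Attributes struct {
				Title   string `json:"title"`
				Body    string `json:"body"`
				Created string `json:"created"`
				Updated string `json:"updated"`
			} `json:"attributes"`
		} `json:"data"`
	}
	if err := json.Unmarshal(raw, &doc); err != nil {
		return fmt.Errorf("failed to validate JSON: %w", err)
	}
	if len(doc.Data) == 0 {
		return fmt.Errorf("failed to validate JSON: missing data")
	}
	first := doc.Data[0]
	for name, v := range map[string]string{
		"type": first.Type, "id": first.ID,
		"title": first.Attributes.Title, "body": first.Attributes.Body,
	} {
		if v == "" {
			return fmt.Errorf("failed to validate JSON: missing %s", name)
		}
	}
	for name, ts := range map[string]string{
		"created": first.Attributes.Created, "updated": first.Attributes.Updated,
	} {
		if _, err := time.Parse(time.RFC3339, ts); err != nil {
			return fmt.Errorf("failed to validate JSON: bad %s: %w", name, err)
		}
	}
	return nil
}

func main() {
	good := []byte(`{"data":[{"type":"articles","id":"1","attributes":{"title":"t","body":"b","created":"2022-01-01T00:00:00Z","updated":"2022-01-02T00:00:00Z"}}]}`)
	fmt.Println(validateBody(good))                    // passes
	fmt.Println(validateBody([]byte(`{"data":[{}]}`))) // missing-field error
}
```

Inside a filter, a non-nil error here would translate into the 400 response seen in the demo.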
The key interesting aspect now is the transformation, right? Remember I mentioned you could validate, verify, and transform the request. Take a note here of the dob, the date of birth: it has been redacted, it has been removed. If you look at the original request, there is a valid value for it, and after you run the request, that value is gone. So this shows the request transformation. Let's quickly look at the code. What we are saying is: for the path data[0].attributes.dob, if I find that path, I'm just going to set it to all zeros. That's the transformation piece; we are replacing the body with a new body. And again, going back to the earlier comment about memory copies of the body, it's a great question. I don't know if copies are happening here; they could be, and it's definitely worth checking the cost of this operation. In my mind, Wasm is fairly performant, but again, we haven't run any benchmarks to say whether there are more optimized versions or a better way to achieve this. But anyway, this is the transformation piece, where you say that if I find a date of birth, I want to redact it, and you can see in the request body that it has been removed. So that's pretty much it; that's the end-to-end validate, verify, and transform, which we also have here. Again, this is the article you can check on the main webpage, which talks about the same thing. So that's all I had. Let me take a step back and provide a summary of what's going on here. What we are saying is that WebAssembly provides a flexible mechanism to extend your ingress, right? Envoy natively has a WebAssembly runtime, which you can access using a filter.
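The transform step above can be sketched in Go as well. The path and the all-zeros replacement follow the demo's description, but the helper name and the exact placeholder value are my own:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// redactDOB sketches the transform: if data[0].attributes.dob exists,
// overwrite it before the body moves upstream, then re-serialize the
// document so the rewritten body replaces the original one.
func redactDOB(raw []byte) ([]byte, error) {
	var doc map[string]interface{}
	if err := json.Unmarshal(raw, &doc); err != nil {
		return nil, err
	}
	if data, ok := doc["data"].([]interface{}); ok && len(data) > 0 {
		if first, ok := data[0].(map[string]interface{}); ok {
			if attrs, ok := first["attributes"].(map[string]interface{}); ok {
				if _, present := attrs["dob"]; present {
					attrs["dob"] = "0000-00-00" // redact in place
				}
			}
		}
	}
	return json.Marshal(doc)
}

func main() {
	in := []byte(`{"data":[{"attributes":{"title":"t","dob":"1990-05-17"}}]}`)
	out, err := redactDOB(in)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // dob is zeroed, everything else survives
}
```

In the filter this rewritten body is what gets handed back to the host, which is why the upstream service never sees the original date of birth.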
To write code that runs inside the WebAssembly runtime, you use an SDK that complies with the application binary interface provided by Proxy-Wasm. Think of Proxy-Wasm as the glue that joins the WebAssembly runtime to your code, right? What it means is you essentially have callbacks in your code which you override: you say, when I get this callback, this is what I'm going to do. You get those callbacks and do your verification, validation, and transformation inside them; those callbacks are invoked by your ingress through the WebAssembly runtime, which eventually calls your code to perform all these functions. And you can pick a language of your choice. Like Taylor mentioned earlier, it might help to look closely at what language you choose, because one aspect is convenience, and the other is how much library support it has and how mature the ecosystem is for compiling code in that language to WebAssembly. So that's pretty much it. Check out the space on the website; right now it's more of an overview of what you can achieve and the capabilities, but we're going to add more details on how to do all of this. The code will all be open, so we'll push it to GitHub; we got this going pretty late, so we haven't had a chance to push it out yet. One additional thing: loading code into the WebAssembly runtime should also be flexible. What Enroute does is use the compat variant of the Wasm image format, which is fairly popular, or rather defined, in the Istio community. You can take a piece of code, build it into a WebAssembly plugin, and package it into the compat variant. Then, when you download that Docker container from a remote registry, it can be unpacked and the plugin loaded.
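The callback-and-pause flow described here can be illustrated with a small stdlib-only sketch. A real filter would use a Proxy-Wasm SDK; the `Action`, `ActionPause`, and `bodyFilter` types below are simplified stand-ins of mine, not the SDK's actual API:

```go
package main

import "fmt"

// Action mirrors the Pause/Continue signal a Proxy-Wasm filter
// returns to the host after each callback.
type Action int

const (
	ActionContinue Action = iota
	ActionPause
)

// bodyFilter buffers request-body chunks until end of stream, then
// runs its processing hook once over the complete body, which is
// where verify, validate, and transform would go.
type bodyFilter struct {
	buf     []byte
	process func(body []byte) Action
}

// OnRequestBody is the callback the host invokes for each body chunk.
func (f *bodyFilter) OnRequestBody(chunk []byte, endOfStream bool) Action {
	f.buf = append(f.buf, chunk...)
	if !endOfStream {
		return ActionPause // keep buffering until we have the whole body
	}
	return f.process(f.buf)
}

func main() {
	f := &bodyFilter{process: func(body []byte) Action {
		fmt.Printf("processing %d bytes\n", len(body))
		return ActionContinue
	}}
	// The host delivers the body in two chunks; we pause on the first.
	fmt.Println(f.OnRequestBody([]byte(`{"data":`), false) == ActionPause)
	fmt.Println(f.OnRequestBody([]byte(`[]}`), true) == ActionContinue)
}
```

This is the same "read the complete body, return ActionPause until end of stream" pattern from the verify walkthrough earlier; the SDK simply wires `OnRequestBody` up to the Envoy host through the Proxy-Wasm ABI.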
So this is really flexible, and it makes it very easy, right? Just to conclude: here is the container it's downloading from the saaras-io Docker Hub, and vvx-json (validate, verify, transform JSON) is the container in the compat format. It downloads it, unpacks it, and loads it into the WebAssembly runtime. So that's all I had, and I'm happy to answer questions now. Awesome, awesome, awesome. Well, unfortunately, I do think we are at time, so we'll close things out and end them here. Is there any good place for people to contact you or the team, Chintan? Absolutely, there is a Slack link on the website, and some of us will definitely get back to you quickly. Slack is the best way to reach us, and there's also a contact page on the website. Awesome. Well, wonderful. Thank you, everyone, for joining us for today's Cloud Native Live. It was great to learn from Chintan. And yes, please do reach out to him if you have any more questions on Wasm, on Enroute, and anything in between. We really enjoyed the interaction and questions from the audience. Thank you so much for joining us today, and we hope to see you again soon. Thank you all so much. Thank you, Chintan, and hopefully you have a wonderful Wednesday. Thank you, everyone.