Hello, everyone. Thank you for attending my session: introducing MetaProtocol Proxy, a layer-7 proxy framework powered by Envoy. My name is Zhao Huabing, and I'm from Tetrate. I work on Envoy, Envoy Gateway, and Istio. First, I will give a little background on why we need a so-called MetaProtocol framework. Then I will introduce the architecture of MetaProtocol Proxy and how it works, then some use cases, and finally a demo.

When we talk about microservices, most microservice applications just use HTTP for inter-service communication. But in some use cases, as I saw when I worked at my previous company, like some internal applications, gaming applications, or streaming services, they may have their own proprietary protocols for inter-service communication, because HTTP was originally designed for transferring data, like documents and resources on the web, not for RPC. So they may want to use some other protocol, like Thrift, Dubbo, or an internal protocol. And we also have messaging, caches, databases, and other layer-7 protocols in our microservice applications. But if not all, then at least most sidecar and edge proxies don't understand these protocols at layer 7; basically, they treat them as plain TCP data.

So if we put a proxy in front of our application, what we really want is traffic management at layer 7: for example, load balancing at the request level, rate limiting at the request level, and also routing and observability at the request level. We want all of this functionality at layer 7. And what we actually get is just layer-3/layer-4 traffic management: routing based on IP address, TCP port, or SNI; connection-level observability, like stats based on TCP sent/received bytes and opened/closed connections; and security based on connection-level authentication and authorization. That's it.
So that's the reality, but not what we want. If we look into the different layer-7 protocols, we can find that the processing of these protocols in a proxy actually looks quite similar. First, we extract some key-value pairs from the layer-7 headers, and then we do whatever processing we want on those headers: routing, observability, security, and so on. For example, take HTTP/1.1, HTTP/2, gRPC, Thrift, Dubbo, or any other RPC protocol. For service discovery, we basically use some field in the header as the destination. For HTTP/1.1, it's the Host header. For HTTP/2, it's :authority, a pseudo-header. Other protocols also have something similar in their headers for service discovery. And for other processing, we likewise use the layer-7 headers, the key-value pairs in the header. So it's all very similar.

Do we really need to create a dedicated proxy for every protocol? I don't think so. In the world of layer-7 protocols, managing traffic is usually done in a similar way. So instead of building a dedicated filter for each protocol, we can have one very generic filter; in our framework, we call it the MetaProtocol Proxy filter. That's what we wanted to create. It's a two-layer filter architecture, similar to the HTTP connection manager in Envoy. The MetaProtocol Proxy is actually a layer-4 filter in Envoy's layer-4 filter chain. All the common functionality, including load balancing, rate limiting, routing (both dynamic and static), traffic mirroring, tracing, metrics, logging, et cetera: all this functionality is similar across protocols, so actually it's the same, and we built it into the MetaProtocol Proxy framework. If you have your own protocol you want to support, if you want a layer-7 proxy for your own protocol, you just need to implement the codec interface: basically, a decoder and an encoder.
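To make the "it's all the same key-value extraction" point concrete, here is a toy Python model (hypothetical names, not the real MetaProtocol API): each protocol differs only in *where* the destination lives in its headers; once that key-value pair is extracted, the traffic-management logic is shared.

```python
# Hypothetical sketch: per-protocol header field used for service discovery.
DESTINATION_FIELD = {
    "http1": "Host",        # HTTP/1.1 Host header
    "http2": ":authority",  # HTTP/2 pseudo-header
    "dubbo": "interface",   # Dubbo service interface name
}

def extract_destination(protocol: str, headers: dict) -> str:
    """Protocol-specific extraction, protocol-agnostic result."""
    return headers[DESTINATION_FIELD[protocol]]

print(extract_destination("http1", {"Host": "demo.default.svc"}))
print(extract_destination("dubbo", {"interface": "org.apache.dubbo.DemoService"}))
```

Everything downstream of `extract_destination` (routing, metrics, rate limiting) can then be written once against the extracted key-value pairs.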
The decoder extracts layer-7 requests from the TCP byte stream and gets whatever headers you need for further processing. The encoder encodes the data and sends it to the upstream. That's the basic idea of the MetaProtocol framework.

There are two important data structures in this concept. The first one is Metadata, which you extract from a data packet using the decoder. Anything you think will be useful for your processing, you can just store as key-value pairs in the Metadata, and they can be used by the layer-7 filters in MetaProtocol Proxy, for example for route matching or rate-limit matching. Then let's look at the Mutation data structure. Basically, if you want to modify your request, you put whatever you want to change into the Mutation structure, and the encoder will use that information to encode the request and send it to the upstream.

Now let's look at the request path. First, when the decoder gets a request from the downstream, it extracts the headers of the request and populates the Metadata structure with their values. For example, if you have a header for the environment, say test or production, you can take it from the header and store that information in the Metadata. Then all the layer-7 filters, like the router, a custom filter, or a rate limiter, can get that information from the Metadata, match it against their configuration, and use it for processing, such as routing. Later, when the router has chosen the final destination for the request, the encoder gets all the mutations. For example, if you want to add a header, you add it to the Mutation, and the encoder will take that data from the Mutation, construct the request, and send it to the upstream. The response path is similar, just in the opposite direction. So that's how MetaProtocol Proxy works.
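The request path described above can be sketched as a toy Python model (the names and signatures here are illustrative, not the actual C++ implementation): the decoder fills the Metadata, a router filter matches the Metadata against its route configuration, and the encoder applies the Mutation before forwarding.

```python
def decode(raw_headers: dict) -> dict:
    """Decoder: populate Metadata with key-value pairs from the headers."""
    return dict(raw_headers)

def route(metadata: dict, routes: list) -> str:
    """Router filter: match Metadata against route config, return a cluster."""
    for match, cluster in routes:
        if all(metadata.get(k) == v for k, v in match.items()):
            return cluster
    return "default"

def encode(metadata: dict, mutation: dict) -> dict:
    """Encoder: build the outgoing request, applying the Mutation on top."""
    out = dict(metadata)
    out.update(mutation)
    return out

metadata = decode({"service": "demo", "env": "test"})
cluster = route(metadata, [({"env": "test"}, "demo-v2"), ({}, "demo-v1")])
request = encode(metadata, {"x-request-id": "42"})  # Mutation adds a header
print(cluster, request)  # env=test matched, so the "demo-v2" cluster is chosen
```

The key design point is that the filters never see protocol bytes, only the Metadata key-value pairs, which is what makes them protocol-agnostic.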
Let's say you want to add a new protocol. What you need to do is just implement the codec interface. If you look at it, it's just three methods. The first one is decode: you get data from the TCP connection buffer and populate the Metadata with the key-value pairs you extract from the header. For the encode method, you use the Metadata and the Mutation structure to construct the outgoing request. And if there is any error during processing, for example the request has been rate limited, you can create an error response and send it back to the downstream. That's it.

Let's compare the work with and without MetaProtocol Proxy. If you want to create a layer-7 protocol proxy yourself, before, the work was huge, because you had to write a full-fledged layer-7 filter; just consider the effort of writing an HTTP connection manager by yourself. And after, it's manageable: very little work, just a codec implementation. I know people who did this in a few hundred lines of code, in one week, with one developer. So that's the comparison.

Right now, we already support more than ten protocols, including open-source and private protocols. Open-source protocols like Dubbo, Thrift, and bRPC are already supported, built into MetaProtocol Proxy. And some private protocols live in the users' own private GitHub repositories, so they don't want to open-source them. But overall, more than ten protocols are supported. As for use cases, I think the most significant one is the 2022 Winter Olympics online streaming services, which used private protocols for streaming. We also have use cases from Tencent Music, BOSS Zhipin, and other companies, and more use cases can be found in this issue. So finally, our demo. In this demo, we use MetaProtocol Proxy as a sidecar proxy.
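The three-method codec interface can be illustrated with a toy Python codec for a made-up `k=v,k=v\n` framed protocol (hypothetical code; the real interface is C++ inside Envoy): decode turns bytes from the connection buffer into Metadata, encode turns Metadata plus Mutation back into bytes, and on_error builds an error response for the downstream, for example when a request is rate limited.

```python
class ToyCodec:
    def decode(self, buffer: bytes) -> dict:
        """Parse one 'k=v,k=v\\n' framed request into Metadata key-value pairs."""
        line = buffer.split(b"\n", 1)[0].decode()
        return dict(pair.split("=", 1) for pair in line.split(","))

    def encode(self, metadata: dict, mutation: dict) -> bytes:
        """Construct the outgoing request from Metadata, applying the Mutation."""
        merged = {**metadata, **mutation}
        return (",".join(f"{k}={v}" for k, v in merged.items()) + "\n").encode()

    def on_error(self, reason: str) -> bytes:
        """Build an error frame to send back to the downstream."""
        return f"status=error,reason={reason}\n".encode()

codec = ToyCodec()
meta = codec.decode(b"service=demo,method=sayHello\n")
print(meta)
print(codec.encode(meta, {"env": "test"}))
print(codec.on_error("rate_limited"))
```

A real codec also has to handle partial frames in the buffer (wait for more data) and binary wire formats, but the shape of the work is the same.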
Actually, it's part of a project called Aeraki Mesh. MetaProtocol Proxy is on the data plane, serving as a sidecar proxy. On the control plane, we have Istio, and we also have another component called Aeraki to manage the non-HTTP protocols. That's the architecture. Let's go to the demo.

First, we install the Aeraki demo application, which just uses MetaProtocol Proxy as the sidecar proxy. If we look into the pods of the application, you can see that the sidecar proxy image is actually MetaProtocol Proxy, so it can understand non-HTTP protocols. And then you get layer-7 load balancing just by installing the demo application, without any configuration. If we look at the standard output of the application, you can see the requests have been sent to two different versions of the server, version 1 and version 2. That means the requests are already being load balanced at the request level.

Then you can see the access logs in the standard output of MetaProtocol Proxy. It's similar to HTTP, but a little different, because we have our own access log format. The first field is the application protocol; here it's Dubbo. Then the status of the request: zero means success. Then the request and response sizes and times, the request ID, the destination service, the destination IP, et cetera.

If you look into the Envoy configuration, you can get a sense of what's going on under the hood. Basically, the default TCP proxy has been replaced by MetaProtocol Proxy, and you get all the configuration: the access log, the router, the tracing, et cetera. Most importantly, there's the codec, which says which application protocol this MetaProtocol Proxy handles. Here the protocol is Dubbo, but you could also have Thrift here, or your own private protocol. And there are some filters for collecting metrics, plus the RDS configuration and the tracing. Yeah, that's it.
OK, then let's see how we do routing. If we look at this CRD, it's similar to VirtualService, but a little different, because VirtualService is specifically designed for HTTP. So we have our own CRD called MetaRouter. It's quite simple: this CRD means that we send every request for the demo service to version 1 of the demo service. That's it. Let's apply it. Then, if you look at the client side of the application, you can see all the requests are instantly routed to version 1, without any interruption on the client side. So it's dynamic routing: the routing rule has been sent out by Aeraki and Istio on the control plane via xDS. Then we switch the rule to version 2.

So that's routing. Next, traffic mirroring. We send requests to only one version of the server from the client side, but we can mirror the traffic to another version. This is useful if you want to take traffic from the production environment and feed it into your testing environment. With this rule, traffic is sent to version 1 and mirrored to version 2. If you look at the client side, all the requests are sent to version 1, but if we look into the server side, you can actually see requests coming in on the standard output of both server version containers.

OK, then global rate limiting. First, we configure a rate limit rule on the server: if the method is sayHello, we apply a limit of 10 requests per minute. We apply the rate limit rule, and you get this result: some of the requests are rejected because they have been rate limited. Of course, you can do local rate limiting as well, and it can be quite flexible: you can have a rate limit rule for the whole service, but you can also set conditions, like a rule for just a subset of the services.
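The demo's rate limit rule, a limit that only applies when the Metadata matches a condition, can be sketched as a toy fixed-window limiter in Python (hypothetical code, not the actual implementation; real global rate limiting goes through a rate limit service):

```python
class FixedWindowLimiter:
    """Allow at most `limit` matching requests per `window_seconds`."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.window_start = 0.0
        self.count = 0

    def allow(self, metadata: dict, now: float) -> bool:
        if metadata.get("method") != "sayHello":
            return True  # rule condition not matched: request is not limited
        if now - self.window_start >= self.window:
            self.window_start, self.count = now, 0  # start a new window
        self.count += 1
        return self.count <= self.limit

limiter = FixedWindowLimiter(limit=10, window_seconds=60)
results = [limiter.allow({"method": "sayHello"}, now=0.0) for _ in range(12)]
print(results.count(True), results.count(False))  # prints "10 2"
```

Because the condition is just a match on Metadata key-value pairs, the same mechanism works for any protocol whose codec populates the Metadata.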
Now you see some requests rejected because they have been rate limited. And then tracing. You get tracing out of the box without any modification, because the Dubbo protocol automatically passes the headers down in the thread context, so you don't have to modify any code to get tracing. And all the key-value pairs in the Metadata are populated into the trace as tags, so you get all the information. And the metrics: you get all the metrics. You can have cluster-level metrics, and you can also have Istio-compatible service-level metrics; for example, you can see the request-total metrics for the Dubbo services. OK, I think that's all for my presentation. Any questions?

[Audience] Do you have some performance measurements, for example, for a regular, fully implemented proxy versus the MetaProtocol alternative?

You mean a performance comparison? I don't have one right now. But I think we have one, not specifically for Thrift, but for another protocol, on our website, from some of our users. I think the main difference is the efficiency of the codec implementation. If you implement your codec efficiently, you can get high throughput; otherwise, you'll have some problems. OK, thank you. Great, thank you.