Hi everyone, today I'm going to talk about multiplexing TCP over the HTTP/2 stack. A little bit about me: my name is Liu Chengdai, I've been a software engineer at Google since 2019, on the Istio networking team, mostly working on the data plane. Here is the outline of today's talk: I'll introduce the background, the problem, the solution, and the real-world usage.

This is the traditional service mesh scenario. On the left side we have a TCP client, and the Envoy sidecar of the client runs the TCP proxy network filter to relay the byte stream from the TCP client to the server-side Envoy sidecar, which in turn uses its own TCP proxy to relay the bytes to the TCP server.

The variation today puts HTTP/2 into the stack. The changes are marked in red: instead of relaying the byte stream from the TCP client to the upstream server-side Envoy as-is, the byte stream is translated into an HTTP/2 CONNECT request, with DATA frames encapsulating the byte stream itself. On the server-side Envoy, the HTTP connection manager, as the network filter, terminates the CONNECT request, extracts the byte stream, and relays it to the TCP server. What problem this solves, and what we can gain from this varied structure, I will explain in the next slides.

In this slide there is a thumbnail of the whole structure in the top corner, and the highlighted Envoy is the TCP-client-side Envoy, which is responsible for relaying the byte stream into the upstream HTTP/2 request. This is done by our TCP proxy, but with the H2 extension to the TCP connection pool, which was recently developed by Alyssa (thank you, Alyssa). The H2 codec in the TCP connection pool does the magic of translating from the byte stream to an HTTP/2 CONNECT stream.

This slide is about the server-side Envoy. The server-side Envoy uses an HTTP connection manager living in the tunnel listener on port 80, which is a common HTTP port.
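To make the client-side piece concrete, here is a minimal sketch of this kind of configuration, assuming the `tunneling_config` field of Envoy's v3 TCP proxy API; the listener name, cluster name, addresses, and hostname are all hypothetical placeholders.

```yaml
# Client-side Envoy: accept raw TCP and tunnel it upstream in HTTP/2 CONNECT.
static_resources:
  listeners:
  - name: tcp_ingress
    address:
      socket_address: { address: 127.0.0.1, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: tcp_tunnel
          cluster: server_side_envoy
          # Wrap the byte stream in an HTTP/2 CONNECT request instead of
          # relaying it byte-for-byte; hostname becomes the :authority.
          tunneling_config:
            hostname: tcp-service.example.com:9000
  clusters:
  - name: server_side_envoy
    connect_timeout: 5s
    type: STRICT_DNS
    # The upstream (the server-side Envoy) must speak HTTP/2 for CONNECT.
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}
    load_assignment:
      cluster_name: server_side_envoy
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: server-envoy.example.com, port_value: 80 }
```

With this in place, every connection accepted on port 10000 becomes one CONNECT stream on the shared upstream HTTP/2 connection.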
The specialized configuration is in the route configuration: you can use the connect_config field to declare that instead of relaying the HTTP/2 CONNECT method upstream, Envoy should extract the data from the CONNECT stream and relay the payload to the upstream. In my design I introduce another TCP proxy listener, which mirrors the traditional architecture; this TCP proxy network filter does the byte-to-byte relay. You may wonder why I'm introducing a duplicate listener. The idea may seem quite naive, but in a service mesh, and especially in Istio, we have already invested a lot at the server side, including the RBAC network filter, the access log, and the monitoring pipeline. That is a promise to the developers and the Istio users, so we don't want to mutate the structure so much that we break the existing setup. This slide introduces the necessary config and the components for this scenario, which I'll explain in the next slides.

So what can we gain from this complex structure? First, we get metadata exchange between the two Envoys. Because the two Envoys are connected with an HTTP/2 CONNECT stream, we can use H2 headers to encode our metadata. On this slide I demonstrate it with a "foo" client ID, which is my fake client node ID, and the server responds with whatever you like, in this example a server ID.
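As a sketch of the server side, here is roughly what the CONNECT-terminating tunnel listener might look like: a fragment of the listeners list, with field names following Envoy's v3 API as I understand it, and a hypothetical cluster name standing in for the chained TCP proxy listener.

```yaml
# Server-side Envoy: terminate CONNECT and hand the raw bytes onward.
- name: tunnel_listener
  address:
    socket_address: { address: 0.0.0.0, port_value: 80 }
  filter_chains:
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: tunnel
        http2_protocol_options:
          allow_connect: true        # accept the CONNECT method over HTTP/2
        upgrade_configs:
        - upgrade_type: CONNECT
        route_config:
          virtual_hosts:
          - name: tunnel
            domains: ["*"]
            routes:
            - match:
                connect_matcher: {}  # match CONNECT requests
              route:
                cluster: local_tcp_proxy_listener
                upgrade_configs:
                - upgrade_type: CONNECT
                  # Terminate CONNECT here: strip the HTTP framing and
                  # forward only the payload bytes to the upstream.
                  connect_config: {}
```

The connect_config stanza is the declaration described above: without it the CONNECT request would be proxied onward, with it the stream's payload is extracted and relayed as plain bytes.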
Beyond the traditional TCP proxy scenario, we can also use HTTP routing and HTTP filters, which are far more powerful than TCP proxy routing: we can match on the headers we provided in the metadata to decide which upstream endpoint at the server side we redirect to.

And we obtain a low-cost handshake. In the service mesh scenario the client Envoy and the server Envoy are usually connected with a TLS connection, and everybody knows that a TLS handshake is expensive in terms of latency and CPU cycles. What is even worse is that the traditional TCP proxy uses the TCP connection pool, but the connection itself is not reused: for each incoming connection, the connection pool establishes a new TCP connection to the upstream, introducing another handshake. With the HTTP/2 stack, two incoming TCP connections can be encapsulated in the same upstream TCP connection, with the DATA frames of the HTTP/2 streams as the boundaries between them. So you handshake once, and you use that TLS connection for many, many TCP connections between the client and the server.

You may wonder whether these extra layers are expensive. Yes, they are. Without optimization, many copies are introduced between the two listeners at the server-side Envoy: we basically create two extra connections, and the kernel maintains two socket buffers for them. The user-space connection copies into a socket buffer in the kernel, the kernel copies between the two socket buffers, and the second socket buffer copies back to the user-space connection. But remember that the two listeners sit in the same Envoy process, so I introduced the concepts of the internal client connection, the internal listener, and a specialized IO socket implementation to eliminate the two socket buffers along with the two extra connections. The data is not copied; instead we use Envoy's built-in buffers to move chunks of data along the pipeline inside the Envoy components, so very little data is copied.

As for real-world usage: this will be in Istio 1.8, which will be released in November 2020. And you don't have to adopt the whole stack in your system; you can use the internal connection with very little config change and gain a chained listener. Here the chain is TCP proxy to HTTP connection manager, but you can also chain TCP to TCP, HTTP/2 to HTTP/2, or other protocols. The code is still being upstreamed, and you will see it land along with my upstream PRs. On this slide I provide some links, including the HTTP RFC, the life of an Envoy request, and the building blocks in Envoy that support the full picture. Thank you for your time.
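For readers who want to try the chained-listener setup, here is a minimal sketch of the internal listener wiring as it later landed in upstream Envoy. This assumes the v3 internal-listener API; all names are hypothetical, and details may differ by release.

```yaml
# Enable in-process connections between listeners (no kernel sockets).
bootstrap_extensions:
- name: envoy.bootstrap.internal_listener
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.bootstrap.internal_listener.v3.InternalListener

static_resources:
  listeners:
  # The second, chained listener: a plain byte-to-byte TCP proxy,
  # reachable only from inside this Envoy process.
  - name: internal_tcp_proxy
    internal_listener: {}
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: internal_tcp
          cluster: tcp_backend
  clusters:
  # The CONNECT-terminating route targets this cluster; its endpoint is
  # the internal listener, so data moves between the two listeners via
  # Envoy buffers instead of two kernel socket buffers.
  - name: internal_hop
    connect_timeout: 1s
    type: STATIC
    load_assignment:
      cluster_name: internal_hop
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              envoy_internal_address:
                server_listener_name: internal_tcp_proxy
```

The key design point is the envoy_internal_address endpoint: the downstream listener dials the chained listener by name inside the process, which is what eliminates the extra socket-buffer copies described above.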