Good afternoon, and welcome to this Envoy presentation. I'm from the Envoy team, based in the US. Today I'm going to give a presentation about Envoy, and before I introduce Envoy itself, I'd like to first talk about its history. Development began in the first half of 2015, at Lyft, where Envoy was created. The project was open-sourced in 2016 and attracted a lot of attention from the community: Google, IBM, and other large companies all took notice. I was at Google at the time, and I also joined the development of Envoy. Envoy then entered the CNCF, and since then the community has contributed a great deal to it. Last year, Envoy became the third project to graduate from the CNCF. So that's the history.

Now I'd like to address why Envoy was developed. In microservice networking, multiple languages, frameworks, and protocols are often in use, along with load balancing and observability at different levels, so it is very difficult to get unified observability. Distributed-systems features such as retry, circuit breaking, and rate limiting are needed everywhere, and services also need to implement authentication and authorization. Doing all of that in libraries means maintaining a library per language, which is not practical unless you are a large multinational company that uses only two or three languages and can designate a team to manage those libraries and language frameworks; otherwise it is extremely difficult. Debugging microservices is also very hard: when a request fails, there is limited visibility into components such as hosted load balancers, databases, caches, and so on. Features like circuit breaking and retry end up with multiple partial implementations, so consistency is very difficult to achieve. And libraries are incredibly painful to upgrade.
For example, if a CVE comes out, you need to upgrade every application and rebuild it before redeploying each service. Envoy was developed to address these issues. The main objective of Envoy is to make the network transparent to applications, and when network or application problems do occur, it should be easy to determine the source of the problem. You can do this via Envoy: applications don't have to pay attention to these features, and Envoy helps you achieve all of these functions, such as authorization, retry, and so on.

Envoy runs as a sidecar. In this diagram, each rectangle is a pod. On each pod there is a sidecar proxy, and each application reaches the network by communicating through its sidecar proxy. Envoy does this transparently to the application, and then it can offer all of those functions on its behalf.

These were the considerations when Envoy was designed. As we can see in the diagram, Envoy has an out-of-process architecture. It has an L3/L4 TCP filter architecture as well as an HTTP L7 filter architecture, and it was designed HTTP/2 first. It has its own service discovery, and its routing management includes health checking and other load-balancing functions that other proxies don't support. It has best-in-class observability: all stats are emitted by Envoy, which means the format of the stats is unified. In another presentation at three o'clock, I will talk more about observability in service mesh.

Another advantage of Envoy is that it is a universal data plane. We call its API the xDS API, where DS stands for Discovery Service. The discovery services include LDS, the Listener Discovery Service; CDS, the Cluster Discovery Service; and also the Secret Discovery Service. There are seven or eight altogether, and the latest additions include runtime and health-check discovery services. The API is based on gRPC streaming: we use protobufs to define these APIs, and the resources are sent to Envoy over gRPC.
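As an illustration of what "universal data plane" means in practice, each discovery service corresponds to a protobuf resource type that a control plane streams to Envoy over gRPC. A minimal sketch in Python, using the v3 API type URLs (the exact names depend on which Envoy API version a deployment uses):

```python
# Map each xDS discovery service to the v3 resource type URL it serves.
# These are the standard Envoy v3 API type URLs; older deployments used v2 names.
XDS_TYPE_URLS = {
    "LDS": "type.googleapis.com/envoy.config.listener.v3.Listener",
    "RDS": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
    "CDS": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
    "EDS": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment",
    "SDS": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret",
}

def type_url_for(service: str) -> str:
    """Return the resource type URL a control plane puts in a DiscoveryResponse."""
    return XDS_TYPE_URLS[service]
```

A control plane fills the `type_url` field of each gRPC `DiscoveryResponse` with one of these strings, and Envoy applies the contained resources without restarting.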
The advantage of this is that when you change Envoy's configuration, you can avoid a restart: the control plane pushes the new configuration over the API. In a dynamic environment such as Kubernetes, you don't need to manually restart individual Envoys to make them pick up different configurations.

This is the basic architecture of Envoy. In every Envoy there are multiple workers; each worker is a thread, and each worker runs its own filter chains. Traffic arrives here, a connection is accepted, and it goes through the filters, for example the L3/L4 filters, then the HTTP connection manager, then through the router, and on to an upstream connection pool.

Envoy supports a lot of extensions. As mentioned before, there are L4 and L7 filters. The Envoy core is actually a small portion of the code; most of the code is L4/L7 filters. A filter can be a complete HTTP implementation at the L4 level, and the router is another kind of filter at the HTTP level. There are also extension points for access loggers, tracers, and health checkers, so all of this is extensible. For example, the filters and telemetry used by Istio are themselves extensions, and support for additional protocols can also be added through extensions.

Since we open-sourced Envoy in 2016, a lot of companies have adopted it. This slide shows just a small set of the adopters; actually, many companies in China have adopted Envoy as well. Now I would like to invite Mr. Tsing, who will share NetEase's experience using Envoy.

First of all, I want to thank the previous speaker for the introduction to Envoy, and Leo Zhang for his support in our early stage. I would like to talk about how we use Envoy at NetEase. Because time is limited, I will go straight to our use case: first, the service mesh and microservice framework we have internally, and the work we did when adopting Envoy.
Second, we have some internal demands we want to meet through Envoy, and I will cover how we extended Envoy for that purpose.

Internally, we have a department responsible for e-commerce. They already had a service-mesh-like structure, but it was different from the service mesh as we know it now: they were using NGINX, with Consul for service discovery. NGINX, as you know, is simple to extend with plugins, so they connected to Consul through an NGINX plugin to achieve service discovery, which kept service discovery out of the business code. At that time they did not have the service mesh concept; they were simply being practical. They had no clear demands for things like a control plane, and no configuration interface. Based on that status, we hoped to combine open source with our internal systems, so we came up with a plan to shift from NGINX to the architecture on the slide. At the top is the data plane: each service has a proxy, and below is the control plane, for which we use Istio. We adopted Istio 1.1.5, and in our tests we found some problems. For example, we do not use the Mixer component, so we cut it out; we mainly use the component called Pilot. Previously, the business side ran on our cloud framework, and for service discovery we now need to connect to Kubernetes. In terms of data-plane traffic, previously requests to a service went through a local NGINX process, and because we wanted to shift from the old plan to the new plan without much change, on the Envoy side we kept the same convention in the configuration: the URL carries a service name, and we first need to resolve the service cluster from it.
If you are familiar with NGINX, you know that with rewrite rules you can extract the service name and route on it, but Envoy does not have this function out of the box. So we extended an existing filter: we made some minor changes to the HTTP connection manager so that we can rewrite the host and path.

The data-plane change alone is not enough; we also need cooperation at the control-plane level, that is, from Istio's Pilot. We need an RDS configuration for this scheme. The e-commerce site mainly uses the HTTP protocol, and since Envoy supports listeners, a single listener on one port is enough for us. This is the specific access route: a request first goes to the local listener on 127.0.0.1, the URL path carries the service name, we resolve the service cluster from it, and then through RDS the request is forwarded to the cluster. The cluster is a Kubernetes workload, corresponding to its endpoints.

Next I will talk about an internal demand. In e-commerce, iterative development produces a lot of small environments, because our service has been divided into around 300 microservices. If developers change just a few services for a test, they do not want to deploy all 300 services in their testing environment. So we have multi-environment governance internally: in every environment, developers deploy only the services they revised, and for the rest they use the original shared services. Every developer's or QA engineer's environment is different, so there is the problem of sharing the public environment. When we connect to the mesh, with Envoy as the sidecar at the proxy level, how do we solve this problem? First, our scheme is that through the control plane we can do directed routing.
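The URL-based routing described earlier, extracting the service name from the path and forwarding the remainder, can be sketched like this. It is a minimal Python illustration assuming a `/<service>/<rest>` URL layout; the real change was made inside Envoy's HTTP connection manager in C++:

```python
def split_service_route(path: str) -> tuple:
    """Split '/<service>/<rest>' into the target cluster name and the rewritten path.

    Mirrors the behaviour of an NGINX `rewrite` rule that strips the service
    prefix before forwarding upstream.
    """
    parts = path.lstrip("/").split("/", 1)
    service = parts[0]
    rewritten = "/" + parts[1] if len(parts) > 1 else "/"
    return service, rewritten
```

For example, `split_service_route("/order/api/v1/items")` yields the cluster name `"order"` and the upstream path `"/api/v1/items"`; RDS then maps that cluster name to a Kubernetes workload.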
At the deployment level we have a source label, and through this label we can point to a particular upstream cluster. That scheme is suitable for the previous scenario, but it has some gaps: some services do not have a corresponding instance of the same color, and when a color has no corresponding colored service, the request needs to be downgraded to the base version. So it was not really feasible, and we came up with another plan, which you can see on the slide.

First of all, this scheme has to support both L4 and L7, meaning both the HTTP and TCP protocols. For coloring, at the ingress we can add a header or cookie that marks the color of a specific request. On the Envoy side there are inbound and outbound listeners, and they behave differently. Inbound, at the entry point, reads the color from the cookie or header and, through RDS, finds the corresponding cluster. Then, when Envoy forwards to the upstream Envoy, we do not use an HTTP cookie; we use the HAProxy PROXY protocol, with some extension of our own. The picture shows the principle: when the TCP connection is established, we send a PROXY protocol packet carrying a parameter that states the color attribute. So between Envoys, over plain TCP, the color attribute can be identified on the receiving side.

However, on the outbound side there are some problems with traffic transit that this cannot solve; I will explain why shortly. Here is an example with two services: the first service calls the second to get person information. On the slide you can see two different colors besides white: white means the public version, while red and green correspond to developers' versions.
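The PROXY-protocol trick can be sketched as follows. PROXY protocol v2 reserves TLV types 0xE0-0xEF for custom use, so a "color" value can ride in one of them ahead of the application bytes on a new TCP connection. This is an illustrative Python sketch, not NetEase's actual extension; the choice of TLV type 0xE0 for the color is an assumption, while the signature and IPv4/TCP field layout follow the public PROXY protocol v2 specification:

```python
import socket
import struct

# PROXY protocol v2 signature (12 fixed bytes), per the HAProxy spec.
PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"
# TLV types 0xE0-0xEF are reserved for custom use; 0xE0 for "color" is our assumption.
PP2_TYPE_COLOR = 0xE0

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    # A TLV is a 1-byte type, a 2-byte big-endian length, then the value.
    return struct.pack("!BH", tlv_type, len(value)) + value

def proxy_v2_header(src_ip: str, src_port: int,
                    dst_ip: str, dst_port: int, color: str) -> bytes:
    """Build a PROXY protocol v2 header (IPv4/TCP) carrying a color TLV."""
    addrs = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
             + struct.pack("!HH", src_port, dst_port))
    body = addrs + encode_tlv(PP2_TYPE_COLOR, color.encode())
    # 0x21 = version 2 + PROXY command; 0x11 = AF_INET over STREAM (TCP).
    return PP2_SIGNATURE + bytes([0x21, 0x11]) + struct.pack("!H", len(body)) + body
```

The sending Envoy writes this header first on the upstream connection; the receiving side parses the TLV to recover the color, which works for TCP as well as HTTP traffic.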
A green request wants to get its information from the green instance, and a red request from the red one; when a colored instance does not exist, the request falls back to the white public version. But there are problems on the outbound side that this does not solve: outbound has no context other than the public one. The color information arrives on the inbound side, so it has to be propagated through the service instance to the outbound side; otherwise outbound cannot know the right color. And when a call is not triggered by an inbound request, for example an asynchronous or scheduled call, the color context also has to be supplied some other way. That is the fundamental limitation.

To summarize, our scenario is quite particular because of the multi-environment setup, and it relies on the rewrite capability supported by Envoy; most of our requirements could be implemented through these extension points. So this is the content of our presentation. Thank you for listening. If you would like to join any project related to Envoy, you are welcome to talk to me; the project is also recruiting, and we appreciate your attention. Thank you very much. We can take some questions now.

On why Envoy performs well: it was built to achieve high throughput and low latency, the codebase has been exercised in production for a long time, and it also relies on long-proven libraries such as nghttp2 for HTTP/2. I want to ask about the performance of the rewrite extension you added. The performance of the rewrite is not a problem, but in general it is a trade-off between performance and other functions.
You cannot have as many features as possible and zero overhead at the same time; there is a trade-off, and the same applies to supporting a wide range of protocols. On protocol support: there is a team at Alibaba working on this, and we are open to cooperating with each other. If you have more questions, please send me an email; I can reach the people at Alibaba, and there may be opportunities in the future. Any other questions?

Can you go deeper on matching: on a single port, can you serve different kinds of traffic? You need to inspect the connection to do the matching before you know what you are matching against. If different protocols arrive on one port, the approach is to detect the protocol first and then hand the connection to the corresponding handler. We have this today for TLS: a listener filter can look at the connection, see whether it is TLS or not, and separate the two onto different paths. Can the extension be done through Lua? Of course, you can do the extension through Lua; I don't know how difficult your case is, but if Lua fits, you can also implement the extension in Lua. What's the next step? We expect to make some progress in the next six months.
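The TLS detection mentioned in the last answer, peeking at the first bytes of a connection before deciding how to handle it, as Envoy's TLS Inspector listener filter does, can be illustrated roughly like this (a simplified Python sketch, not Envoy's actual implementation):

```python
def sniff_protocol(first_bytes: bytes) -> str:
    """Classify a connection from its first bytes, TLS-Inspector style.

    A TLS connection starts with a handshake record: content type 0x16
    followed by the record-layer version major byte 0x03.
    """
    if len(first_bytes) >= 2 and first_bytes[0] == 0x16 and first_bytes[1] == 0x03:
        return "tls"
    # Plaintext HTTP/1.x requests start with a method token such as GET or POST.
    if first_bytes[:4] in (b"GET ", b"POST", b"PUT ", b"HEAD"):
        return "http"
    return "unknown"
```

Based on this classification, a proxy can dispatch the connection to a TLS-terminating chain or a plaintext chain, which is how one port can serve different kinds of traffic.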