Hello, everyone. My name is Wenbo. I am one of the software engineers on the gRPC team, and I work at Google Cloud. Together with Eury, we prepared this talk on gRPC-Web. Eury is currently the main maintainer of the gRPC-Web project, and I started the project quite a few years ago, back in 2018. I'd also like to thank the CNCF organizers for giving us this opportunity to post this recording; due to travel reasons, I am not able to give the talk in person.

Today I'm going to cover three parts. First, I'd like to give you an introduction to gRPC-Web, specifically the history and the design goals we had at the time. Then I'm going to talk about the lessons. We launched the project in October 2018, so it's been almost five years. Over these five years we learned a lot, and we also have more data that we can share with you, so I'd like to share those lessons and experiences. The last part will cover the roadmap, specifically how we plan to address the gaps in a few areas we've heard about from users and developers.

Let me start with a broad overview of what gRPC is and what is special about doing something like gRPC on the web platform. On the left side you can see a diagram; this is a very high-level overview of what gRPC provides. The client sends a request over the HTTP/2 protocol, and the server generates a response. In addition, there is a Protocol Buffers-based service definition, which provides the contract for the RPC: the request message, the response message, and the actual service on the server side. Now look at the right side of the diagram. This is what things really look like when you try to do an RPC from a web client, for example from a browser. There are two things that are unique between the two diagrams at a very high level.
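As a concrete illustration of such a contract, a service definition might look like the following. This is my own sketch, not something from the talk; `EchoService` and its messages are hypothetical names:

```protobuf
syntax = "proto3";

package example;

// Hypothetical request/response contract for the RPC.
message EchoRequest {
  string message = 1;
}

message EchoResponse {
  string message = 1;
}

// The service definition is the contract shared by client and server.
service EchoService {
  // A unary RPC: one request, one response.
  rpc Echo(EchoRequest) returns (EchoResponse);
  // A server-streaming RPC: one request, a stream of responses.
  rpc ServerStreamingEcho(EchoRequest) returns (stream EchoResponse);
}
```

From this one definition, code generators produce both the server-side service skeleton and the client stubs, including the gRPC-Web JavaScript client.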
For a web client, the actual protocol and the communication are managed and controlled by the so-called web platform, typically called the user agent, instead of by the RPC runtime. Second, in almost all production deployments, the web client does not talk to the server directly. There are always some kind of proxies between the client and the server. On the client side we refer to these as forward proxies, but there are also proxies on the server side that do load balancing, typically referred to as reverse proxies. And there are different types of proxies: some provide firewall functionality, some act as gateways that translate between different protocols. That really makes the web platform, and making RPCs work from a browser, very unique compared to typical server-side client-server RPC communication.

Because of the uniqueness of the web platform, we established a few design goals when we started this project, and I want to give you an overview of that history. First, we decided to keep the gRPC-Web protocol, a dedicated protocol specifically designed for gRPC web clients, matching as closely as possible the so-called core gRPC protocol, which is based on HTTP/2. The hope is that over time the difference will become smaller and smaller, and maybe one day a browser client can talk to a server directly without any translation. That's one of the motivations, but we also tried to keep our stack as simple as possible: instead of having divergent protocols, we keep the core semantics of the protocols very close to each other. On top of that, we also understand that the web is different and unique. There are certain concepts, such as CORS, that are unique to the web.
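To make "as close as possible to the core protocol" concrete: like core gRPC, the gRPC-Web protocol length-prefixes each message on the wire with a one-byte flag (the high bit marks the trailers frame) and a four-byte big-endian length. A minimal sketch of that framing in JavaScript, written from the published protocol spec rather than taken from the talk:

```javascript
// Sketch of gRPC-Web message framing, based on the published gRPC-Web
// protocol spec: each frame is a 1-byte flag (high bit set = trailers
// frame), a 4-byte big-endian payload length, then the payload bytes.
function encodeFrame(flags, payload) {
  const frame = new Uint8Array(5 + payload.length);
  frame[0] = flags;
  const len = payload.length;
  frame[1] = (len >>> 24) & 0xff;
  frame[2] = (len >>> 16) & 0xff;
  frame[3] = (len >>> 8) & 0xff;
  frame[4] = len & 0xff;
  frame.set(payload, 5);
  return frame;
}

function decodeFrame(frame) {
  const flags = frame[0];
  const len =
    ((frame[1] << 24) | (frame[2] << 16) | (frame[3] << 8) | frame[4]) >>> 0;
  return {
    flags,
    trailers: (flags & 0x80) !== 0, // trailers travel in-band as a final frame
    payload: frame.subarray(5, 5 + len),
  };
}

// A data frame carrying three payload bytes:
const frame = encodeFrame(0x00, new Uint8Array([1, 2, 3]));
console.log(frame.length); // 8 (5-byte header + 3-byte payload)
```

Carrying the trailers in-band as a specially flagged frame is one of the key adaptations: browsers cannot read real HTTP/2 trailers, so the protocol smuggles them into the response body.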
CORS is a security model that lets a web app talk to a different origin than the one the web page was downloaded from. Those concerns are very specific to web clients; they don't really apply to other types of RPC communication or environments.

The second design goal we had at the time was to make sure that the solution we created would allow tight integration with the web ecosystem. Specifically, we envisioned gRPC-Web being used to build real web applications that interact with gRPC-based microservices, as opposed to just providing browser-based development or debugging tools. In the latter case, you can imagine the browser talking to the server directly, and you have full control over which browser versions you want to use and in what environment you run the client and the server, which is not the case when you launch internet-facing web applications.

The third design goal was that, instead of building gRPC-Web support into every language's gRPC implementation, we rely on Envoy to translate the gRPC-Web protocol to gRPC. In some cases the same functionality can be provided by language-specific native web frameworks; I'll talk about this a little more on a later slide.

Another design goal was to keep the whole solution as simple as possible. Specifically, we made the decision to support only server streaming. There were two main reasons. One is that any other streaming mode would require support for protocols like WebSockets, which would significantly increase the complexity, and especially the code size, of the libraries. The second is that when we looked at the different use cases for streaming, the majority of streaming use cases for a web client communicating with services over the internet really are server streaming.
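For the Envoy-based translation, the setup typically amounts to enabling Envoy's built-in `grpc_web` HTTP filter (together with CORS handling) in front of the gRPC backend. Below is an illustrative fragment of the filter chain, using Envoy's documented filter names; the surrounding listener, route, and cluster configuration are omitted:

```yaml
# Illustrative Envoy HTTP filter chain for translating gRPC-Web to gRPC.
http_filters:
  # Translates gRPC-Web requests from browsers into regular gRPC.
  - name: envoy.filters.http.grpc_web
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
  # Handles CORS preflight and response headers for cross-origin browser calls.
  - name: envoy.filters.http.cors
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
  # Routes the translated request to the gRPC backend cluster.
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

The backend server stays a plain gRPC server; all gRPC-Web-specific handling lives in the proxy.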
Also, server streaming is very simple and very close to the core RPC semantics; it's largely stateless on the server side. The other kinds of streaming, like client streaming or bidirectional streaming, require a lot more work beyond what the RPC layer provides. Even if we created a solution that supported those streaming modes, you as an application developer would still need to understand how to deal with reliability and scalability, and those concerns are unique to the stateful nature of streaming solutions.

The next goal was that we wanted this to work everywhere. What that means is that the gRPC-Web client should work on very old browsers, like IE at the time; it needs to work from different networking environments, meaning different HTTP versions, since in some cases firewalls may block certain versions; and it should work from so-called cross-platform web clients, for example React Native, which is also used on mobile clients but which we otherwise consider a web platform. We wanted this solution to be simple for applications, so that they don't have to worry about when it will or will not work. That was a very important goal for us.

That's all the background I wanted to cover today. Next, I'll talk about what we learned over the past four or five years since we launched this project. First, at the time we made a decision that this solution has to work both for Google's own internal applications and for external applications, meaning applications that you build using our GitHub open-source releases.
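Going back to server streaming for a moment: from the application's point of view, a server-streaming call with the gRPC-Web JavaScript client is an event-emitter-style stream. The sketch below shows that API shape; `serverStreamingEcho` and the client object are a hypothetical generated stub, and a tiny in-file fake stands in for the real client so the snippet runs without a backend:

```javascript
// Minimal stand-in for the stream object a grpc-web generated client
// returns: handlers are registered with .on(...) and fired later.
class FakeStream {
  constructor() { this.handlers = {}; }
  on(event, cb) {
    (this.handlers[event] = this.handlers[event] || []).push(cb);
    return this;
  }
  emit(event, arg) { (this.handlers[event] || []).forEach(cb => cb(arg)); }
}

// Hypothetical generated client; a real one would be produced by protoc
// from the service definition and would speak gRPC-Web over HTTP.
const client = {
  serverStreamingEcho(request /*, metadata */) {
    const stream = new FakeStream();
    stream.deliver = () => { // stub-only: stands in for network delivery
      stream.emit('data', { message: request.message + ' #1' });
      stream.emit('data', { message: request.message + ' #2' });
      stream.emit('end');
    };
    return stream;
  },
};

// This part mirrors real grpc-web usage: one request in, many messages out.
const received = [];
const stream = client.serverStreamingEcho({ message: 'hello' });
stream.on('data', (resp) => received.push(resp.message));
stream.on('end', () => console.log('stream ended, got', received.length, 'messages'));
stream.deliver(); // stub-only trigger; a real stream fires as responses arrive
```

Note that the application only ever consumes messages; it never writes to the stream, which is exactly the server-streaming-only shape described above.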
What we noticed is that this is a very significant benefit for a lot of our users, because they trust that this solution also works for Google, and they're more confident using it in production as part of their production stack. If you look at the diagram, we don't have a lot of signals to actually measure the adoption of gRPC-Web; one signal that is very easy to get is the number of GitHub stars. The red line is Java, the blue one is web, and the yellow one is Node. Overall, the adoption of gRPC-Web has been pretty steady and keeps growing.

However, this approach does have its own challenges for us. Specifically, we have to make trade-offs between taking new features and exposing new APIs versus code-size increases. For Google's own internal applications, for example Chat, Gmail, or Search, code size is very important; it's usually the most critical performance metric. For external developers, you have a very wide spectrum of users, and some may not care about code size as much.

The other part of the overhead is keeping the code repositories in sync. Any kind of bidirectional code sync between an external repository and an internal repository, like the one used by Google's own internal applications, is very complicated just process-wise: how to deal with rollbacks, how to detect issues, how to merge and patch, and how to manage releases. That does present a lot of challenges for us, and as a result it makes us very careful about releasing any new features. We want to make sure the library is as stable as possible and works reliably both for Google's own applications and for external web applications.
The second lesson we learned is that gRPC-Web relies on two key Google-only technologies, and that caused a lot of issues for our users that we didn't quite expect at the beginning. The first is the Google Closure Library and Compiler, which turn out to be very Google-only technologies today. Like any web framework, Closure is very opinionated in terms of style, conventions, and trade-offs, and this doesn't always work for other developers. We will stick with the Closure Library and Compiler because we want gRPC-Web to keep working for Google applications, but we try to hide it as, more or less, an internal implementation detail, hopefully reducing the user friction caused by the dependency on Closure.

The second issue is more significant, which is protobuf itself. gRPC-Web relies on and exposes Google's protobuf for JavaScript, and its open-source version, unfortunately, does not provide the best user experience, nor the best performance. We don't have an immediate solution; I'll talk about this briefly when we get to the roadmap. Overall, this is the kind of lesson you learn when you create a solution meant to be used both by Google applications and by open-source external applications: you end up with these trade-offs, and it's one of those cases where one size doesn't fit all.

The last part of the lessons is that, looking back at the design goals we made at the time, we feel we made some reasonable decisions. Specifically, on the server side, as Envoy became more and more popular, and especially easier to deploy in so-called cloud-native or Kubernetes environments, our original concern that requiring a proxy would add deployment overhead doesn't seem to be a real user-experience concern.
What we heard from users is that deploying Envoy is very easy and doesn't really cause problems for most of them. The second point is the in-process translation between gRPC-Web and the RPC handlers. At the time, we made the decision not to implement gRPC-Web directly in the different languages. However, some languages implement gRPC on top of their native web platforms or web frameworks; in that case, gRPC-Web and gRPC become two parallel stacks on top of the language's native web framework, and a proxy becomes unnecessary. That has been the case for languages like .NET and Swift, and it matches our vision at the time: for those languages, gRPC-Web and gRPC are implemented seamlessly against the web framework and invoke the RPC handlers seamlessly, so Envoy is not needed.

The other goal we had was to keep gRPC-Web a web-only solution. It is not a general fallback for gRPC. We don't want people to use gRPC-Web because gRPC, for example, does not work on a particular system or in some environment, or because they don't like how we implemented the APIs, or anything like that. For those cases, we envisioned that the ecosystem would step up and create solutions to bridge the gap, and that has been happening. There are popular gateways, like grpc-gateway, which provide a very good solution: you can write REST clients, and the gateway interacts with the gRPC server transparently.

Overall, we try to make sure that gRPC-Web, as the official solution, focuses on the core values, and we make very diligent trade-offs between complexity, code size, and features. One example: inside Google, every one-kilobyte increase in code size generates an alert, and we have to justify why we're doing it and what value that one kilobyte of code provides. That tells you we really try very hard to make the right trade-offs.
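As an illustration of the grpc-gateway approach: you annotate your proto methods with HTTP bindings, and the gateway exposes a REST endpoint that it transcodes to gRPC. The service and path below are hypothetical, but the `google.api.http` annotation is the real mechanism grpc-gateway uses:

```protobuf
syntax = "proto3";

package example;

import "google/api/annotations.proto";

message EchoRequest {
  string message = 1;
}

message EchoResponse {
  string message = 1;
}

service EchoService {
  rpc Echo(EchoRequest) returns (EchoResponse) {
    // grpc-gateway serves GET /v1/echo/{message} and transcodes it
    // to this gRPC method; REST clients never speak gRPC directly.
    option (google.api.http) = {
      get: "/v1/echo/{message}"
    };
  }
}
```

This keeps the gRPC server as the single source of truth while letting plain HTTP/JSON clients participate, which is exactly the gap-bridging role described above.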
Next, I'm going to briefly talk about the roadmap. One of the most requested features, as you can imagine, is streaming support. At this point we have decided that gRPC-Web will only provide server streaming over HTTP. The reason is that we worked with the Chrome team and did a so-called origin trial: an experiment we can enable from Chrome against live traffic, in this case Gmail traffic. Unfortunately, the experiment failed to conclude that it is safe to enable client-side request streaming over HTTP/1.1 from Chrome. What that means is that if we supported request streaming using the fetch streaming APIs provided by the web platform, then whenever the underlying protocol is HTTP/1.1 the RPC library would have to fall back and disable streaming, and that would greatly increase overhead. Also, HTTP/2 in browsers only works with HTTPS, so if we want plain-text HTTP, there's no HTTP/2 either. For all these reasons, we decided not to pursue request streaming support with the fetch streams API.

We do plan to support WebTransport over QUIC at some point, specifically to support full-duplex streaming; this will also cover and enable request streaming. Because WebTransport is QUIC-based, it provides latency improvements, and WebTransport also provides some default fallback. So we believe there's enough value to justify supporting this additional transport alongside HTTP. We're looking into that, and we'll keep you updated as we make more progress.

The next item is protobuf. We are working with our internal protobuf team as well as the external protobuf JavaScript projects to figure out a roadmap, and also migration paths, depending on how we decide to integrate protobuf JavaScript with gRPC-Web.
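Going back to WebTransport for a moment, here is a sketch of the raw browser API a future full-duplex transport could sit on top of. This is not a gRPC-Web API; it's the platform primitive, available in Chromium-based browsers over HTTP/3/QUIC, and the URL is a placeholder:

```javascript
// Sketch: opening a bidirectional WebTransport stream in the browser.
// A bidirectional stream gives full-duplex reading and writing, which is
// what request streaming and bidi RPCs would need. Browser-only API;
// this function is illustrative and is not invoked here.
async function openBidiStream(url) {
  const transport = new WebTransport(url); // e.g. 'https://example.com:4433/grpc'
  await transport.ready;                   // QUIC connection established

  const stream = await transport.createBidirectionalStream();
  const writer = stream.writable.getWriter();
  const reader = stream.readable.getReader();

  // An RPC transport would write framed request messages here...
  await writer.write(new Uint8Array([0 /* frame bytes would go here */]));
  // ...and read framed response messages as they arrive.
  const { value, done } = await reader.read();
  return { transport, value, done };
}
```

The symmetry of `readable` and `writable` on one stream is what removes the HTTP/1.1 upload-streaming limitation discussed above.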
In addition to protobuf JavaScript messages, we also think it may be beneficial to support JSON messages, especially if we don't have a clear roadmap or migration path for protobuf JavaScript in a short enough timeframe to provide the best user experience. In that case, we may decide to decouple gRPC-Web from protobuf messages and allow applications to just use standard JSON messages, which is also part of the protobuf spec.

The second thing related to protobuf is that we are trying to create more alignment between Node and web. A little bit of history about gRPC for JavaScript, the so-called gRPC-JS, which is designed for Node: Google internally does not use Node as a server-side language, so gRPC for Node actually uses a different protobuf library, and its APIs are different. We'd like to get to a position where gRPC Node users and gRPC-Web users can have the same protobuf experience if they choose to. What that means is we may provide different options for gRPC-Web, so gRPC-Web users may decide to use their own custom version of protobuf, instead of us mandating a particular protobuf JavaScript library or implementation. We're looking into that together with the gRPC Node team. At the API-surface level, gRPC-Web and gRPC Node were always designed to be aligned; for example, the gRPC-Web streams API actually copies the Node streams API spec, instead of adopting the so-called WHATWG streams defined by the web platform.

Lastly, I want to quickly touch on the ecosystem. There are two parts. The first is on the server side. As gRPC-Web becomes more stable and mature, users start looking for other functionality, things like various security features. We don't really have standard solutions at this point that we'd like to release with gRPC-Web, mostly because our internal versions are not quite suitable for external users, or vice versa.
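On the JSON option: the protobuf spec defines a canonical proto3 JSON mapping, so a JSON-speaking client can stay compatible with proto-defined services. One small, well-defined piece of that mapping is that snake_case proto field names become lowerCamelCase JSON keys. A tiny sketch of that rule (my illustration, not gRPC-Web code):

```javascript
// proto3 JSON mapping: a field declared as `string user_id = 1;` in a
// .proto file appears as "userId" in the JSON form of the message.
function toJsonName(protoFieldName) {
  // Replace each `_x` with uppercase `X`, per the proto3 JSON spec.
  return protoFieldName.replace(/_([a-z])/g, (_, c) => c.toUpperCase());
}

console.log(toJsonName('user_id'));        // "userId"
console.log(toJsonName('response_count')); // "responseCount"
```

Because the mapping is specified, a JSON-based gRPC-Web mode would not need to invent a wire format; it could reuse this standard encoding.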
What we think we can do to improve the overall ecosystem is to document some specs or publish some guidelines, so that features implemented in Envoy versus features implemented in, let's say, .NET have some consistency, and clients can interact with those different web frameworks in a more portable way. That's something we're looking into.

The second part is the client-side web frameworks. Over the years, we've been working with project owners from frameworks such as Angular and React Native to make sure that gRPC-Web works from those frameworks more seamlessly. I believe today you can use gRPC-Web from all of those frameworks, but we're looking to make the overall experience more aligned and more seamless for developers.

The other thing is trying to make the gRPC-Web implementations themselves more unified. One area we've been looking at is gRPC-Web on Dart. Today, gRPC-Web on Dart is implemented in Dart. Our goal is to have the Dart client wrap the JavaScript client that we designed and implemented for standard web clients. The reason is that the gRPC-Web protocol, transports, and feature sets will evolve; for example, when we start to support WebTransport, we'd like the Dart client to pick up those features automatically, as opposed to re-implementing everything. This will also make things like patching and releases easier. So that's something we're also looking into.

This concludes the main part of the roadmap discussion I'd like to share with you. If you want to reach us, just go to github.com/grpc/grpc-web and post your feedback and questions on the GitHub repository. We will try to get back to you as soon as possible. That's all I have today. Thank you so much.